Sonic Communication
In this project, we created a one-way communication system using a Raspberry Pi, a microphone, and a speaker. The speaker, acting as the transmitter in the system, was able to communicate arbitrary, fixed-length bit string frames to the Raspberry Pi by playing a series of tones, which the microphone recorded and the Pi processed to recover the bits. Bits were encoded using multiple frequency shift keying (MFSK), which assigns different tone frequencies to different combinations of bits, allowing for multiple bits to be sent in each transmitted symbol. By implementing a cyclic redundancy check, we were able to create a system that was fairly robust to noise and echoes; given the constraints of the frequency band that we were working with (about 350 Hz to 750 Hz), we did not expect to correctly receive every transmission, but we were able to at least detect whether a transmission was received without error.
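As an illustration of the scheme described above (the symbol alphabet, tone frequencies, frame length, and CRC choice here are all assumptions, not the project's actual parameters), a transmitter might map each 2-bit symbol to one of four tones inside the roughly 350 Hz to 750 Hz band and append a CRC so the receiver can detect corrupted frames:

    import numpy as np
    import zlib

    FS = 44100           # audio sample rate (Hz)
    SYMBOL_SEC = 0.2     # duration of each tone
    # Assumed mapping: four tones in the usable band, 2 bits per symbol.
    TONES = {(0, 0): 400.0, (0, 1): 500.0, (1, 0): 600.0, (1, 1): 700.0}

    def frame_to_audio(bits):
        """Append a 32-bit CRC to the payload, then emit one tone per symbol."""
        crc = zlib.crc32(bytes(bits))
        bits = bits + [(crc >> i) & 1 for i in range(31, -1, -1)]
        t = np.arange(int(FS * SYMBOL_SEC)) / FS
        return np.concatenate([np.sin(2 * np.pi * TONES[(bits[i], bits[i + 1])] * t)
                               for i in range(0, len(bits), 2)])

    audio = frame_to_audio([1, 0, 1, 1, 0, 0, 1, 0])  # one 8-bit example frame

The receiver recomputes the CRC over the decoded payload; a mismatch flags the frame as corrupted rather than silently accepting bad bits.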
TABER RPi Base Station Upgrade
The objective of this project was to expand on the TABER Group’s existing RPi base station and add new features to it. The original base station has several functionalities: an attached radio antenna detects bird tags near the station, an LED light blinks whenever a bird tag is detected, and the collected data is loaded onto a USB drive every 15 minutes. However, the setup, while good for autonomous running, was not conducive to human interaction. To alleviate this, we added a piTFT screen to display recent detected tags, GPIO buttons for base station shutdown and reboot, and additional functionalities such as on-screen touch controls and an audio signal for tag detection.
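A shutdown button of the kind described is typically a few lines with the RPi.GPIO event API; the pin number and debounce time below are assumptions, not the project's actual wiring:

    import os
    import RPi.GPIO as GPIO

    SHUTDOWN_PIN = 17  # assumed BCM pin; actual wiring may differ

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SHUTDOWN_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    def on_shutdown(channel):
        # Cleanly power down the base station when the button is pressed.
        os.system("sudo shutdown -h now")

    # Debounce so one press fires only one event.
    GPIO.add_event_detect(SHUTDOWN_PIN, GPIO.FALLING,
                          callback=on_shutdown, bouncetime=300)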
The Eye
The purpose of this project is to design a device that helps blind people understand the world in front of them. Building on rapidly maturing technologies such as face recognition and optical character recognition (OCR), we designed and implemented a Raspberry Pi based smart camera, "The Eye". Users simply take a photo with the device, and it gives a general description of the photo, recognizes any faces, and extracts text from the image. All of this information is then output as audio.
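A minimal sketch of the text half of this pipeline, using Tesseract for OCR and an offline text-to-speech engine as stand-ins (the project's actual recognition and speech services are not specified here):

    import pytesseract   # stand-in OCR engine (Tesseract)
    import pyttsx3       # stand-in offline text-to-speech
    from PIL import Image

    def read_photo_aloud(path):
        # Extract any text visible in the photo...
        text = pytesseract.image_to_string(Image.open(path))
        # ...and speak it for the user.
        engine = pyttsx3.init()
        engine.say(text if text.strip() else "No text found in the photo.")
        engine.runAndWait()

    read_photo_aloud("photo.jpg")  # hypothetical capture from the Pi camera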
Guitar Hero
Our project is a version of Guitar Hero that runs on the Raspberry Pi. Players play on an actual Guitar Hero guitar as well as a drum set we created. Single-player mode is played on the guitar; multiplayer mode is played between two people, one on the guitar and one on the drums. While playing the guitar, a player must hit the strum bar at the same time they press the button in order to earn points. Players can also select from a variety of songs. We used Pygame to display falling notes corresponding to the buttons on the guitar or drums, keep track of the score, provide audio feedback, and deduct points when the wrong notes are pressed.
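A stripped-down sketch of the falling-note loop in Pygame (lane positions, speed, and the hit window are made-up values; a real song would schedule notes from a chart file):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((320, 240))
    clock = pygame.time.Clock()
    notes = [[60, 0], [160, -80], [260, -160]]  # [lane x, current y] per note
    HIT_Y, SPEED, score = 220, 3, 0

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                # Points if any note is inside the hit window, penalty otherwise.
                hit = any(abs(y - HIT_Y) < 15 for _, y in notes)
                score += 10 if hit else -5
        screen.fill((0, 0, 0))
        for note in notes:
            note[1] += SPEED  # notes fall toward the hit line
            pygame.draw.circle(screen, (0, 255, 0), (note[0], int(note[1])), 10)
        pygame.display.flip()
        clock.tick(60)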
Raspberry Pi Music Assistant
The Raspberry Pi Music Assistant is a device that uses a Raspberry Pi to implement three useful musical practice tools: a metronome, a tuner, and a recorder, each implemented entirely in software. The RPi Music Assistant is controlled through its touch screen interface and uses a microphone and speakers for audio input and output.
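As an example of the kind of tool involved, a software metronome reduces to a timing loop; scheduling each tick against a fixed start time (rather than sleeping a fixed interval) prevents drift. This is a sketch, not the project's code, with a console tick standing in for the audio click:

    import time

    def metronome(bpm, beats=8):
        interval = 60.0 / bpm
        start = time.monotonic()
        for beat in range(beats):
            print("tick", beat + 1)  # the real device plays a click here
            # Sleep until the next scheduled beat to avoid accumulating drift.
            time.sleep(max(0.0, start + (beat + 1) * interval - time.monotonic()))

    metronome(120)  # 120 bpm example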
Snowball: An RPi-Controlled Dancing Robot
This project uses a Raspberry Pi to control a robot that dances in response to music played in its environment. The audio is captured through a microphone and processed in real time to detect the tempo (in beats per minute) and the timing of each beat of the song. These are then used to trigger pre-programmed dance moves on the beat of the music. For entertainment value, the robot performs robotic imitations of popular dance moves.
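For illustration, an offline version of the beat analysis can be done with librosa (a stand-in; the robot necessarily performs a streaming variant on live microphone input):

    import librosa

    y, sr = librosa.load("song.wav")  # hypothetical recording of the audio
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    print(f"{float(tempo):.0f} bpm; first beats at {beat_times[:4]} s")
    # Each entry in beat_times is a cue to trigger the next dance move.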
Responsive Flood Plain Monitoring
The goal of this project is to embed information about river conditions within the river itself and communicate it to the community at key points of access and recreation along the river. The embedded system connects a sensor system in the river to a remote display system on the flood wall; in this project, the system consists mainly of water level monitoring.
Autonomous Object Tracking Turret
For our ECE 5725 Design with Embedded Operating Systems final project, we created an autonomous object tracking turret. The turret locates blue objects in real time and autonomously tracks them with two degrees of freedom (rotation and tilt). The system can also be remotely controlled to emit a laser beam and to fire a rubber band at the target.
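A typical implementation of this color-tracking step is an HSV threshold plus a contour centroid in OpenCV (a sketch, not necessarily the project's exact pipeline); the bounds below are assumed values for "blue" and would be tuned to the lighting:

    import cv2

    def find_blue_target(frame):
        """Return the (x, y) centroid of the largest blue region, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (100, 120, 70), (130, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

The offset of this centroid from the frame center is the error signal that drives the rotation and tilt servos.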
SketchBot
Our main objective for this project was to create a robot that could draw pictures accurately, using a camera to track its direction and position. We also wanted the robot to be fairly small so that it would fit comfortably on any size of paper, as well as low cost and as simple as possible. Additionally, we wanted to provide users with an interface that displays the current status of the robot and offers a canvas on which the user can draw a picture for the robot to reproduce on paper.
RPi Integrated Smart Home Control
Smart home devices and the Internet of Things are proliferating at an astounding rate, yet controlling a smart home network remains costly, difficult, or both. Dedicated controller hubs can cost well over $100, while open-source software requires experience (or a lot of documentation reading and forum diving). On top of that, controllers often need to be accessed over your home network, requiring network and router setup outside the comfort zone of most users. That's when we thought: why not create a simple smart home controller using the RPi, with an integrated GUI? Enter our project, the RPi Integrated Smart Home Control System. It eliminates much of the hassle of setting up the network yourself and the high cost of dedicated controller hardware, while providing a direct interface to the network at the same time.
Humanoid Robot
We built an entertaining humanoid robot based on the Raspberry Pi. Our purpose is to entertain people with its cute appearance and varied behaviors. We designed four emotions for the robot, namely happy, sad, angry, and bored. The emotions appear at random and are linked by a wander mode, and the robot behaves differently for each emotion.
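A minimal sketch of the behavior loop implied above, with stubs in place of the real motion and display routines:

    import random
    import time

    EMOTIONS = ["happy", "sad", "angry", "bored"]

    def wander(duration):
        print(f"wandering for {duration:.0f}s")  # stub: roam idly between acts
        time.sleep(duration)

    def perform(emotion):
        print(f"performing {emotion} behavior")  # stub: emotion-specific routine

    for _ in range(3):
        wander(random.uniform(2, 5))      # wander mode links the emotions...
        perform(random.choice(EMOTIONS))  # ...which appear at random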
Location Tracking Map
We set out to build an interface that shows real-time location data for a number of users on a visual display. The system needs a cloud component and app to allow remote communication between users in various locations, and a local endpoint to download and display this data. The system is built using IoT tools and services such as Node-RED and MQTT.
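A sketch of the local endpoint's subscription, using paho-mqtt 1.x-style calls (the broker address, topic layout, and payload format are assumptions):

    import json
    import paho.mqtt.client as mqtt

    BROKER, TOPIC = "broker.example.com", "trackers/+/location"

    def on_message(client, userdata, msg):
        fix = json.loads(msg.payload)  # assumed JSON payload with lat/lon
        print(f"{msg.topic}: lat={fix['lat']} lon={fix['lon']}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe(TOPIC)
    client.loop_forever()  # redraw the map marker on every message

Each user's device publishes its GPS fix to its own topic, and the wildcard subscription collects them all for display.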
Autonomous Nerf with Stereo Vision
Our goals were to design a robotic device capable of moving to desired orientations using inverse kinematics; to create a system with two cameras capable of obtaining depth data (stereo vision); to achieve approximate yaw and roll setpoints without feedback control; to make the project a safe and easy DIY build for users to replicate; and to hit targets found by the stereo vision.
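The stereo-vision half rests on the standard depth-from-disparity relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a matched feature; the calibration numbers below are assumptions:

    FOCAL_PX = 700.0    # assumed focal length from camera calibration
    BASELINE_M = 0.06   # assumed 6 cm between the two camera centers

    def depth_from_disparity(d_px):
        """Distance (m) to a feature seen d_px pixels apart in the two views."""
        if d_px <= 0:
            return float("inf")  # zero disparity: effectively infinite distance
        return FOCAL_PX * BASELINE_M / d_px

    print(depth_from_disparity(35.0))  # 35 px disparity -> 1.2 m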
TeleBot: Raspberry Pi Telepresence Robot
This highly modified RC car can be controlled over a wireless network and streams back live video to its pilot. An onboard TFT display, microphone, and speakers allow for virtual communication and presence. The 555-timer siren and warning LEDs can be switched on and off to catch people’s attention or perform traffic stops on pets and small humans.
Nerf Targeting System
The objective of this project was to build a Nerf aiming system that can autonomously track, aim, and fire at selected targets. The system should use computer vision to detect faces in a frame and perform facial recognition to determine if they belong to targets. If more than one target is present, then the system should choose the one with higher priority. The system should then use servos to adjust the Nerf gun’s position in order to track the target. Once the target’s face is centered, the Nerf gun should fire a Nerf dart. A touchscreen-based user interface will allow users to train new targets and civilians, set priorities, and start the system.
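Face detection of this kind is commonly done with OpenCV's bundled Haar cascade; recognizing whose face it is would be a separate, trained step, and the largest-face rule below is only a stand-in for the project's priority logic:

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def pick_target(frame):
        """Return the bounding box (x, y, w, h) of the largest face, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        return max(faces, key=lambda f: f[2] * f[3])  # nearest face first

The horizontal and vertical offsets of the chosen box from the frame center become the error terms for the two servos.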
Fixed Area Traversing Robot
We created a robotic frame and added servo motors and optical encoders to it, then wrote control algorithms that precisely control the velocity of each wheel so that we can determine the robot's location and orientation exactly. The robot moves in incremental steps and holds its position for 5 seconds at each step. Additionally, we implemented the ability to turn the robot once it clears a vertical column of the grid so that it can start working on the next column. This process repeats until the robot has traversed the entire surface.
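The traversal pattern itself is a boustrophedon (lawnmower) sequence; this sketch generates the command stream, with each command standing in for an encoder-controlled motion routine:

    def traverse(columns, steps_per_column):
        """Yield commands covering the grid one vertical column at a time."""
        for col in range(columns):
            for _ in range(steps_per_column):
                yield "step_forward"  # one incremental step...
                yield "hold_5s"       # ...then hold position for 5 seconds
            if col < columns - 1:
                # Turn, shift over one column, and face back the other way.
                turn = "turn_right" if col % 2 == 0 else "turn_left"
                yield turn
                yield "step_forward"
                yield turn

    for cmd in traverse(columns=3, steps_per_column=4):
        print(cmd)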
Fetchbot: Autonomous Retriever Robot
The industry and interest in autonomous robotics are growing rapidly. The ability to create a robot or device that autonomously performs a task reduces the time people spend on that task, freeing them for other activities. This project aims to design an autonomous robot that retrieves a green tennis ball and delivers it to a designated location. The concept could be applied on tennis courts, where the robot could retrieve tennis balls for players, saving a tremendous amount of time and effort. With simple modifications, different target objects can be retrieved, as long as the object is light and small enough for the robot to hold.
Dimensioning Robot
Our team built and programmed a robot to measure and calculate the dimensions of a 3-D object. The robot is first initialized through an interface that gathers information about the object, such as whether it has a circular or rectangular cross-section. Then the robot moves around the object, maintaining a constant distance from it, until it returns to the starting position. Post-processing produces an x-y plot of the outline and saves calculations, such as the cross-sectional area, to a folder.

This project had many successful components. The frame and mechanical components, such as the encoders, functioned correctly. The robot was able to move around an object while maintaining a constant distance, though the motion was not entirely smooth. Furthermore, the robot did not always stop at the correct position after circling the object, so a manual end button was added. Post-processing was successful, producing accurate calculations and plots despite the challenges encountered in this step.
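The write-up does not spell out the area computation; one standard way to turn the traced (x, y) outline into a cross-sectional area is the shoelace formula, sketched here:

    def cross_section_area(points):
        """Shoelace formula: area enclosed by an ordered (x, y) outline."""
        area = 0.0
        for i, (x0, y0) in enumerate(points):
            x1, y1 = points[(i + 1) % len(points)]
            area += x0 * y1 - x1 * y0
        return abs(area) / 2.0

    # Sanity check with a 2 m x 1 m rectangular cross-section:
    print(cross_section_area([(0, 0), (2, 0), (2, 1), (0, 1)]))  # -> 2.0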
RFID Tag Seeking Robot
We built an autonomous robot that searches for and then navigates to red, RFID-tagged objects. Our robot is composed of a Raspberry Pi, a camera, an RFID scanner, and a speaker mounted on a frame with two servos driving the wheels. The robot uses computer vision to scan for the color red. When red is identified, the robot navigates toward it by keeping the center of the detected color in the viewing area. While navigating, the robot continuously scans for an RFID signal. When the robot is close enough to the object to detect the RFID tag, it compares the Unique Identification Number (UID) of the detected tag to the target UID. If the UIDs match, the robot stops and plays a celebratory song to alert the user of the tag's location. If the UIDs do not match, the robot continues to search until it finds the target tag.
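The UID check is a small loop around the reader; this sketch assumes the common SimpleMFRC522 Python library and a made-up target UID, which may differ from the project's actual scanner and code:

    from mfrc522 import SimpleMFRC522  # assumed reader library

    TARGET_UID = 123456789             # hypothetical UID of the tagged object
    reader = SimpleMFRC522()

    def check_tag():
        """Poll once; return True when the target tag is in range."""
        uid = reader.read_id()         # blocks until a tag is close enough
        if uid == TARGET_UID:
            print("Target found!")     # the robot stops and plays its song
            return True
        print(f"Wrong tag ({uid}); keep searching.")
        return False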
Coin Sorting Machine
For this project, we were interested in using computer vision and image processing to design a machine that automatically sorts common US coins. The design was built on the Raspberry Pi platform with OpenCV. The device uses several image processing techniques to classify the coins. It supports user-adjustable speed and, rather than simply sorting, reports general information such as coin types, counts, and the total value for each round of operation. Using this device, a user can not only have a pile of coins sorted but also obtain the total value of the sorted change.
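Coin classification by size is often built on OpenCV's Hough circle transform; the detector parameters and radius thresholds below are placeholder values that would come from calibrating against a known camera height:

    import cv2
    import numpy as np

    def find_coins(gray):
        """Detect coin outlines; radius in pixels suggests the denomination."""
        blurred = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=30, param1=100, param2=40,
                                   minRadius=10, maxRadius=60)
        if circles is None:
            return []
        # Assumed radius thresholds; real ones require calibration.
        return [("quarter" if r > 45 else "penny" if r > 35 else "dime", r)
                for x, y, r in np.round(circles[0]).astype(int)]

Summing a per-type value table over the classified list then yields the total value reported at the end of a run.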
Sustainable Smartroom
To implement this project we used a Raspberry Pi along with several sensors, a Pi-compatible microphone, and Python scripts. We used two pressure sensors as a detection mechanism to keep track of people entering and leaving the room; the logic to count the number of people in the room was implemented with callback routines in Python and data structures that record the order and times at which each sensor was triggered. A photoresistor served as a light sensor to detect sunlight, and an API call fetched the current weather; together these determine whether there is enough sunlight to illuminate the room. To open and close the blinds, we connected a Parallax servo to an RPi GPIO pin and built model blinds that open and close as the servo rotates. Lastly, to provide voice control, a USB microphone connected to the Raspberry Pi continuously records audio clips, each clip is sent to a speech-to-text service via an API call, and the returned text is parsed to determine whether a command was given to change the state of the lights or the blinds. The current state of the room (number of people, blind state, and light state) is also displayed on the piTFT screen using a Pygame animation.
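A sketch of the occupancy-counting logic described above, assuming two pressure sensors wired as GPIO inputs (the pin numbers and pairing window are made-up values):

    import time
    import RPi.GPIO as GPIO

    OUTER, INNER = 5, 6  # assumed BCM pins for the doorway pressure sensors
    events, occupancy = [], 0

    GPIO.setmode(GPIO.BCM)
    for pin in (OUTER, INNER):
        GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

    def on_press(pin):
        """Record which sensor fired and when; the order gives direction."""
        global occupancy
        events.append((pin, time.monotonic()))
        if len(events) >= 2 and events[-1][1] - events[-2][1] < 1.0:
            # Outer-then-inner means entering; inner-then-outer means leaving.
            step = 1 if (events[-2][0], events[-1][0]) == (OUTER, INNER) else -1
            occupancy = max(occupancy + step, 0)
            events.clear()

    for pin in (OUTER, INNER):
        GPIO.add_event_detect(pin, GPIO.RISING, callback=on_press, bouncetime=200)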
Oyster Harvester
The oyster harvesting process is laborious and tedious, requiring immense patience and substantial man-hours. Our project attempts to reduce those man-hours by automating the navigation and unclipping of oyster cages with a robotic boat controlled by a Raspberry Pi. The idea for this project came from Widow's Hole Oysters, based in Greenport, New York.
Moving Object Tracker
Our project is a Raspberry Pi based tracker that follows a moving object autonomously. The system has two main parts: (1) the tracking system and (2) the controlled vehicle. The RPi camera captures images of the moving object and sends each frame to the RPi. For each frame, the RPi runs a tracking algorithm that detects the object and returns its position and size by extracting the object's contour. The control algorithm then sets the direction and speed of the vehicle based on the position and size of the object.
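The control step can be as simple as two proportional terms: the centroid's horizontal offset steers the vehicle, and the apparent size sets the speed (slowing as the object fills more of the frame). The gains and target area here are assumed tuning values, not the project's:

    def drive_commands(cx, area, frame_w=320, target_area=4000):
        """Map tracked object position/size to (turn, speed) in [-1, 1]."""
        turn = 0.005 * (cx - frame_w / 2)      # steer toward the centroid
        speed = 0.0002 * (target_area - area)  # approach until close enough
        clamp = lambda v: max(-1.0, min(1.0, v))
        return clamp(turn), clamp(speed)

    print(drive_commands(cx=240, area=1500))  # object right of center, far away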
Agricultural Drone
Quick and reliable assessment of plant performance is a crucial tool for agriculturists and plant science researchers. In most cases it requires manual inspection of the crop, physically checking each plant, which makes the process long, cumbersome, and prone to human error; this is especially difficult when tall crops like corn are grown in large fields. Streamlining this process would let farmers stay aware of crop quality and implement changes more promptly, increasing economic yield and reducing wasted resources. Researchers have found that a struggling plant reflects wavelengths across the spectrum differently from its thriving counterparts, so monitoring these differences can reveal plant distress signals. By coupling a spectrometer with a drone, with a Raspberry Pi handling communication, we present a method of aerial data collection for faster, more accurate, large-scale detection of plant distress and enhanced plant monitoring.
Pi-SHiFT: A Handheld Gaming Console
The objective of this project was to make a handheld gaming console that can switch controllers. The console plays retro games using emulators as well as games designed by us. Many emulator programs are currently available for the Raspberry Pi; however, they almost always require an external monitor connected through HDMI, so most RPi emulators end up as stationary systems hooked up to a TV. In this project, we built a system capable of running an emulator while being entirely portable.
Py Display
PyDisplay is a Python library designed to easily organize data and graphs and display them on the piTFT. The goal is to create a Pygame-based display library that allows users with a bit of programming experience to build custom display pages easily. This gives them more control over what is displayed and lets them spend their time on the actual application of the embedded device rather than on getting content to render correctly.
Mobile Item Sorter
The Mobile Item Sorter is a robot capable of identifying, extracting, and sorting items by color. The mobile sorter uses a camera to detect and guide the robot to colored boxes on the floor and uses a paper scoop to lift them up to a sorting region in the robot’s chassis. The robot uses a Raspberry Pi located on top of the frame as its computer, running code written in Python to navigate and actuate the sorting and lifting motors.