The purpose of the Robotic Candy Sorter project was to implement a 3-degree-of-freedom robotic arm and vision system that can detect and sort candy by color. This was accomplished by building an integrated system that splits the work between high-level (Raspberry Pi) and low-level (PIC32) processing. The Raspberry Pi (RPi) handled the image processing and sorting algorithms, while the PIC32 microcontroller (uC) controlled the motors by solving the inverse kinematics (IK).
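For an arm of this type, the IK has a closed-form solution: rotate the base toward the target, then solve the two-link triangle in the vertical plane. The sketch below assumes a base-rotate plus two-link geometry with hypothetical link lengths; it is illustrative, not the project's actual kinematics.

```python
import math

def ik_3dof(x, y, z, l1, l2):
    """Joint angles (radians) for a base-rotate + 2-link planar arm.
    l1, l2 are link lengths; assumes the target (x, y, z) is reachable."""
    base = math.atan2(y, x)              # base rotation toward the target
    r = math.hypot(x, y)                 # horizontal reach in the arm's plane
    d2 = r * r + z * z                   # squared distance to the target
    # Law of cosines for the elbow; clamp to guard against rounding.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder = angle to target minus the interior angle of the triangle.
    shoulder = math.atan2(z, r) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return base, shoulder, elbow
```

A fully stretched arm (target at distance l1 + l2) yields a zero elbow angle, which is a quick sanity check on the geometry.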
We built a smart home security system and server on a Raspberry Pi that can live-monitor security footage, steer the surveillance camera’s viewing angle, detect intrusions, and control other parameters in the house such as lighting and air conditioning. By logging in to the web interface served by the Raspberry Pi, we can easily turn a light at home on or off and watch the live feed from the camera. We can also change the azimuth and elevation of the camera’s base mount to look in any desired direction. We also fitted our design into a miniature interior-design model so that we could fully demonstrate its robustness and functionality.
The objective of this project is to construct a digital FM transceiver that can receive radio stations and transmit audio from a single Raspberry Pi device. The transceiver also provides a touch-screen GUI to enhance the user experience. Additionally, this project aims to connect to the lecture material of this class and capture the spirit of an embedded operating system. Specifically, it leverages the support of the Linux operating system, which distinguishes this project from traditional bare-metal microcontroller projects in a significant way. For example, we used FIFOs for inter-process communication, I2C file descriptors that let us treat external devices as files and manipulate them at a higher level, and threads to keep the system responsive while executing blocking tasks.
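The FIFO-based inter-process communication can be sketched as follows; the FIFO name and the command string are hypothetical stand-ins for whatever the project actually passed between its processes, and the two ends run as threads here only so the sketch is self-contained.

```python
import os
import tempfile
import threading

# Hypothetical sketch: one thread plays the GUI process writing a command,
# the main flow plays the radio process reading it through a named FIFO.
fifo_path = os.path.join(tempfile.mkdtemp(), "radio_cmd")
os.mkfifo(fifo_path)

def gui_process():
    # Opening a FIFO for writing blocks until a reader opens the other end.
    with open(fifo_path, "w") as f:
        f.write("tune 88.5\n")

threading.Thread(target=gui_process, daemon=True).start()
with open(fifo_path) as f:          # blocks until the writer connects
    command = f.readline().strip()  # command is now "tune 88.5"
os.remove(fifo_path)
```

Because both `open` calls rendezvous, neither side needs to poll; the kernel handles the synchronization, which is exactly the convenience an OS buys over bare metal.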
In this project, a Raspberry Pi Theremin was successfully built. After some troubleshooting with using the Raspberry Pi Camera to analyze real-time video footage in Python, a hand-detection Python script was implemented to detect the area of a hand placed over the camera. This was manually tested for accuracy before integration. After realizing that the Raspberry Pi’s built-in audio output could be accessed from a C program, the next challenge was using a socket to speed up the communication between Python and C so the hand-detection results could be processed in real time. After trying many different types of sockets, we found ZMQ, which allowed for low-latency communication. Once this was completed, implementing the various waveforms and assigning effects to the potentiometers connected to the Pi was not difficult. The waveforms were tested as they were made to verify that they produced distinct sounds. The last difficult step was implementing the delay effect, which required a deeper understanding of the C program used to output audio. The final test of the whole device was simply playing it, as the sounds created indicated whether it was working correctly.
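The Python-to-C handoff follows a simple request/reply pattern. The sketch below uses a stdlib socketpair as a stand-in for ZMQ so it is self-contained, and the area-to-frequency mapping is invented purely for illustration; it is not the project's actual synthesis rule.

```python
import socket
import threading

# Stand-in for the ZMQ link: the hand-detection script on one end, the
# audio engine (in the real project, a C program) on the other.
py_side, c_side = socket.socketpair()

def audio_engine():
    # Read a hand-area value and reply with a pitch derived from it.
    area = int(c_side.recv(16).decode())
    freq = 220 + area                # toy mapping: bigger hand, higher pitch
    c_side.send(str(freq).encode())

threading.Thread(target=audio_engine, daemon=True).start()
py_side.send(b"440")                 # hand area from the detection script
freq = int(py_side.recv(16).decode())
```

The key property, shared with ZMQ, is that each exchange is a short message rather than a stream parse, which keeps per-frame latency low enough for real-time control.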
In this project, we sought to write a software emulator for a retro game console and run it on the Raspberry Pi. We were motivated in part by the discovery of a discarded cathode ray television, and aimed to combine hardware and software systems to convert it into a fully embedded retro gaming console. Our project consists of four distinct components: a software implementation of the Chip 8 and SuperChip 8 virtual machines, a custom arcade game written in Chip 8 machine code, a module to interface with input hardware on the Pi, and a pygame user interface to tie everything together. We began by designing our software classes and a simple pygame interface, then moved on to implementing the emulator locally, and finally ported our code to the Raspberry Pi and added support for joystick and keypad inputs. At the same time, we studied Chip 8 machine code in detail, and modified an existing game to create one of our own.
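The core of such an emulator is a fetch/decode/execute loop over two-byte opcodes. This minimal sketch covers just two of the roughly 35 Chip 8 instructions (6XNN and 7XNN) to show the shape of the loop; the real emulator implements the full set plus display, timers, and input.

```python
class Chip8:
    """Minimal fetch/decode sketch for two Chip 8 opcodes:
    6XNN (set VX to NN) and 7XNN (add NN to VX, no carry flag)."""

    def __init__(self):
        self.v = [0] * 16            # 16 eight-bit registers V0..VF
        self.pc = 0x200              # Chip 8 programs load at 0x200
        self.memory = bytearray(4096)

    def step(self):
        # Each instruction is two big-endian bytes.
        op = (self.memory[self.pc] << 8) | self.memory[self.pc + 1]
        self.pc += 2
        x, nn = (op >> 8) & 0xF, op & 0xFF
        if op & 0xF000 == 0x6000:    # 6XNN: VX = NN
            self.v[x] = nn
        elif op & 0xF000 == 0x7000:  # 7XNN: VX = (VX + NN) mod 256
            self.v[x] = (self.v[x] + nn) & 0xFF
```

Decoding by masking the top nibble is the standard approach, since the high four bits select the instruction family for nearly every Chip 8 opcode.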
In this project, we built a robot dog based on the Raspberry Pi. First, the dog can find your face and follow you: it detects the user’s face and uses the position and size of the detection to control the motors. Second, the dog can recognize a “go” command, which is a person speaking the word “go,” and a “stop” command, which is the sound of a whistle, and make the corresponding response. The two are distinguished by zero-crossing frequency detection of the sound. Third, the dog makes various sounds when it cannot find a face, or in reply to the “go” and “stop” commands.
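Zero-crossing detection works here because a whistle is close to a pure high-pitched tone, so it crosses zero far more often per second than a spoken word. A minimal sketch of the estimator (the sample rate and sign-change test are illustrative assumptions):

```python
def zero_crossing_freq(samples, sample_rate):
    """Estimate the dominant frequency of a mono signal by counting
    sign changes; two zero crossings correspond to one full cycle."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return crossings * sample_rate / (2 * len(samples))
```

Deciding "go" versus whistle then reduces to comparing the estimate against a threshold somewhere between typical speech pitch and whistle pitch.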
The APD dispenses different pills autonomously according to user-defined dose and time instructions. For demonstration purposes, two different pill types can currently be stored and dispensed; however, the number of pill types can be expanded as desired. The product consists of two main parts. First, the user enters the desired time and dose for each pill, following the guidelines on the touchscreen, and can then view and confirm the resulting pill-taking schedule. Then, the pills are dispensed autonomously at the scheduled times.
The goal of this project is to develop a wireless, gesture-controlled robot using two Raspberry Pis (RPis) and an accelerometer. The project consists of four parts. First, for the controller, we attached an accelerometer to one RPi to detect gestures including basic orientation, tilt, single tap, and double tap. Second, we attached an LCD display as a user interface to adjust robot speed and view motion history. Third, we developed wireless communication between the two RPis over Wi-Fi using sockets. Fourth, we controlled the robot’s two motors using the other RPi’s hardware PWM. The user can then tilt or tap the controller to configure and change the motion of the robot remotely.
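The controller-to-robot socket link can be sketched with Python's stdlib socket module. In the project this runs over Wi-Fi between two Pis; here both ends run on localhost so the pattern is self-contained, and the gesture/acknowledgement message format is invented for illustration.

```python
import socket
import threading

HOST = "127.0.0.1"

# Robot side: listen for a controller connection (port 0 = OS-assigned).
server = socket.socket()
server.bind((HOST, 0))
server.listen(1)
port = server.getsockname()[1]

def robot():
    conn, _ = server.accept()
    gesture = conn.recv(32).decode()      # e.g. "tilt:forward"
    # A real robot would map the gesture to PWM duty cycles here.
    conn.send(b"ack:" + gesture.encode())
    conn.close()

threading.Thread(target=robot, daemon=True).start()

# Controller side: connect, send a gesture, await the acknowledgement.
controller = socket.create_connection((HOST, port))
controller.send(b"tilt:forward")
reply = controller.recv(64).decode()
```

On the real system the controller would use the robot Pi's LAN address instead of the loopback interface; the code is otherwise identical.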
The benefit of peer-to-peer downloading is that it scales better than the traditional client-server architecture. Peers can request parts of a file from each other rather than everyone connecting to the same server, which could get overwhelmed; with peer-to-peer, the more active peers there are, the faster the download. For this project we built an HTTP server to act as the tracker. We also developed a Python script that creates a metainfo (.pt) file so users can connect to the tracker and find peers sharing the desired file. We also wrote the code for the peers; the same peer code is run by the original seeder (the user who initially has the file) and all other peers. Command-line options specify the port, the metainfo (.pt) file, a peer name, whether the peer already has the full file, and the name of the destination/source file. Peers can be added dynamically, and each peer connects to the others and requests pieces from them.
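A metainfo generator of this kind splits the file into fixed-size pieces and records a hash per piece so peers can verify what they download. The field names, JSON encoding, and piece length below are assumptions for illustration, not the project's actual .pt format.

```python
import hashlib
import json

def make_metainfo(data, piece_len, tracker_url, name):
    """Build a metainfo document for `data`: the tracker to contact,
    the file's length, and one SHA-1 digest per fixed-size piece."""
    pieces = [hashlib.sha1(data[i:i + piece_len]).hexdigest()
              for i in range(0, len(data), piece_len)]
    return json.dumps({"tracker": tracker_url, "name": name,
                       "piece_len": piece_len, "length": len(data),
                       "pieces": pieces})
```

Per-piece hashes are what make out-of-order downloading safe: a peer can accept piece 7 from a stranger, check its digest against the metainfo, and discard it if it does not match.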
Keyboards are ubiquitous in today’s society, so our project was not directly addressing a problem in society. Rather, we were curious about how to make a wireless keyboard and about the actual circuitry behind the keyboard itself. We are also very interested in mechanical keyboards, and planned to implement functional layers to allow for customizability. The Raspberry Pi handled wireless communication with any device (such as a computer or phone) as well as debouncing of keys and keystrokes. Additionally, we made our keyboard “programmable,” so that a user could SSH into the device and change any key mappings.
The goal of this project is to develop a low-cost device that can be integrated with the Raspberry Pi to reduce power consumption by allowing the Pi to turn on and off based on a timed schedule or external events. This would allow the Raspberry Pi to be used in a wider variety of applications. Specifically, outdoor environmental data collection projects were the motivation for this system. With improved power management, such projects could leave a Raspberry Pi collecting data for long periods without running out of power.
The primary objective of the project is to create a robot that can be maneuvered remotely, in a secure manner, from any device connected to the internet. The bot employs a Raspberry Pi 2 Model B at its core, provides a live video feed using the camera module, and hosts a Flask server containing the GUI used to control it. The Flask application also performs user authentication so that only registered users are granted access.
As robotics grows in popularity, more people have begun to rely on autonomous systems that do everything for them. We created a system that can search for a user-specified target based on RGB patterns and move toward it until it is 10 cm away. Our robot can also avoid any obstacles it encounters on its way to the target. The system is controlled by a Raspberry Pi that takes signals from a Pi camera and four ultrasonic sensors.
We created a Pet Robot that has many of the qualities of your pet at home, except that this one has a microcontroller, a microphone, two servos, and a few wires attached to it. Our Pet Robot reacts to a set of user commands and obediently carries out the tasks it is given. Tell it to “move forward” and it will do so. Ask it to “move right” and it will do so, using its front distance sensor to ensure that it does not run into a wall, in case you were trying to play tricks on it. You may question the robot’s brain capacity, but simply ask it to find a blue or green object in the room. It will find both, and it will even navigate itself toward both of them to prove it to you. Our Pet Robot speaks: you can talk to it and expect a voice response back. We hope that this webpage helps you to understand the design behind our Pet Robot and the various resources we leveraged to enable its multiple functionalities.
The purpose of this project is to build a multifunctional alarm clock with the Raspberry Pi. The smart alarm clock provides various functions, including a time display, weather reports, a music player, and Google Calendar event reminders. Moreover, these function modules can be controlled either through gestures or remotely via an Android app on the user’s phone.
We use the Pi Camera to read a Sudoku puzzle, then display the incomplete puzzle on the PiTFT screen. Our solver, programmed in Python, then solves the puzzle and displays the completed grid back on the TFT screen. We tried different approaches to the Sudoku-solving algorithm, such as brute force and backtracking, and ended up implementing an improved version of the backtracking algorithm.
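The plain backtracking approach (before any of the project's improvements) can be sketched as: find an empty cell, try each legal digit, recurse, and undo the choice on failure.

```python
def valid(board, r, c, d):
    """True if digit d may be placed at row r, column c."""
    if d in board[r] or any(board[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)   # top-left of the 3x3 box
    return all(board[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(board):
    """Backtracking solver; board is a 9x9 list of lists, 0 = empty.
    Fills the board in place and returns True if a solution exists."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if valid(board, r, c, d):
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0    # backtrack
                return False               # no digit fits this cell
    return True                            # no empty cells left: solved
```

Typical improvements over this baseline include choosing the most constrained cell first rather than scanning in row order, which prunes the search tree dramatically.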
Using the Raspberry Pi as the controller, we designed and built an LED music cube that shows a flowing frequency spectrum of the music being played. For the hardware part of this project, we built an 8x8x8 LED cube using 512 LEDs, and designed and connected circuits to control all the LEDs with 14 GPIO pins on the Raspberry Pi. For the software part, we extracted the waveform data from the WAV file, applied an FFT, and updated the cube’s display with the current frequency spectrum every 0.11 seconds.
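The FFT-to-display step can be sketched as below. The 8x8x8 layout and 0.11 s refresh come from the project; the recursive FFT and the way magnitudes are collapsed into eight column heights are illustrative assumptions.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

def spectrum_columns(samples, bands=8, levels=8):
    """Collapse FFT magnitudes into `bands` column heights in 0..levels,
    one height per group of LED columns (binning scheme is hypothetical)."""
    mags = [abs(c) for c in fft(samples)[: len(samples) // 2]]
    size = len(mags) // bands
    peak = max(mags) or 1.0
    return [min(levels,
                round(levels * max(mags[b * size:(b + 1) * size]) / peak))
            for b in range(bands)]
```

Each frame, a 0.11 s window of WAV samples would be fed through `spectrum_columns` and the resulting eight heights lit up as columns of LEDs.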
Our objective is to build an inexpensive, easy-to-install home assistant that controls the user’s home and also interacts with him or her. It is also capable of fetching weather data, reading out the news, and playing and controlling music. For this we used Jasper, an open-source integrated platform for voice control developed by Charles Marsh and Shubhro Saha. We used the online speech-to-text (STT) engine wit.ai, and for text-to-speech (TTS) we used Mary-TTS. We experimented with various offline STT and TTS modules but found that these two gave the best results.
The goal of the project is to design a standalone module that can be attached to a pen, a robot, or even a person to track and record the object’s movement. Users can attach this module to their other devices; when the object moves, our device reconstructs its trajectory and displays the movement, projected onto the horizontal plane of the world frame, on the screen. A classic application is to use it on a pen, so that the user’s writing is recorded and saved as an image. A special feature of this project is that the tracking does not rely on any specific surface: it works on any plane, even in the air, and the placement or tilt of the device does not affect the final result.
In this project, we designed and implemented a gesture-controlled music box on a Raspberry Pi kit. The music box uses photodetectors to sense hand gestures, which drive functions such as music playback, pause, start, and volume control. With our custom gesture-recognition algorithms, the music box supports natural human interaction.
We built an intelligent car equipped with a camera. It communicates with the control center (an iOS app) over Wi-Fi, sending the images it captures, while the control center sends control signals back to the car. The car can also autonomously navigate to a black object within its field of view.