We built a smart door system with a web server on a Raspberry Pi that grants access to the house through face, speaker, and fingerprint verification. It also enables the owner to answer the door remotely when someone visits. By logging into the web interface, the owner can check the visitor voice-mail box, get notified when someone knocks on the door, communicate with visitors by video streaming, and remotely control the door via network sockets. The design is fitted into a miniature clip-board model to demonstrate the full functionality of the system.
The goal of this project was to construct a standalone table-top arcade station using the Raspberry Pi. Our inspiration came from our mutual fondness for classic video games and our interest in using the versatile capabilities of the Linux operating system on the Raspberry Pi. The arcade station features a pre-built kernel that runs EmulationStation for playing classic video games, as well as our own custom-built Python games, all housed within a custom-made laser-cut wooden cabinet.
For this project we created an audio server that queues and plays songs based on each user's popularity. Songs are played on a speaker from a queue of submitted YouTube links, and each song can be upvoted or downvoted by other users connected to the device, which raises or lowers the submitter's popularity. Users register, submit links, and vote on songs via a web server hosted on a Raspberry Pi.
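The popularity-ordered queue described above can be sketched as a priority queue keyed on the submitter's score. This is a minimal illustration, not the project's actual code; the class and method names are invented, and votes cast after a song is queued do not reorder songs already in the heap.

```python
import heapq
import itertools


class SongQueue:
    """Sketch: songs play in order of the submitter's popularity."""

    def __init__(self):
        self.popularity = {}                # user -> vote score
        self._heap = []
        self._counter = itertools.count()   # FIFO tie-break for equal scores

    def register(self, user):
        self.popularity.setdefault(user, 0)

    def vote(self, user, up=True):
        # upvotes/downvotes on a user's songs adjust that user's popularity
        self.popularity[user] += 1 if up else -1

    def submit(self, user, url):
        # heapq is a min-heap, so negate the score: popular users play first
        heapq.heappush(self._heap, (-self.popularity[user], next(self._counter), url))

    def next_song(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

A submission from a well-voted user jumps ahead of one from a user at zero popularity, while ties fall back to submission order.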
Our project is a remote-controlled object-detection robot with three main functions. Users can remotely drive the robot from a terminal such as a laptop over the LAN. Users can receive real-time video from the camera mounted at the head of the robot. Lastly, once the object-detection function is started, the robot can search its surroundings and find the target.
The goal of this project is to build a voice-controlled coffee machine, "Aron," which makes coffee based on real-time voice input. Our system is a 3D-printed module that fits on top of a Nespresso Essenza Mini coffee machine. The Raspberry Pi listens for key words and phrases and actuates a servo accordingly to press the buttons on the coffee machine.
Nowadays, people live in a rapidly developing society with tight schedules, especially university students, who take many classes with numerous assignments, lab projects, and exams. A tiny smart calendar with a focused, genuinely useful function is therefore needed. Our smart calendar, which focuses solely on recording and displaying a student's schedule, reminds students of upcoming events and helps them manage their time. It is small enough to sit on a bedside table like a humidity monitor, and cheap enough to be affordable for any student. Its simple operation and voice-control function, which make interaction more natural, make the smart calendar all the more attractive.
The Infinity Mirror Music Player is an eye-catching display that combines the persistence-of-vision optical illusion with the infinity-mirror effect. As a clock, it is unique in that the system uses persistence of vision to display a real-time clock whose digits appear to float in the air in three dimensions. As a mirror, it produces the illusion of hundreds of glowing clocks receding into infinite depth when viewed from above. As a music player, it decodes music into frequency-domain magnitudes and displays them on the rotating screen.
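Decoding audio into frequency-domain magnitudes for a display like this is typically done with an FFT over short frames. The sketch below is a minimal NumPy illustration, not the project's code; the function name and the choice of eight display bands are assumptions.

```python
import numpy as np


def band_magnitudes(frame, bands=8):
    """Split one audio frame into coarse frequency bands for display."""
    windowed = frame * np.hanning(len(frame))   # Hann window reduces spectral leakage
    mags = np.abs(np.fft.rfft(windowed))        # magnitude spectrum of the frame
    # average groups of FFT bins into equal-width display bands
    return [chunk.mean() for chunk in np.array_split(mags, bands)]
```

Each band's mean magnitude would then drive one column of the rotating display, refreshed frame by frame as the song plays.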
Our goal was to create a rhythm-matching game based on the Raspberry Pi and piTFT. For a song played on the Raspberry Pi, the Pi translates its rhythm into successive colored blocks of varying sizes that appear at the upper edge of the piTFT, slide down their lanes, and reach the bottom band of the screen in time with the synchronized beat of the music. The player taps electrical keys connected to the Raspberry Pi in time with the hits. When a tap is detected at the right moment, the bottom band lets the colored block pass through; otherwise the band obstructs the block, consuming any part that falls into it until the whole block is 'eaten'.
The magic selfie mirror is a mirror that sends greetings, displays the date, humidity, and temperature, and takes selfies while you use it. Some people like taking selfies after putting on makeup; the magic selfie mirror combines the two activities to make life easier and more fun.
We built an autonomous control system for a drone that tracks and follows an object. Our system is capable of switching control of the drone between the Raspberry Pi and a handheld radio controller, as well as switching between hover and autonomous flight, in which the drone follows a red object below it. The Raspberry Pi 3 is used as the decision-making unit: it reads data from multiple sensors and routes control outputs from a PWM-generating chip to the onboard flight controller using a relay. We successfully integrated the necessary hardware to control the quadcopter and track a target. Additionally, we met the goals we set for ourselves within the scope of the embedded system.
Pi Resistor Sorter is a machine that detects and sorts the resistors fed into it. The control unit of Pi Resistor Sorter is a Raspberry Pi, and resistor values are detected with a camera; the OpenCV-Python library is used to analyze the video frames. We built the mechanical hardware via 3D printing. For the mechanical movement, one servo drives the conveyor-belt unit and another rotates the plate that sorts the resistors into bins. A simple web-page graphical user interface shows the status of the Pi Resistor Sorter.
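Once the camera has classified each color band, mapping the bands to a resistance follows the standard resistor color code. This is a small illustrative sketch for common 4-band resistors (two significant digits plus a multiplier); the project's actual decoding logic may differ.

```python
# standard resistor color code: color -> digit / power-of-ten multiplier
COLOR_CODE = {'black': 0, 'brown': 1, 'red': 2, 'orange': 3, 'yellow': 4,
              'green': 5, 'blue': 6, 'violet': 7, 'grey': 8, 'white': 9}


def resistor_ohms(bands):
    """Decode a 4-band resistor from its first three color bands.

    The fourth (tolerance) band is ignored here.
    """
    d1, d2, mult = bands[:3]
    return (COLOR_CODE[d1] * 10 + COLOR_CODE[d2]) * 10 ** COLOR_CODE[mult]
```

For example, brown-black-red decodes to 1 kΩ and yellow-violet-orange to 47 kΩ.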
While enrolled in ECE 5725, I found that I was often tied to the laboratory simply because I needed some basic functions of an oscilloscope. I therefore designed a cheap, portable, yet powerful oscilloscope that is useful for signal verification in embedded systems development. The oscilloscope reads analog voltages accurately for mixed signals below 20 kHz. It can change its display settings on user command to adapt to signals of different frequencies (from 0 up to 100 kHz) and amplitudes (from 0 up to 3.3 V).
A Raspberry Pi DJ loop was built as our 5725 final project. The DJ loop serves functions similar to the DJ loopers and launchpads on the market. Users can pre-install any sound resources on the Raspberry Pi and play with the sounds. Users can record sound clips of any length and replay them in a loop. While a loop is playing, users can still play other sounds without recording them, or choose to add them to the existing loop. If users are not satisfied with any recorded part, they can delete it without affecting the rest of the loop. To enhance the visual effect, a 32×16 RGB LED matrix is used as an audio spectrum display.
The objective of the ECE 5725 final project is to design a project that uses the tools learned throughout the semester and is based on the Raspberry Pi (RPi) platform. It must cost less than $100 to make, excluding the price of an RPi, PiTFT screen, charger, SD cards, and case. With these guidelines, we set out to design, manufacture, and develop software to control a Pixar-inspired lamp. Our goal was to have a user place an ArUco marker on any object (up to nine objects) and have the lamp shine as much light as possible on that object. The user can also change which object to track when more than one ArUco marker is present. Finally, we wanted to make the design as compact as possible so it would not take up workspace.
House security matters, and people always try to make life easier at the same time. That is why we came up with our project, the Face Recognition Door Lock System. We developed this system on a Raspberry Pi 3 to make the house accessible only when your face is recognized by recognition algorithms from the OpenCV library and, at the same time, you are admitted by the house owner, who can monitor the entrance remotely. This makes the system harder to deceive: since the owner can check each visitor on the remote console, fooling the camera with a photo will not work. We also added a passcode function for entry in case the face-recognition component fails.
We designed a security robot based on the Raspberry Pi that can navigate your home while avoiding obstacles and detect suspicious events (motion) when you leave the house. Once a suspicious event is detected, the robot sends the host an email that includes a photo as notification. The security robot consists of two main parts: motion detection and navigation. For motion detection, a Pi Camera and the OpenCV module are used to detect moving objects. For navigation, we used three ultrasound distance sensors on the front, right, and left sides of the robot to measure the distance to obstacles, and the Raspberry Pi (R-Pi) analyzes the readings to decide what to do next.
The purpose of this project was to develop a multi-channel, high-current motor control system. In many robotic applications, the ultimate deliverable power of servo motors is limited by built-in H-bridge circuitry, which is typically rated for low-current operation at 3.6 to 7.2 V. This project's objective was to develop a substitute, closed-loop control system capable of supplying up to 30 A at 24 V (0.72 kW) per channel. These channels would draw power from a supply that is independent of the logic circuitry.
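A closed-loop channel like this is commonly implemented as a PID loop around speed or current feedback, with the controller output driving the H-bridge PWM duty cycle. The sketch below is a generic discrete-time PID, not the project's controller; the gains, time step, and class name are all illustrative.

```python
class PID:
    """Minimal discrete-time PID controller (illustrative gains/structure)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                  # accumulate error over time
        deriv = (err - self.prev_err) / self.dt         # rate of change of error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Each channel would call update() at a fixed rate with the measured motor speed (or current) and translate the output into a PWM command for that channel's power stage.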
The objective of this project is to design and implement a security-monitoring robot that can detect surrounding motion and help the user watch over personal belongings. The robot is designed to be operated remotely, meaning the user can control it from a computer over a Wi-Fi network. The robot has two running modes: control mode and patrol mode. In control mode, the robot's movement is controlled by the user in real time, so it can be deployed to any position the user wants. In patrol mode, the user defines a route and the robot follows it back and forth.
Voice recognition has advanced rapidly in recent years, and many applications have incorporated it to provide more user-oriented services. In this project, a Raspberry Pi based smartphone (the "Piphone") is designed to use voice control to achieve two basic functions of a phone system: making calls and playing music.
The objective of this project was to create a robot that would navigate through a room and clean up any dust or dirt on the floor. The robot was to use a vacuum or sweeper roller as its cleaning mechanism. The robot also aimed to cover the entire floor of a given room, such that the entire surface would be cleaned.
The autonomous radio-controlled (RC) car uses a Raspberry Pi 3 to provide basic self-driving capabilities to our RC car. These capabilities include road tracking and following on straight and curved roads, stop-sign and traffic-light detection, and collision avoidance. The RPi3, along with a Pi camera module, streams video to a computer via Wi-Fi. Through machine learning and training, the computer takes in the video stream and performs image processing, from which road tracking and traffic-sign detection are achieved. Corresponding actions from road tracking and traffic-sign detection, such as forward, brake, left, and right, are determined and sent through a modified RC car controller connected to an Arduino. Also equipped with an ultrasonic distance sensor, the RC car continuously checks for possible obstacles in front and brakes if deemed necessary.
Fatigue has a huge impact on driving ability and was the main cause of more than 30% of traffic accidents in 2016. Combating driver fatigue is an urgent problem, especially in lower-end vehicles, which include only limited safety features. Our solution is an affordable, intelligent dual-lens dashboard camera that features eye/gaze tracking, lane-departure warning, vehicle parameter recording, and driving-behavior analysis, all in real time.
For this project, our goal is to use gestures to control a virtual instrument and play simple melodies. To play the instrument, the user does not need to touch any button or the screen. The instrument's sensors acquire the input signal when the user makes a gesture and send it to the corresponding GPIO pins; each gesture produces a different sound. We also designed an interface on the RPi's screen that looks like a piano. When the user gestures at the sensors, the corresponding key name appears on the on-screen piano in its own color. There is also a quit button, indicated by an arrow on the screen, that lets the user exit the program.
The goal of this project is to create a personal robot that gives the appearance of being present at a location other than the user's current location. In addition to providing a real-time image, the robot's remote location and movement can be altered through voice commands. Teleoperation of the robot is handled by a Raspberry Pi single-board computer through the Django web application framework. Under user direction, the robot can perform the following commands: move forward, move backward, pivot left, pivot right, and cease all movement. The robot's velocity can also be increased or decreased.
Construct a two-wheeled mobile robot that can traverse an indoor setting. Provide a touchscreen mounted at a comfortable height so that a user can request a delivery, or ask the robot to return to its home location after making a delivery. Implement an interface for a user to remotely operate the robot wirelessly. Implement an interface for a user to view a real-time video feed from a front-facing robot camera.
For our final project, we created a Raspberry Pi encryption cluster server, to which we could send files from our laptops to be encrypted or decrypted with a given key and number of rounds and automatically sent back. We wanted to accelerate the Tiny Encryption Algorithm (TEA) by exploiting parallel computation using three Raspberry Pis, Ethernet cables, and a router. To this end, we used OpenMP optimization pragmas and the MPICH library. We wanted to explore the different design spaces that the distributed computing environment offered, so we gathered performance data for the serial implementation, the OpenMP implementation, and the hybrid implementation (with both MPI and OpenMP) on a variety of file sizes and encryption rounds. With the hybrid implementation, we achieved roughly a 6x speedup over the serial implementation and a 2x speedup over the OpenMP implementation. In addition to the speedup, our final result offers an interface that allows the user to send files to be encrypted or decrypted remotely, implemented using TCP/IP socket programming in Python.
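For reference, each TEA round is only a handful of 32-bit arithmetic operations per 64-bit block, which is why the workload parallelizes well across blocks and nodes. Below is a plain-Python sketch of the standard TEA round function (the cluster's parallel C/MPI code is not shown here); the 128-bit key is four 32-bit words and the default is 32 round pairs.

```python
DELTA = 0x9E3779B9          # TEA's key-schedule constant
MASK = 0xFFFFFFFF           # keep all arithmetic modulo 2**32


def tea_encrypt(block, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1


def tea_decrypt(block, key, rounds=32):
    """Invert tea_encrypt by running the rounds in reverse."""
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1
```

Since every block is independent in this mode of operation, a file can be split into block ranges and each range handed to a different Pi or OpenMP thread.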
Our product is a model of the action potentials (voltages over time) of optical neurons. We used three LEDs as our neurons, a piTFT screen as our display, and three additional LEDs as indicator lights for neuron "spikes". Our neuron behavior is characterized by the integrate-and-fire model with lateral inhibition, chosen for its teaching value. For the physical integration, we protoboarded the entire circuit and constructed a simple wooden box with a sliding top to demonstrate the functionality of our neurons.
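The integrate-and-fire dynamics with lateral inhibition can be sketched in a few lines of Python. The parameters and function name below are illustrative, not those used in the project: each neuron leakily accumulates a constant input current, fires and resets when it crosses a threshold, and each spike suppresses the membrane voltage of the other neurons.

```python
def simulate_lif(inputs, steps=100, threshold=1.0, leak=0.9, inhibition=0.3):
    """Leaky integrate-and-fire neurons with lateral inhibition.

    inputs: constant input current per neuron.
    Returns, for each neuron, the list of time steps at which it spiked.
    """
    n = len(inputs)
    v = [0.0] * n                           # membrane voltages
    spikes = [[] for _ in range(n)]
    for t in range(steps):
        # leaky integration of the input current
        v = [leak * vi + ii for vi, ii in zip(v, inputs)]
        fired = [i for i in range(n) if v[i] >= threshold]
        for i in fired:
            spikes[i].append(t)
            v[i] = 0.0                      # reset after a spike
        if fired:
            # lateral inhibition: each spike suppresses the other neurons
            for j in range(n):
                if j not in fired:
                    v[j] = max(0.0, v[j] - inhibition * len(fired))
    return spikes
```

With constant inputs, the most strongly driven neuron fires most often and holds its neighbors below threshold, which is the competition the LEDs make visible.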
For this project, we integrated Google's new AIY Audio kit with the Raspberry Pi as a means to easily perform speech-to-text on the user's words and text-to-speech for the assistant's output. With this kit as our input/output device, we created our assistant software, which uses Python's Selenium library and the headless browser PhantomJS to search a popular cooking website, Allrecipes, for relevant recipes. We created an intuitive, state-based control flow for the voice assistant so that finding a recipe and cooking from it would be as natural as possible. With more detailed Selenium web navigation, we were also able to present users with detailed information about the cooking time, caloric content, number of servings, and more.
We built a robot car using the components given to us in Lab 3. After constructing it, we installed a camera and a robot arm on the robot car: we mounted the robot arm at the front and connected the camera module to the Raspberry Pi. We then wrote client and server code for both the Raspberry Pi and a Windows PC laptop to enable real-time communication between the laptop and the robot car. Control signals are sent from an Xbox 360 controller to the PC, which forwards them over the server-client interface to the Raspberry Pi, letting us control the robot car's movement and the robot arm. We also developed an interface to view the camera's video stream on our laptop in real time, so the camera feed can guide the robot's movement from a remote position.