Air Painter is a project that lets users make quick sketches using hand gestures. With their hands, users can freely manipulate a canvas and develop their drawings. Through computer vision, the system recognizes gestures and the hand's location to create the drawing. The project was inspired by our interest in computer vision and our desire to apply it in a visual project.
For this project, we created a head unit for vehicles that can display information not available on the standard gauges, display and record dashcam footage, and display a primitive navigation system. Automotive diagnostics are very important for keeping cars running properly. All cars sold in the USA after 1996 have an OBDII (On-Board Diagnostics II) port which can be used to access information about the vehicle, including live data and check-engine codes. Since most older cars lack this information or functionality on the dash, we decided to provide our own system.
This project builds a Smart Home Security System that is cheap, robust, and easy to install. It can be wall-powered or battery-powered, with an Internet connection (Ethernet or WiFi) as the only requirement. The goal of this project is to provide safety and peace of mind to the owner at an affordable cost. The system requires minimal maintenance and is very easy to operate.
As an alternative to expensive OEM data-logger solutions, the team constructed a Raspberry Pi-based embedded system that logs and displays data from vehicle and external sensors. Key functions include communicating with the vehicle's OBD-II (on-board diagnostics) port via a Bluetooth transceiver and wired communication with an external accelerometer, a GPS module, and a camera. The Pi logs and records the data and displays relevant numbers on the PiTFT, with a web interface developed to view previously recorded data and video.
For this project, we designed all the circuits and the software ourselves. The power supply needs two different rails: about 170 V for the nixie tubes and 5 V for the RPi. In theory, we can get 120 × 1.414 ≈ 170 V from the standard line voltage, so we use a full-bridge rectifier and a smoothing circuit to obtain 170 VDC. For the 5 V rail, we will use an external supply or a transformer, depending on the budget. The next problem is controlling the clock: to drive six nixie tubes we need to control 6 × 10 = 60 pins from the RPi. To achieve this, we used eight 74HC595 8-bit serial-in, parallel-out shift registers. We also used transistors that can tolerate high voltages as switches to control the tubes and to isolate the RPi from the tube circuit. We also want to add backlights at the bottom of the tubes to make the clock look fancier, or use the lights for other functions. On the software side, we need to generate the waveforms that drive the shift registers to control the numbers shown in the tubes, and we need an interface for setting the time and other functions.
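The 60-pin fan-out above can be sketched in software as a frame-packing step: eight daisy-chained 74HC595s give 64 outputs, of which 60 select cathodes. The mapping below (output t×10 + d drives digit d of tube t) and all names are illustrative assumptions, not the project's actual code.

```python
# Sketch of shift-register framing for the nixie clock, assuming eight
# daisy-chained 74HC595s (64 outputs) where output t*10 + d drives
# digit d of tube t. Illustrative only.

def pack_frame(digits):
    """Build a 64-bit frame lighting one cathode per tube.

    digits: list of six ints 0-9, one per tube.
    Returns the integer to shift out to the register chain.
    """
    frame = 0
    for tube, digit in enumerate(digits):
        frame |= 1 << (tube * 10 + digit)
    return frame

def frame_to_bits(frame, width=64):
    """Bits MSB-first, the order a bit-banged clocking loop would send them."""
    return [(frame >> i) & 1 for i in reversed(range(width))]

# Example: show 12:34:56 -> tubes display 1, 2, 3, 4, 5, 6
bits = frame_to_bits(pack_frame([1, 2, 3, 4, 5, 6]))
```

On the Pi, each bit of this frame would be clocked out on a data pin with a pulse on the shift-clock pin, then latched; only the packing logic is shown here.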
Our final project integrates the RPi, piTFT, various sensors, a servo, and a remote device into one smart door lock system that functions on its own. Through reliable software and hardware modules, it provides several methods of allowing or denying entry for user convenience in multiple scenarios.
In this project, I programmed a multi-threaded mobile application that dynamically updates its UI, establishes a Bluetooth pairing with a Raspberry Pi, and detects the phone's current free-fall state. While I only physically constructed one Pi attached to a Raspberry Pi 7” touch screen display, the software can accommodate a wide array of display options and Raspberry Pi models. The code running on the Raspberry Pi creates two processes that run on two cores: one receives the data sent by the phone in near real time, while the other updates the display at its own pace. Overall, the system works well, and it is quite interesting to see the kinds of plots you can create.
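The two-process split described above can be sketched with Python's multiprocessing module: one process stands in for the Bluetooth receive loop, the main process drains its queue at its own pace. All names are illustrative, not the project's actual code.

```python
# Minimal sketch of the receiver/display split: a child process pushes
# samples (synthetic data standing in for the Bluetooth stream) and the
# main process consumes them independently.
import multiprocessing as mp

def receiver(q, n):
    """Stand-in for the Bluetooth receive loop: push n samples, then a sentinel."""
    for i in range(n):
        q.put(i)          # in the real system: a sensor sample from the phone
    q.put(None)           # sentinel tells the consumer to stop

def collect(n):
    """Main-process side: spawn the receiver and drain the queue."""
    q = mp.Queue()
    p = mp.Process(target=receiver, args=(q, n))
    p.start()
    out = []
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item)   # in the real system: update the display
    p.join()
    return out

if __name__ == "__main__":
    samples = collect(5)
```

Because the two sides share only a queue, the display loop can redraw at whatever rate the screen supports without back-pressuring the receiver.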
We built a mobile robot that receives commands from a base station. The base station sends commands corresponding to detected hand gestures that it analyzes using OpenCV from a PiCamera input. The mobile robot also relies on the use of an accelerometer to correct errors in motion and an ultrasonic sensor to prevent collisions.
An aimbot is a piece of software that provides virtual inputs to turn the player to face an enemy. Our project was to create a hardware-based aimbot that monitors the game with a camera and moves a wireless mouse to target a creeper in Minecraft. The user can still navigate in the game while the aimbot keeps the creeper in the center of the display. This hardware aimbot does not require the user to install any software and can function with any monitor.
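Keeping the creeper centered amounts to a control loop: the pixel offset between the detected target and the frame center is converted into mouse movement. The proportional form below, with its gain and frame size, is an illustrative assumption rather than the project's actual tuning.

```python
# Sketch of one aiming step: given the target's pixel position in the
# camera frame, a proportional controller produces the mouse deltas that
# nudge it back toward center. Gain and frame size are assumptions.
def aim_step(target_px, frame_size=(640, 480), gain=0.3):
    """Return (dx, dy) mouse counts that move the target toward frame center."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    ex, ey = target_px[0] - cx, target_px[1] - cy   # pixel error from center
    return int(gain * ex), int(gain * ey)           # simple P-control

aim_step((640, 240))   # target at the right edge -> (96, 0)
```

In hardware, the returned deltas would be sent as wireless mouse reports; repeating the step each frame converges the target to the crosshair.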
While many MIDI controllers exist on the market, they are mostly proprietary hardware and are expensive due to the niche market. Many electric keyboards support MIDI output, but they are large and bulky. Other devices are smaller but come at a premium and rely on proprietary software. We aim to create a flexible, cheap, and simple platform using the Raspberry Pi.
This project aims to point a camera at a lecturer while also maintaining the ability to control the camera manually, record video to a local destination, or stream over a network. Parts were printed on a pair of inexpensive 3D printers and assembled. The team developed software to control the pan/tilt platform, and then added custom network control software.
Our goal in this project was to design a feature-packed digital theremin using a Raspberry Pi 3 Model B+. This entailed concurrently polling two distance sensors over an I2C connection, interpreting user controls, and generating and modulating sound waves without noticeable delay. The final product had to combine all these tasks into an expressive and responsive system that would be a joy to play.
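Turning a distance reading into a pitch is the core of the theremin's responsiveness. One natural mapping is exponential, so equal hand movements produce equal musical intervals; the range and base pitch below are illustrative assumptions, not the project's actual calibration.

```python
# Sketch of a distance-to-pitch map, assuming the pitch hand plays over
# roughly 50-500 mm and sweeps two octaves above A3 (220 Hz).
def distance_to_freq(d_mm, d_min=50.0, d_max=500.0,
                     f_base=220.0, octaves=2.0):
    """Exponential map: equal mm steps give equal musical intervals."""
    d = min(max(d_mm, d_min), d_max)       # clamp to the playable range
    t = (d - d_min) / (d_max - d_min)      # normalize to 0.0 .. 1.0
    return f_base * 2.0 ** (octaves * t)   # geometric, so pitch feels linear

distance_to_freq(50)    # bottom of the range -> 220.0 Hz
distance_to_freq(500)   # top of the range -> 880.0 Hz, two octaves up
```

A linear map would instead crowd the high notes into a few millimeters, which is why the geometric form is the usual choice for pitch control.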
A coin sorting machine is a machine that can sort a random collection of coins into separate bins. We intended to design a coin sorter that could sort the four most common coins in the U.S.: the quarter (25¢), dime (10¢), nickel (5¢), and penny (1¢). We were interested in using image processing to recognize and sort these four common U.S. coins. Our system was built with a Raspberry Pi, OpenCV, and machine learning algorithms.
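One simple stand-in for the recognition step is size alone: once the vision pipeline has measured a coin's diameter (for example from a detected circle's radius and a known mm-per-pixel scale), the four U.S. coins are separable by nearest nominal diameter. This classifier is an illustration, not the project's actual algorithm; the diameters are official U.S. Mint specifications.

```python
# Nearest-diameter classifier for the four common U.S. coins.
US_COIN_DIAMETERS_MM = {       # official U.S. Mint coin specifications
    "dime": 17.91,
    "penny": 19.05,
    "nickel": 21.21,
    "quarter": 24.26,
}

def classify_by_diameter(d_mm):
    """Return the coin whose nominal diameter is closest to the measurement."""
    return min(US_COIN_DIAMETERS_MM,
               key=lambda c: abs(US_COIN_DIAMETERS_MM[c] - d_mm))

classify_by_diameter(24.0)   # -> "quarter"
```

Because the dime and penny differ by barely a millimeter, a real system would combine this with color or learned features, as the project's use of machine learning suggests.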
The SmartKart is our take on the cashier-less shopping experience! The idea was inspired by the Amazon Go cashier-less shops available in selected states across the country; we wanted to make something similar and portable. The SmartKart uses a Raspberry Pi as the central processor, with a Pi Camera attached to the shopping basket to detect objects continuously. With a combination of Haar cascade classifiers trained to detect specific objects and the Scale-Invariant Feature Transform (SIFT), we accurately detect objects in real time as the user puts them in or takes them out of the shopping basket. The objects placed in the cart are displayed on the interactive Pi TFT screen, along with their costs and quantities. To enhance the shopping experience, we also enabled a touch checkout button on the Pi TFT screen that sends the user an email listing the items purchased during their visit.
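Behind the Pi TFT display sits simple cart bookkeeping: each detector event (an item placed in or taken out) updates quantities and the running total. The sketch below, including its made-up price catalog, is an illustrative assumption about that logic, not the project's code.

```python
# Sketch of cart state tracking driven by camera detection events.
PRICES = {"soda": 1.50, "chips": 2.25, "bread": 3.00}   # illustrative catalog

class Cart:
    def __init__(self):
        self.items = {}                 # item name -> quantity

    def detected(self, item, direction):
        """Apply one camera event: item placed in ('in') or removed ('out')."""
        qty = self.items.get(item, 0) + (1 if direction == "in" else -1)
        if qty > 0:
            self.items[item] = qty
        else:
            self.items.pop(item, None)  # never display negative quantities

    def total(self):
        return sum(PRICES[i] * q for i, q in self.items.items())

cart = Cart()
cart.detected("soda", "in")
cart.detected("soda", "in")
cart.detected("chips", "in")
cart.detected("soda", "out")
```

At checkout, the same item/quantity dictionary would be formatted into the emailed receipt.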
The Smart Home Hub sits on your desk and always shows your current investment portfolio, what you should do next, and what's going on in the room. You don't have to unlock anything to see it. It even has bonus features that show the outside weather, temperature, and humidity, including a forecast for the whole week, plus your room temperature so you know your AC is working. It is implemented as a pygame script and consists of multiple library and API calls.
Our goal is to build a Raspberry Pi with a camera that recognizes the face in front of it. The RPi then queries its internal database to find that person's favorite music style and uses existing music of that style to generate new music with deep learning.
The musical instrument box is equipped with a keypad that communicates with the Raspberry Pi over I2C so that users can play instrument sounds through an external speaker by pressing buttons, each of which corresponds to a certain note. The note list includes C4, D4, E4, F4, G4, C5, D5, E5, F5, G5, A5, B5, C6, D6, and E6, in both piano and guitar voices. The musical instrument box provides three modes for users: free mode, guide mode, and record mode. In free mode, users can play the preset notes freely. In guide mode, users can pick and load a song from the melody list and play the whole song by pressing buttons in the order the LEDs flash. In record mode, users can still play music just as in guide mode; the box records each note the user plays along with its duration, and can play back what the user played.
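Record mode's bookkeeping can be sketched as timestamped keypresses, from which playback recovers both the notes and the gaps between them. The clock is injectable so the logic is testable; names are illustrative, not the project's actual code.

```python
# Sketch of record-mode event capture and playback timing.
class Recorder:
    def __init__(self, clock):
        self.clock = clock      # callable returning current time in seconds
        self.events = []        # list of (note, press_time)

    def press(self, note):
        self.events.append((note, self.clock()))

    def playback(self):
        """Return (note, delay_before_note) pairs for the playback loop."""
        out, prev = [], None
        for note, t in self.events:
            out.append((note, 0.0 if prev is None else t - prev))
            prev = t
        return out

# Simulated session: three presses at t = 0.0, 0.5, 1.25 seconds
t = [0.0]
rec = Recorder(lambda: t[0])
rec.press("C4"); t[0] = 0.5
rec.press("E4"); t[0] = 1.25
rec.press("G4")
```

The playback loop would then sleep for each delay before sounding the corresponding note, reproducing the original rhythm.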
My project, digit gesture recognition, is designed to recognize hand gestures for the digits 0 to 5. The project is based on a Raspberry Pi: when the camera captures an image containing a hand, a machine learning algorithm predicts the digit and prints it on the screen along with the automatically selected hand region.
Due to the Covid-19 pandemic, maintaining social distancing and taking temperatures are extremely critical. However, taking temperatures by hand can easily break social distancing and expose workers to infection. We designed a thermometer robot that takes a person's temperature so that direct contact can be avoided and social distance maintained during the measurement. Healthcare workers can enter an ID and let the robot find the associated person using face recognition. The robot takes the temperature and displays it on both the piTFT and the desktop monitor.
Our goal was to solve the simultaneous localization and mapping (SLAM) problem using WiFi measurements. Specifically, we wanted to implement this paper, in which the authors solved SLAM through a Graph SLAM formulation with WiFi signal strength and movement data from an IMU. Due to the constraints of Graph SLAM, data is first collected and then SLAM is computed offline.
CloudCam is a cloud-connected camera capable of capturing and customizing images with a slew of digital effects computed on board a Raspberry Pi 3. Users can also run Machine Learning model predictions on captured photos through CloudCam’s Amazon Web Services Elastic Compute Cloud (AWS EC2) interface to detect human emotions or common objects. Finally, all captured shots can be optionally saved locally and/or uploaded to a remote AWS Simple Storage Service (S3) server for later use!
PlutoBoard is a model prototype of a climbing wall route-management system, which consists of an array of LED and button pairs corresponding to climbing holds and a touchscreen UI. Typically, climbers use tape, chalk, or their memory to remember which holds are included in each climbing route, but on the PlutoBoard, the buttons and LEDs are used in tandem to set routes: LEDs indicate whether a specific hold is included, and buttons toggle the on-state of their corresponding LEDs. By interacting with the UI, climbers can create user profiles, save and grade routes, generate randomized routes by difficulty, and access or edit routes later.
The goal of Pool Projection was to provide a helpful aim assist for pool players, helping them sink balls in various game states. Pool Projection uses an overhead picture of the pool table to determine the location of the balls and pool cue in the image. Using this information, the program presents how each of the balls would interact if the cue ball were struck at different trajectories.
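A plausible core of an aim assist like this is the standard "ghost ball" construction from billiards geometry: to send the object ball toward a pocket, the cue ball's center must arrive one ball diameter behind the object ball along the pocket line. This is a generic calculation, shown as an illustration rather than the project's actual code.

```python
# Ghost-ball aim point: where the cue ball's center must land to pot the
# object ball into a given pocket, for balls of equal radius.
import math

def ghost_ball(object_ball, pocket, radius):
    """Return the (x, y) the cue ball's center must reach at contact."""
    ox, oy = object_ball
    px, py = pocket
    dx, dy = px - ox, py - oy
    dist = math.hypot(dx, dy)
    # step back from the object ball by two radii, opposite the pocket direction
    return (ox - 2 * radius * dx / dist, oy - 2 * radius * dy / dist)

ghost_ball((0.0, 0.0), (10.0, 0.0), 1.0)   # -> (-2.0, 0.0)
```

Projecting the line from the cue ball through this point onto the table image gives the trajectory overlay the player sees.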
PointyBot was developed by Kristina Nemeth and Cuyler Crandall, working remotely from Florida and Ithaca, respectively. The hardware for PointyBot includes a dual-axis pointer driven by a pair of servos in conjunction with two absolute encoders, alongside a GPS and an IMU which provide the sensing needed to determine the system's position and orientation. The system runs on a Raspberry Pi 3, and users interact with it via touchscreen commands on a PiTFT display. In software, PointyBot executes two code loops in parallel: a foreground loop, which users interact with, that constantly updates the position of the module and of its targets, and a background loop that keeps the pointer oriented toward the module's current target.
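The pointing math reduces to a standard navigation calculation: given the module's GPS fix and a target's coordinates, the initial great-circle bearing (degrees clockwise from north) tells the pan axis where to aim. This is the textbook formula, not PointyBot's actual code.

```python
# Initial great-circle bearing from point 1 to point 2, in degrees [0, 360).
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Bearing to aim the pan axis, measured clockwise from true north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360.0

bearing_deg(0.0, 0.0, 1.0, 0.0)   # target due north -> 0.0
bearing_deg(0.0, 0.0, 0.0, 1.0)   # target due east  -> 90.0
```

Subtracting the IMU's heading from this bearing gives the servo's pan error; a similar elevation calculation drives the tilt axis.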
Our project uses a web server to allow friends to send short messages to be displayed on an interactive LED matrix dashboard. This project consists of two nearly identical hardware setups, each located in a different physical house. Each setup includes one Raspberry Pi 3, one RGB LED matrix, and one distance sensor.
This project is the second version of the BeeCam project from Spring 2020. We aim to build a box-like device to observe bees' behavior. Our goal is to identify each bee by its numerical tag and record images of bees while they are in the tunnel. Two Raspberry Pis are synchronized to provide the same timestamp for images.
The objective of this project was to showcase a novel and inexpensive way of using LiDAR technology in a low-power, portable embedded system. A map of a room or multiple rooms could be made automatically by autonomously navigating a robot carrying the LiDAR, without colliding with any obstacles it encounters. As a result, these tasks can be accomplished without any additional sensor feedback and can aid in visually understanding what indoor surroundings look like.
It’s really hard to imagine what a color blind person sees in their daily life, so we built a color visualizer to not only let people without color blindness see how those with these deficiencies view the world but also help people with these deficiencies distinguish colors better. Our goal is that through our project, a color blind person can pass a color blindness test and someone with normal vision can better empathize with people by experiencing sight with all eight types of deficiencies.
Our objective is to create an audio looper with multiple channels, in which one can record short sound bites to be overlaid with one another for quick music production. We also want our project to have a keypad that can be used to play a variety of instruments that can also be recorded and looped over one another.
Translation is needed when people visit countries whose language they have not learned. Typing sentences or words into a translator every time translation is needed is time-consuming. Hence, we want to capture text from different kinds of input (such as images and voice) instead of typing it in each time, which is far more convenient. We use an online translation service, since such services perform better thanks to large-scale machine learning training.