6.111 | Spring 2006 | Undergraduate

Introductory Digital Systems Laboratory

Projects

Student projects involve teams of one to three students. Each team chooses its own project, though topic suggestions and guidance for scope are available from the course staff. A wide variety of projects are undertaken each semester. A selection of presentations, final reports, and demonstration videos are included in this section. The project presentations occurred early in the semester and capture the projects in the design phase. The reports and videos show the final results of the projects. The student work in this section is courtesy of the students listed and used with permission.

Project Resources

Project Information (PDF)

How to Make your Project Work (PDF)

Available Hardware (PDF)

Video: Lab Kit Overview by Nathan Ickes (MP4 - 3MB)

Final Projects

To view the abstracts, final presentations, and reports, click on the project links.

Group # GROUP MEMBERS FINAL PROJECTS VIDEOS
1 David Blau, Uzoma Orji, Reesa Phillips “Let’s Take This Outside” Boxing (MP4 - 9MB)
2 Xinpeng Huang, William Putnam Laser Pointer Mouse (MP4 - 11MB)
3 Hana Adaniya, Shirley Fung Drum Machine (MP4 - 10MB)
4 Bashira Chowdhury, Cheryl Texin Fingerprint Verification System (MP4 - 8MB)
5 Igor Ginzburg 3-D Pong (MP4 - 7MB)
6 Leon Fay, Miranda Ha, Vinith Misra A Novel Approach to Active 2D Sonar (MP4 - 11MB)
7 Masood Qazi, Zhongying Zhou A Voice Training Karaoke Machine (MP4 - 8MB)
8 Mariela Buchin, WonRon Cho, Scott Fisher Have a Safe Flight: Bon Voyage (MP4 - 20MB)
9 Chris Buenrostro, Isaac Rosmarin, Archana Venkataraman A Two-Input Polygraph (MP4 - 17MB)
10 Noel Campbell, Vivek Shah, Raymond Tong Video Surveillance System  
11 Michael Huhs, Sanjay Jhaveri Snapshot  
12 Helen Liang, Wendi Li, David Meyer, Lucia Tian Piano Dance Revolution (MP4 - 16MB)
13 Cameron Lewis, Xin Sun Audio-Driven Laser Tetris (MP4 - 8MB)
14 Annamaria Ayuso, Sharmeen Browarek MIT Dance Dance Revolution (MP4 - 11MB)
15 Matthew Doherty Local Decoding of Walsh Codes to Reduce CDMA Despreading Computation (MP4 - 5MB)

Groups: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15

“Let’s Take This Outside” Boxing

By David Blau, Uzoma Orji, and Reesa Phillips

Abstract

“Let’s Take This Outside” Boxing is a one-player or two-player game in which fighters box to the death. The user interface consists of a camera and gloves equipped with accelerometers. The camera detects the position of the gloves, and the accelerometers measure the force of a punch. The position of the boxer is inferred from the on-screen coordinates of the gloves. When a punch is detected, the control unit determines whether the punch hit the opponent and how forceful it was; the more forceful the punch, the more damage is done. The game continues until one player loses all of her health. The output displays an image of the gloves, the opponent, and a health meter. The fighters, their health meters, and the boxing ring are drawn using sprites read from ROM.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF - 1.8 MB)


Video Surveillance System

By Noel Campbell, Vivek Shah, and Raymond Tong

Abstract

This project implements a networked video surveillance system in digital logic. It allows a user to view video from a remote camera on a VGA monitor by capturing the camera data, encoding it, transmitting it wirelessly, and subsequently receiving, decoding, and displaying the data. The project is divided into three main components: video capture/display, data encoding/decoding, and wireless transmission/reception. Each component is put through a comprehensive series of tests using ModelSim and simulated data. Once each component passes its design and functional tests, all three are connected to transmit video data wirelessly.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


Snapshot

By Michael Huhs and Sanjay Jhaveri

Abstract

The goal of this final project is to implement a digital camera using a Xilinx Virtex II FPGA that is built into the 6.111 Labkit. The FPGA will interface with a video camera, and output the digital signal to a VGA monitor. The user will then be able to capture the displayed image by pressing a button on the labkit. Once the image has been captured, it will be compressed using a discrete cosine transform (DCT) in order to be stored and viewed at a later time. In addition to image compression, the user will be able to perform zoom and rotate operations as well as perform some simple image filtering. Implementing Snapshot will involve using the onboard ZBT memory of the 6.111 Labkit to store the video signal. This project should be a natural continuation of 6.111 Labs 3 and 4, as it builds upon the concepts of VGA display and memory. The lab will be broken up into: extracting the received video signal, displaying the signal to the monitor, and storing the video signal in the onboard Labkit memory.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


Piano Dance Revolution

By Helen Liang, Wendi Li, David Meyer, and Lucia Tian

Abstract

Inspired by the FAO Schwarz piano, Piano Dance Revolution consists of a large piano keyboard projected onto the floor. The user interacts with the piano by stepping on the keys. At every step, the activated keys light up and the corresponding notes play from the speakers. This is accomplished through three major components: the projectors, which project the image of the piano onto the ground and change the color of the keys upon activation; the motion detection system, which detects the location of the player’s feet and determines the key stepped on; and the audio output, which plays the activated note.

There are two modes of operation: playback mode and game mode. In playback mode, the user can play any notes he or she chooses. In game mode, the piano plays a pre-recorded song and the user must match his or her steps to the appropriate notes, represented by lit key projections. The timing and accuracy of the user’s steps determine his or her score.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


Audio-Driven Laser Tetris

By Cameron Lewis and James Sun

Abstract

The purpose of this project is to demonstrate an advanced version of the classic arcade game Tetris. Our version boasts a much more dynamic and random game-play experience than conventional implementations, bringing new meaning to the concept of background music. Users will manipulate the falling objects as in the traditional game, but there will be several twists. The entire game will be driven by music and then projected onto a large screen using a laser raster system. In addition, more detailed game graphics along with other audio-based effects will be displayed on a VGA monitor connected to the FPGA Labkit.

The audio-based elements of the game will operate in the following manner: the drop rate of the game pieces will be controlled by music frequencies and magnitudes. The audio input to the system will be connected to the AC97 audio decoder on the Labkit which will digitize the analog signals and pass them into the FPGA for processing.

The video component will comprise two display elements. The first is a high-resolution VGA imaging module, which will display the complete Tetris game with scores, settings, and other game details. The second is the laser display, which will interface with the core graphics output of the Tetris game through a separate laser controller module, whose job is to rapidly modulate the laser output at the appropriate times so as to create a low-resolution, scanning raster image of the Tetris playing field.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


MIT Dance Dance Revolution

By Annamaria Ayuso and Sharmeen Browarek

Abstract

Our goal for this project is to create a newer and better version of DDR. We plan to use infrared sensors to track the footwork of game players, removing the dependence on a physical console pad. The game itself is very similar to a standard DDR game, with a main menu and a game-play screen. There will be one song on the menu, due to memory restrictions, but three difficulty levels for the song. Also, the accuracy of each player’s performance will be rated at the end of the level. The objective of the game remains the same: to sync footsteps with the pattern of arrows displayed on the screen. Instead of arrows, we plan to have symbols that represent MIT (e.g., pi, the dome, a beaver footprint) ascending on the screen. The game will also keep track of perfectly synchronized steps and output the highest number of consecutive, perfectly synchronized steps at the end of a round. We will create a dynamic background on the right-hand side by using a series of pictures that change in progression to look like a dancing beaver. Our version of the game will revolutionize DDR as we know it today. We are taking the first step toward creating a portable game that people can enjoy anywhere, anytime.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


Local Decoding of Walsh Codes to Reduce CDMA Despreading Computation

By Matthew Doherty

Abstract

Chan et al. invented several novel classes of algorithms to reduce computation in software implementations of the IS-95 reverse link by exploiting software’s inherent flexibility over hardware. The algorithms work by processing only a fraction of the despread signal to decode Walsh codewords, relying on an outer feedback loop to choose the fraction of despread signal to process in order to maintain a target bit error rate. Because FPGAs are significantly more flexible than ASICs, the same algorithms can be implemented in an FPGA to reduce computation in hardware. This reduction in computation results directly in power savings, because an FPGA’s power consumption depends heavily on its gates’ switching activity. We propose to implement the “novel and elegant” Generalized Local Decoding of Walsh codewords in an FPGA to determine the potential power savings of the algorithm in hardware.

In doing so, despread codewords from a test data set are processed by the Walsh decoder. Their output bit streams are checked against the known optimal decoding and the bit error rate is determined. This bit error rate is fed back to the decoder, which uses it to choose the fraction of the despread signal to process.

To validate the results, the test data set can be chosen from several of varying signal-to-noise ratio (SNR). In addition, the outer feedback loop can be turned off, so the corresponding bit error rate of the open loop system can be analyzed.

The results are shown in real time on the LCD display, simultaneously comparing the bit error rates and power usages of the optimal decoder, feedback-controlled suboptimal decoder, and open loop suboptimal decoder. Each codeword is processed slowly enough so that it is apparent that the feedback loop is keeping the bit error rate constant and using minimal power.
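The decoding arithmetic described above can be sketched in software. The project itself targets an FPGA, so the Python fragment below only illustrates the idea: Walsh codewords are rows of a Hadamard matrix, and a suboptimal local decoder correlates just a prefix of the despread chips against each codeword. The 64-chip length, the prefix-based fraction scheme, and all names here are illustrative assumptions, not Chan et al.’s actual algorithm.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: the rows of H are the length-n Walsh codewords (+/-1)
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def decode_walsh(received, fraction, H):
    # Correlate only the first `fraction` of the despread chips against each
    # codeword prefix and pick the best match (suboptimal local decoding).
    n = H.shape[0]
    m = max(1, int(n * fraction))
    scores = H[:, :m] @ received[:m]
    return int(np.argmax(scores))

H = hadamard(64)
tx = H[5].astype(float)            # noiseless despread codeword number 5
assert decode_walsh(tx, 1.0, H) == 5    # full correlation: optimal decoding
assert decode_walsh(tx, 0.25, H) == 5   # only 16 of 64 chips processed
```

With a short prefix, several codewords share identical prefixes and tie in score, so under noise a smaller fraction trades accuracy for computation; this is exactly why the real algorithm closes a feedback loop on the measured bit error rate to choose how much of the signal to process.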

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


Laser Pointer Mouse

By Xinpeng Huang and William Putnam

Abstract

The purpose of this project is to design and implement a laser pointer mouse. When giving a PowerPoint presentation, or on any other occasion when it is inconvenient to sit in front of the computer, users would like a way to control it remotely. The laser pointer mouse allows lecturers and presenters to point at the screen and, with the press of a button, move the mouse cursor to the location of the laser, without ever touching the computer or mouse. A few more buttons allow the user to perform wirelessly transmitted left, right, and double clicks. Support for drawing over the screen, e.g., arrows and circles for increased presentation effectiveness, will be implemented as time permits. The system will be implemented in Verilog and realized on the FPGA in the 6.111 labkit.

Project Files

Presentation (PDF)

Report (PDF - 5.7 MB)

Report Appendix (PDF)


Drum Machine

By Hana Adaniya and Shirley Fung

Abstract

The pattern sequencer allows the user to compose rhythms from a sound bank of audio samples. The resulting loop is a combination of sixteen different audio channels. Each channel can be programmed individually for different beats by selecting a particular pattern. The sixteen audio channels are then combined using additive mixing based on a weighted average algorithm.

Each of the 16 channels is assigned a one-second, 16-bit audio sample. These samples are recorded at 44.1 kHz and stored in about 1 megabyte of memory. The pattern has a template of sixteen sixteenth notes. A row of sixteen squares on the video interface represents the on/off state of each sixteenth note of a channel. The user programs the different beats by turning the sample on or off at each sixteenth note. Along with the current state of the beat sequence, the video interface displays the name of the audio sample as well as the overall beats per minute (BPM) and the FFT of the master output audio signal.
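The weighted-average mixing step can be sketched in software. The actual design is digital logic on the labkit; this Python fragment only illustrates the arithmetic, assuming equal weights of 1/16 per channel (for N = 16 the divide is a simple arithmetic shift in hardware).

```python
import numpy as np

def mix_channels(channels):
    """Additively mix equal-weight 16-bit channels.

    Each channel is weighted by 1/N, so the mixed sum can never overflow
    the 16-bit output range. This is an illustrative stand-in for the
    weighted-average mixer the abstract describes, not the actual gateware.
    """
    acc = np.sum(channels.astype(np.int32), axis=0)   # widen before summing
    return (acc // channels.shape[0]).astype(np.int16)

# Two opposite full-scale channels plus 14 silent ones average to zero
chans = np.zeros((16, 4), dtype=np.int16)
chans[0, :] = 16000
chans[1, :] = -16000
print(mix_channels(chans))  # → [0 0 0 0]
```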

Project Files

Presentation (PDF - 1.1 MB)

Report (PDF - 1.3 MB)

Report Appendix (PDF)


Fingerprint Verification System

By Bashira Chowdhury and Cheryl Texin

Abstract

We will design and implement an image recognition system to identify fingerprints based on a given database. We will begin by inputting simple images and checking that the system accurately identifies those images. As the system is developed, more complex images can be used. The final stage of the project will involve identifying an individual’s fingerprint based on standard points of identification used in common practice.

This project consists of a few stages. The initial stage will involve creating a database in memory for the image comparison. The next stage will be developing an interface between the camera and a RAM to store the image that needs to be identified. Once the image has been loaded into the system, it must be processed to extract the characteristics used for comparison against the database. The processed image will then be compared to the images in the database to determine the degree of similarity, and the most similar image will be presented to the user along with a measure of the quality of the match.

The image processing will involve a series of filters in the spatial domain. There will be an edge-detection filter to sharpen the image, prior to binarization of the fingerprint. Another filter will select the unique components of the fingerprint. The database will contain the post-processed fingerprint information to minimize the size of the stored data. The database size will be limited to the memory of the labkit, which will be sufficient to demonstrate the functionality of the fingerprint matching system.
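As a toy illustration of the binarization step, the Python sketch below thresholds each pixel against its local block mean. The block size and threshold rule are assumptions for illustration; the students’ actual filter chain is not specified in the abstract.

```python
import numpy as np

def binarize(img, block=8):
    """Binarize a grayscale image by comparing each pixel to the mean of its
    block -- a simple local-threshold stand-in for the binarization stage."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y+block, x:x+block]
            out[y:y+block, x:x+block] = (tile >= tile.mean()).astype(np.uint8)
    return out

# Dark left half maps to 0, bright right half to 1
img = np.tile([0.0, 0.0, 200.0, 200.0], (4, 1))
b = binarize(img, block=4)
assert b[0, 0] == 0 and b[0, 3] == 1
```

A local (per-block) threshold rather than a single global one is the usual choice for fingerprints, since ridge contrast varies across the print.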

The work will be split into two components. Bashira will be responsible for interfacing the camera to the labkit, as well as managing the data storage in memory. Cheryl will implement the image processing to isolate the data for the analysis and the matching. Once the fingerprint recognition scheme is working, both team members will work to enhance the identification interface as time allows to create a visually appealing result.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


3-D Pong

By Igor Ginzburg

Abstract

3D Pong takes MIT Pong to the next level with a 3D interface. At the heart of the project is a hardware-based 3D renderer. The renderer takes in a 3D model, specifically a sequence of colored triangles in 3D space, and produces a 2D SVGA image. The view is controlled through a trackball mouse, which specifies rotations, translations, and zoom. While the renderer can take in pre-built models stored in on-chip ROM, during gameplay a model of the current board is generated dynamically.

The project contains several high-level modules in addition to the renderer. A trackball driver connects to the PS/2 interface and provides rotation, translation, and zoom inputs, along with a possible light-source input, to the renderer. A game-logic module provides ball and paddle coordinates to the game-model builder, which turns them into a 3D model of the game field. The 2D image produced by the renderer is buffered in a double-buffer module that interfaces with the labkit SRAM. An SVGA module uses these buffered images to generate the monitor outputs.

The renderer is pipelined and divided into several submodules: a rotator, a translator, a triangle shader, a projector, and a pixel painter. The rotator module uses matrix multiplication to rotate triangle vertices about the origin. The translator module uses signed subtraction to recenter the points about a new origin. The shader module calculates a vector normal to the triangle’s plane, then takes a dot product with the light-source vector to calculate the proper color for the entire triangle. The projector module uses the z-coordinate of each point to rescale the x and y coordinates, based on a given lens focal length. Finally, the pixel painter enumerates the pixels in the interior of the triangle, storing their colors and z-coordinates to the buffer module.
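Two of the pipeline stages reduce to simple arithmetic that can be sketched numerically. The Python fragment below (purely illustrative; the renderer itself is gateware, and the rotation axis and focal length are arbitrary assumptions) shows the rotator’s matrix multiply and the projector’s focal/z rescaling.

```python
import numpy as np

def rotate_y(v, theta):
    """Rotate a vertex about the y-axis -- one of the matrix multiplies
    the rotator stage performs."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, s],
                  [0, 1, 0],
                  [-s, 0, c]])
    return R @ v

def project(v, focal):
    """Perspective projection: rescale x and y by focal/z, as in the
    projector stage."""
    x, y, z = v
    return np.array([focal * x / z, focal * y / z])

# A vertex twice as far away lands half as far from the screen center
p_near = project(np.array([1.0, 1.0, 2.0]), focal=2.0)
p_far = project(np.array([1.0, 1.0, 4.0]), focal=2.0)
print(p_near, p_far)  # → [1. 1.] [0.5 0.5]
```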

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


A Novel Approach to Active 2D Sonar

By Leon Fay, Miranda Ha, and Vinith Misra (a.k.a The Tunafish Team)

Abstract

This project will implement a phased-array sonar system that creates a two-dimensional map of the environment directly in front of the array and displays it on a screen for the user. The system will consist of one acoustic transmitter and a number of receivers placed in an appropriately spaced linear array. Analysis of the phase relationships among the received signals (to determine angle) and of their delays (to calculate distance) will allow a two-dimensional map of the environment to be drawn.

The project will consist of three parts: a signal processor, a master controller, and a display module. The signal processor will manipulate the phases of the different received signals to determine the distance to the target at a certain angle. The master controller dictates when data is gathered, processed, and post-processed. The display module will make a two-dimensional color-coded map that shows distance and highlights edges. There will also be alternate display modes that show waveforms of received signals to help with debugging.
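The two measurements the abstract mentions reduce to short formulas: round-trip delay gives distance, and the extra delay between adjacent receivers gives angle of arrival. The Python sketch below assumes sound in air and a far-field target; neither assumption is stated in the abstract.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed medium)

def target_distance(round_trip_s):
    """One-way distance from a round-trip time of flight."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def arrival_angle(delay_s, spacing_m):
    """Angle of arrival (degrees) from the extra delay between two
    adjacent receivers in the linear array (far-field approximation)."""
    return math.degrees(math.asin(SPEED_OF_SOUND * delay_s / spacing_m))

print(round(target_distance(0.01), 3))           # → 1.715
print(round(arrival_angle(0.0005, 0.343), 1))    # → 30.0
```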

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


A Voice Training Karaoke Machine

By Masood Qazi and Zhongying Zhou

Abstract

A karaoke machine will be implemented for the purpose of vocal training. It will have as its two primary inputs the user’s singing and the notes from the sheet music of the vocals for a selected song. The user’s voice will be recorded through a microphone interfaced to an analog-to-digital converter, and the recording will be analyzed with digital filters to determine its local spectral content. The system will then compare the user’s singing, by pitch and rhythm, with what is described in the sheet music. This comparative analysis will be presented in a meaningful way to the user on a VGA display. Finally, the user will also have access, through headphones, to audio outputs such as the sequence of correct tones and his or her recorded voice.
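One inexpensive way to measure spectral content at a handful of note frequencies is the Goertzel algorithm, which computes the power at a single frequency without a full FFT. The Python sketch below is purely illustrative; the abstract does not specify which digital filters the students actually used, and the sample rate here is an assumption.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Signal power at one frequency via the Goertzel recurrence -- a cheap
    per-tone alternative to a full FFT for pitch scoring."""
    k = round(len(samples) * freq / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# A 440 Hz tone shows far more power at 440 Hz than at 660 Hz
fs, n = 8000, 800
tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(n)]
print(goertzel_power(tone, fs, 440) > 10 * goertzel_power(tone, fs, 660))  # → True
```

Running one Goertzel filter per expected note gives the pitch comparison the abstract describes, with far less hardware than a full spectrum analysis.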

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


Have a Safe Flight: Bon Voyage

By Mariela Buchin, WonRon Cho, and Scott Fisher

Abstract

This project is a “smart flight vest” that detects movements of the human body and translates them into parameters that determine the pitch and roll of an airplane in flight. The throttle of the plane is determined by a pressure sensor located under the pilot’s foot. The pilot flying the plane stands in front of a monitor that displays the main features of an airplane console, including an attitude indicator, a compass, and an altitude and vertical velocity display.

There are two devices embedded in the flight vest to detect the horizontal and forward tilt of the upper body of the pilot, corresponding to the roll and pitch of the plane. The outputs of these devices are converted to digital signals and sent to a physics module that determines the orientation, altitude, direction, vertical velocity, and position of the airplane.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)


A Two-Input Polygraph

By Chris Buenrostro, Isaac Rosmarin, and Archana Venkataraman

Abstract

In this project we propose to design and implement a two-input polygraph using the Xilinx Virtex II FPGA. The two physiological signals that we will focus on are pulse rate and skin conductivity. These inputs were chosen because they are fairly easy to measure and interpret. During times of emotional stress, such as when the subject is forced to lie, his pulse rate increases. Likewise, the subject is more likely to sweat, increasing his skin conductivity.

The project will be divided into three portions: the Physiological Sensor Block, the Digital Control Block, and the Display Block. The sensor block will focus on extracting data from the analog sensors used to measure pulse and skin conductivity. It will involve assembling the two sensors needed for the project, interfacing with an analog-to-digital converter (ADC), and storing the results in a Block RAM. The digital control block is the major control unit for the project. Although it will provide for both user interaction and memory access, most of this section will focus on analyzing data from the sensor. Since the subject is asked several control questions which serve as a baseline, the system must be able to identify different types of questions and algorithmically analyze the physiological signals appropriately. The display block will focus on outputting the measurements and the results. Just as conventional polygraphs record the physiological data using pen and paper, we would like to display the sensor outputs and the computer’s decision on the monitor. This portion will provide for additional features such as screen capture, so data can be compared visually.

Project Files

Presentation (PDF)

Report (PDF)

Report Appendix (PDF)
