A simple, wearable device enhances the real world with digital information
MIT
Retrieving information from the Web when you're on the go can be a challenge. To make it easier, graduate student Pranav Mistry has developed SixthSense, a device that is worn like a pendant and superimposes digital information on the physical world. Unlike previous "augmented reality" systems, Mistry's consists of inexpensive, off-the-shelf hardware. Two cables connect an LED projector and webcam to a Web-enabled mobile phone, but the system can easily be made wireless, says Mistry.
Users control SixthSense with simple hand gestures: framing a scene with your fingers and thumbs tells the camera to snap a photo, while drawing an @ symbol in the air lets you check your e-mail. The device is also designed to recognize objects automatically and retrieve relevant information: hold up a book, for instance, and it projects reader ratings from sites like Amazon.com onto the cover. With text-to-speech software and a Bluetooth headset, it can "whisper" the information to you instead.
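To give a sense of how such gesture commands might be wired up in software, here is a minimal, purely illustrative sketch in Python: a recognizer (not shown) labels each detected gesture, and a lookup table maps the label to an action. The gesture names and handler functions are assumptions for illustration, not part of Mistry's implementation.

```python
# Hypothetical sketch: mapping recognized gestures to SixthSense-style actions.
# Gesture labels and handlers are illustrative, not Mistry's actual code.

def snap_photo():
    print("Capturing the framed scene...")

def check_email():
    print("Fetching new messages...")

# A gesture recognizer (not shown) would emit one of these labels per gesture.
GESTURE_ACTIONS = {
    "frame": snap_photo,       # fingers and thumbs form a picture frame
    "at_symbol": check_email,  # '@' drawn in the air
}

def dispatch(gesture_label: str) -> None:
    """Run the action bound to a recognized gesture, ignoring unknown labels."""
    action = GESTURE_ACTIONS.get(gesture_label)
    if action:
        action()

if __name__ == "__main__":
    for g in ["frame", "at_symbol", "wave"]:
        dispatch(g)
```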
Remarkably, Mistry developed SixthSense in less than five months, and it costs under $350 to build (not including the phone). Users must currently wear colored "markers" on their fingers so that the system can track their hand gestures, but Mistry is designing algorithms that will let the phone recognize bare fingers directly. --Brittany Sauser
1. Camera: A webcam captures an object in view and tracks the user's hand gestures. It sends the data to the smart phone.
2. Colored Markers: Marking the user's fingers with red, yellow, green, and blue tape helps the webcam recognize gestures (see the sketch after this list). Mistry is working on gesture-recognition algorithms that could eliminate the need for the markers.
3. Projector: A tiny LED projector displays data sent from the smart phone on any surface in view--object, wall, or person. Mistry hopes to start using laser projectors to increase the brightness.
4. Smart Phone: A Web-enabled smart phone in the user's pocket processes the video data, using vision algorithms to identify the object. Other software searches the Web and interprets the hand gestures.
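To illustrate what the vision step on the phone might involve, here is a minimal, hypothetical sketch of color-marker tracking using OpenCV in Python. The HSV thresholds and overall structure are assumptions for illustration rather than Mistry's actual algorithms: each fingertip marker is isolated by color, its centroid located, and those positions over time are what a gesture interpreter would consume.

```python
# Rough sketch of color-marker fingertip tracking, assuming OpenCV 4 and a webcam.
# The HSV ranges below are placeholder values; real markers would need calibration.
import cv2
import numpy as np

# Approximate HSV ranges for the four marker colors (assumed values).
MARKER_RANGES = {
    "red":    ((0, 120, 120), (10, 255, 255)),
    "yellow": ((20, 120, 120), (35, 255, 255)),
    "green":  ((45, 120, 120), (80, 255, 255)),
    "blue":   ((100, 120, 120), (130, 255, 255)),
}

def find_markers(frame):
    """Return {color: (x, y)} centroids for each visible fingertip marker."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    positions = {}
    for color, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            positions[color] = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
    return positions

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)        # default webcam
    ok, frame = cap.read()
    if ok:
        print(find_markers(frame))   # fingertip positions, fed to a gesture interpreter
    cap.release()
```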