My new favorite thing: perceptual computing. What we’re actually looking at here is hand-gesture, object-tracking and finger-tracking hardware, voice recognition and an SDK, and Intel are pushing it hard. There’s a dev kit and a free SDK, and I think it’s going to be an important part of computing control in the future, Minority Report style. The hands-on demo here is shown on an Ultrabook.
The obvious end-game here is that the sensors get built into laptops, but for now Intel and Creative are getting together to create hardware that can be used to develop applications. Developers will feed back on issues, there’s a chance to standardize on a common set of gestures, and in the meantime we expect the technology to get smaller and some of the processing to move into silicon. Gaming is one obvious choice (demonstrated in the video below), but there are many other applications.
If you’re a developer you can buy in for $150. You’ll get the SDK and then you’re off, with features like speech recognition (demo), facial analysis, close-range tracking and 2D/3D object tracking via a 720p cam, a QVGA IR cam, dual-array mics and a 6-inch to 3.25-foot operating range.
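For the curious, here’s a rough sketch of what a minimal gesture hook looks like using the C++ UtilPipeline helper that ships with the SDK. The identifiers below are taken from the beta SDK samples as best we can tell (EnableGesture, OnGesture, LoopFrames and the LABEL_NAV_SWIPE_LEFT gesture label), so treat them as approximate while the API is still settling:

    #include "util_pipeline.h"  // helper class bundled with the Perceptual Computing SDK

    // Subclass the pipeline and switch on the gesture module.
    class GesturePipeline : public UtilPipeline {
    public:
        GesturePipeline() { EnableGesture(); }

        // Callback fired by the pipeline each time a gesture is recognized.
        virtual void PXCAPI OnGesture(PXCGesture::Gesture *data) {
            if (data->active &&
                data->label == PXCGesture::Gesture::LABEL_NAV_SWIPE_LEFT) {
                // React to a left swipe here, e.g. advance a slide.
            }
        }
    };

    int main() {
        GesturePipeline pipeline;
        pipeline.LoopFrames();  // grabs and processes camera frames until stopped
        return 0;
    }

The nice part of this callback model is that your app just reacts to labeled events rather than crunching raw depth frames itself, which is also where a standardized set of gestures would pay off.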
If you’re interested, read the product brief (PDF) and comment below because we’re trying to get a few of these kits for you devs out there and we want to know what you’ll do with a freebie!
These kinds of things are nice in movies (Iron Man) or in some select real-world scenarios (e.g. presentations, TV news and kids’ toys), but I don’t really see much use for general consumers. Kind of like touch screens on clamshell notebooks.
I have to disagree, especially if you are suggesting there is no value in user interfaces beyond mouse/keyboard. Natural User Interfaces continue to get more sophisticated and will, to some extent, be what differentiates apps in the coming years. Devices like this (e.g. Kinect) are just the starting point and are geared more towards developers & enthusiasts. We are probably 16 months away from the cost & form factor being reduced to where they need to be. In the meantime, developers need to gain experience & figure out how best to leverage these capabilities in their current/future applications.
I’d rather not use any of these kinds of inputs for most personal computing needs. I’ll refrain from flailing my hands, arms and body until someone comes out with a holodeck.
Also, Joe didn’t suggest at all that there is no value in input beyond mice and keyboards; he even gave a few examples. In addition to what Joe suggested, these things can be useful in stores for virtually trying on clothes, shoes, hairstyles, etc. I guess I just see these mainly used in industry/professional situations and not much for consumer usage beyond games (Wii, Kinect, eye tracking, etc.) or that holodeck I’m waiting for.