For some time now, I have been of the opinion that some of the next big advances in computing will be in wearable computing, particularly in devices that enable augmented reality. So when Google Glass was announced I was excited. Kudos to Google for pursuing something truly innovative and making it happen. The product is under development, and there is a developer edition for sale for those so inclined. I imagine one of the problems the Glass team will be dealing with is the UX issue of the user having to focus separately on the device's display whenever they want to consume information from it (see 1:03 in the video below).
There seems to be technology out there that can tackle this issue: Innovega claims to be able to let users focus on objects in two different planes simultaneously. Such a capability would be needed for true "terminator" vision, where information is superimposed on your view of the world without you having to focus on anything special. In any case, Google says Glass is in the "skunkworks pre-alpha" phase, so they may well solve the problem in the coming months.
Update 01/04/2013: In an interview, Google engineer Babak Parviz revealed that Google will not be prioritizing the augmented reality possibility (information overlaid on the physical landscape) in Glass. Not immediately, at least.
The stated aim is to "allow people to connect to others with images and video" for "pictorial communications" and to "allow people to access information very, very quickly." Interesting!