Google Goggles (The Hardware Version)
This was meant to be a mid-term prediction when I first talked about it back in 2010 at TEDx Krakow. I said we would get this device, which I later named the internet glasses, by the end of the decade. And now reports are coming in that Google has taken the message seriously and is about to release a hardware implementation of Google Goggles.
I was not 100% accurate with my prediction, at least not according to the aforementioned 9to5Google blog. I had been predicting laser-based, on-retina display technology, and the report says they are using an OLED display. I had been predicting brain-wave sensing as the primary user input interface, and the report mentions a head-tilting system based on a gyroscope and accelerometer.
I still think down the road we will get a model implementing a micromirror / laser-based, on-retina display, because ultimately it does everything better: contrast, resolution, size, power consumption. And most importantly, it does not block the field of view. Of course we can make transparent OLED displays, but they are never 100% transparent when off. Laser light, on the other hand, can be reflected off fully transparent glass. See-through is a must in a wearable computer. Coming back from this year's CES I brought home the Sony 3D helmet (the HMZ-T1), based on opaque OLED displays. And while I have to say it is great for watching movies (forget your 3D TVs... this is a personal IMAX) and gaming, I find it difficult to use as a display for my laptop. I tried... but it feels weird not seeing the keyboard and the surroundings while doing usual computer-based work. A wearable display has to be transparent.
Now, speaking of input and control. It seems brain-sensing technology still needs a lot of work, although I am positive the "by the end of this decade" milestone will be achieved. But the truth is, today it is far from being a candidate technology for a retail product. So we have to use workarounds. Head tilting may sound weird, but I am sure that with proper adaptation of the user interface and navigation we can have a really nicely working system. Google has already mastered voice input, so dictation will certainly replace the keyboard. And for menu selection, highlights, etc., head tilting may be enough and, importantly, easy to learn.
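To illustrate what tilt-based navigation might look like under the hood, here is a minimal sketch. All of it is my own assumption, not anything Google has described: it estimates pitch and roll from a 3-axis accelerometer (gravity vector) and maps tilts past a dead-zone threshold to discrete menu commands.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (degrees) from a 3-axis accelerometer.

    Assumes the wearer's head is roughly stationary, so gravity
    dominates the reading; the axis convention is an assumption.
    """
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax**2 + az**2)))
    return pitch, roll

def menu_action(pitch, roll, threshold=15.0):
    """Translate a tilt into a discrete menu command.

    The dead zone (|angle| < threshold) ignores small, unintentional
    head movements so the menu does not jitter.
    """
    if pitch > threshold:
        return "next_item"      # tilt head down
    if pitch < -threshold:
        return "previous_item"  # tilt head up
    if roll > threshold:
        return "select"         # tilt head right
    if roll < -threshold:
        return "back"           # tilt head left
    return "idle"

# Example: head tilted noticeably forward advances the menu.
print(menu_action(*tilt_angles(0.4, 0.0, 0.9)))  # -> next_item
```

The dead zone is the key design point: without it, normal head motion would constantly trigger commands, which is exactly why this kind of interface needs careful UI adaptation to feel natural.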
I am really looking forward to this new Google gadget. They say they will be doing beta testing. What does it take to qualify?