Human-Machine Interface: The Next Frontier

Mobile Internet occupies the headlines. No wonder. The 2001-2010 decade was ruled by desktop Internet, aka Web 2.0. Sometime around the year 2000, sitting at our desktop computers, we began our transition from humans to super-humans. Armed with a keyboard, a mouse, a screen and fast broadband connectivity, each one of us could win the "Who Wants To Be A Millionaire" show. Courtesy of Google.

Things are different in the Mobile Internet decade, 2011-2020. Broadband is either already here or coming. But to mobilize our super-humanity, we need to break free from screens and keyboards. They are cages keeping us from real mobile freedom. Any screen you pick is either too large (when you have to carry it) or too small (when you look at it). Ditto the keyboard. Both are dead ends in the evolution of human-machine interfaces.

Imagine a powerful, connected mobile computer. One you can communicate with freely, without having to look at it or type on it. Sitting in your pocket. One that does not have to be pulled out and opened to interact with. Wouldn't it become an integral part of yourself, augmenting your senses and decision processes?

Already today a number of technologies and even products are bringing us closer to real mobile freedom. Let me mention just a few to stimulate the imagination.

Direct, on-retina image projection. A company named Microvision, mentioned on this blog a number of times, is able to project a computer-generated image directly onto the retina, using tiny laser beams reflected off an ordinary-looking pair of glasses. A display you wear on your eye, yet one that does not block your vision. I wore the monochrome version of their product back in 2004. Now they are waiting for mass production of miniaturized green lasers to be able to project a full-color image.

Speaker-independent, continuous speech recognition. Have you read Andy Kessler's Grumby? You should. It is a great story of the potential next big thing after Google and Facebook, with speech recognition as the enabler. Once computers are able to listen and make sense of our voice interactions, they can do a lot. And I do mean a lot. A device able to sit, listen and understand what we say, without having to be trained or specially addressed, will make a difference. And it will free us from keyboards. At the moment we still seem far from achieving this goal. But chances are such a system will be delivered before 2020. And it will change our lives forever.
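
For the curious, here is a minimal Python sketch of what the application side of continuous listening can look like today, using the open-source SpeechRecognition package and a cloud recognizer. The post names no particular library, so treat this as an illustration of the interaction pattern, not the always-on ambient device it envisions:

    # A toy continuous-listening loop: capture audio from the microphone
    # and hand each utterance to a speaker-independent cloud recognizer.
    # Assumes: pip install SpeechRecognition pyaudio
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate once for room noise
        while True:
            audio = recognizer.listen(source)        # blocks until a phrase ends
            try:
                # Speaker-independent: no per-user training required
                print(recognizer.recognize_google(audio))
            except sr.UnknownValueError:
                pass  # unintelligible utterance; keep listening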

Direct mind interaction. We are far from sending information directly to our minds, simply because our minds cannot consciously receive information that bypasses our senses. But we are quite close to building a number of task-specific interfaces that read our minds. Products like the OCZ NIA (Neural Impulse Actuator) can read your brain waves and, once trained, even your eye movements. So using the Microvision on-retina display and the NIA, you can build a computer you can operate hands-free and in complete silence, navigating the virtual user interface projected on your retina by moving your eyes. And the NIA has been a consumer product for a couple of years now. Not perfect, but as a proof of concept an interface like that can be built inexpensively today. There are other products in development using similar technologies. This February NTT DoCoMo showcased an eye-controlled media player. The earphones were the actual electrodes picking up the brain signals, and with some extra filtering those signals were used to control playback of music coming from a cellphone in a pocket.
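
To make "some extra filtering" concrete, here is a rough Python sketch of the kind of pipeline involved: band-pass the raw electrode signal to isolate the slow eye-movement band, then threshold it into playback commands. The sampling rate, frequency band and threshold below are my illustrative guesses, not parameters from the NIA or the DoCoMo demo:

    # Illustrative eye-movement-to-playback pipeline. All parameters are
    # made up for the sketch, not taken from any shipping product.
    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 250  # samples per second; a plausible rate for consumer electrodes (assumption)

    def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
        """Butterworth band-pass; eye-movement energy sits roughly below 10 Hz."""
        b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
        return lfilter(b, a, signal)

    def to_command(window, threshold=40e-6):
        """Map a filtered one-second window to a command by peak amplitude sign."""
        peak = window[np.argmax(np.abs(window))]
        if abs(peak) < threshold:
            return None                       # no deliberate eye movement detected
        return "next_track" if peak > 0 else "previous_track"

    raw = np.random.randn(FS) * 1e-5          # placeholder for one second of electrode data
    command = to_command(bandpass(raw, 0.5, 10.0))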

A few pieces of food for thought to digest. Spark your imagination. 2020 will be different from 2010 :).
