Messy Signals
Machine systems that anticipate human expectations are a long-standing dream: a smart home that anticipates its inhabitants' needs, an automatic gearbox that learns its driver's style, a music service that streams music matching the listener's mood.
This concept, however, is far from reality. Smart homes are often irritating, cars with manual transmissions are still popular, and people like to hand-pick the music they listen to.
There are two reasons for that. The first is feedback, or the lack of it. Programming the behavior of a smart home is extremely tedious unless you are willing to do it all yourself. I did exactly that in my own smart home several years ago and ended up with hundreds of rules and exceptions driving a very complex decision engine. For several months I fine-tuned the rules based on careful observation of expectations and everyday annoyances. I think it works well now, but there is no way that system could be turned into a mass-market product; it is simply too complex. Of course you may say, "it should learn by itself." True, but where does the feedback come from? To give truly valuable feedback I would have to wear a brain-scanning headband feeding signals to the system, because all the other signals humans give are messy at best. Even humans find it difficult to learn to correctly interpret the signals other humans give. With a limited variety of sensors and still limited intelligence, machines have a hard time anticipating what we really want.
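To illustrate why such hand-written automation grows unmanageable, here is a minimal sketch of a priority-ordered rule engine with exceptions. The rule names, conditions, and structure are hypothetical, not the actual system described above; the point is that every new habit demands yet another higher-priority exception:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# A minimal sketch of a rule engine: each rule pairs a condition with
# an action, and "exceptions" are simply higher-priority rules that
# match first and suppress the default behavior.

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # tests the current sensor state
    action: str                        # what the home should do
    priority: int = 0                  # exceptions get higher priority

def decide(state: dict, rules: list) -> Optional[str]:
    """Return the action of the highest-priority rule that matches."""
    for rule in sorted(rules, key=lambda r: -r.priority):
        if rule.condition(state):
            return rule.action
    return None

rules = [
    Rule("evening lights", lambda s: s["hour"] >= 19, "lights on"),
    # Exception: movie night overrides the evening default...
    Rule("movie night", lambda s: s["hour"] >= 19 and s["tv_on"],
         "lights dim", priority=1),
    # ...and each new habit or edge case means another exception here.
]

print(decide({"hour": 21, "tv_on": True}, rules))   # -> lights dim
print(decide({"hour": 21, "tv_on": False}, rules))  # -> lights on
```

A few rules like this are manageable; hundreds of interacting exceptions, tuned over months of observation, are not something a mass-market product can ship.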
The second is emotions and moods. A gearbox can learn my driving style. But how does it know whether, right here and right now, I want a smooth, lazy drive or a sharp one? How does it know whether it should be learning my current style as lazy or as sharp? How does it know, without my explicit input, which style to choose?
Humans are emotional and give messy signals. This is the key challenge for any personal artificial intelligence system, and one that will take a long time to address properly. In the meantime, machine intelligence is getting better at being reactive. That is fair. I would not expect anything truly anticipatory from machines anytime soon.