Sensing Intents
I have circled around this subject many times, and it finally dawned on me what the difference between a dumb and a smart system is. The difference is understanding, processing and communicating INTENTS.
Dumb computers did what users instructed them to do; that era has passed. Smart computers do whatever it takes to match the intents of their users.
Everything is a computer today. A smartphone. A car. A house (I am working on the last one with my team at HomerSoft, which we still keep under wraps). And whenever we interact with these computers, the more they understand our intents, the more we like them.
Intents are loosely coupled. An intent is never a single press of a button, a command or a gesture. It is always a series of loosely federated events, combined with additional "environment" data. That makes intents difficult to identify correctly. It always has been, even without computers, just among people themselves.
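To make that concrete, here is a minimal sketch in Kotlin of what the raw material of intent recognition might look like. All the names and fields are hypothetical, invented for illustration; the point is only that an intent is inferred from many loosely coupled events plus context, never from a single event.

```kotlin
import java.time.Instant

// A single low-level observation: a button press, a motion event, a gesture.
// All names and fields here are hypothetical, for illustration only.
data class SensorEvent(val source: String, val kind: String, val at: Instant)

// Ambient "environment" data that gives the events their meaning.
data class EnvironmentSnapshot(
    val at: Instant,
    val occupantsUpstairs: Int,
    val ambientLightLux: Double
)

// A candidate intent is inferred from a series of loosely federated events
// plus context, and is never certain, hence the confidence score.
data class CandidateIntent(
    val name: String,
    val evidence: List<SensorEvent>,
    val context: EnvironmentSnapshot,
    val confidence: Double
)
```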
But I can see that working with intents will be our long-term focus in building the HC (House Computer). It is such a vast area to explore and cover. Today we have the widely used term Intelligent House. This usually means "I can turn on the light with my iPhone. Even remotely." Nice. But intelligent? Far from it...
Speaking of what I consider intelligent, I can give two simple examples.
Motion sensors, monitoring activity around the house. A single sensor can turn on lights, of course. Two sensors can report a direction: when I go upstairs, the lights in my bedroom turn on before the bedroom motion sensor actually senses me; when I go downstairs and there is nobody left upstairs, the bedroom lights are turned off without the usual "inactivity timer".

When a family lives in a house, there is a very repeatable pattern. The sensors "see" a number of people moving around, and then the activity slows down as we reach our beds and finally fall asleep. Our indirect intent is that the house should fall asleep too: any lights left behind are turned off, the lower-floor burglar alarm is automatically armed, the hot water circulation pump is turned off, and so on. The lights automatically switch to "night mode", meaning that if they are activated by motion sensors, they come up gradually, reaching, say, 60% over 5 seconds, when somebody wakes up in the night to go to the bathroom.

See, there is no need to "program" any specific schedule, like "night time starts at 11pm and ends at 6.30am". The house computer understands our very indirectly expressed intents. Of course, when I set the alarm clock for 4:40am, it should signal my explicit intent for the house to be ready earlier than usual for my wake-up. This means hot bath prepared, coffee already waiting, without any need to manually press the "coffee machine" button on my iPhone.
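A minimal Kotlin sketch of the two ideas above. The sensor IDs, the 10-second pairing window and the 20-minute quiet threshold are all assumptions I made up for the example; a real system would tune these and reason far more carefully.

```kotlin
import java.time.Duration
import java.time.Instant

enum class Direction { UP, DOWN }

data class MotionEvent(val sensorId: String, val at: Instant)

// Two sensors imply a direction: whichever fires first tells us where the
// person came from. Sensor IDs and the time window are invented here.
fun inferStairDirection(
    a: MotionEvent,
    b: MotionEvent,
    window: Duration = Duration.ofSeconds(10)
): Direction? {
    val (first, second) = listOf(a, b).sortedBy { it.at }
    if (Duration.between(first.at, second.at) > window) return null
    return when (first.sensorId to second.sensorId) {
        "stairs.bottom" to "stairs.top" -> Direction.UP
        "stairs.top" to "stairs.bottom" -> Direction.DOWN
        else -> null
    }
}

// A crude "the house has fallen asleep" heuristic: no motion anywhere for
// 20 minutes. The real thing would weigh per-room history, time of day,
// fuzzy confidence and so on; this is only the shape of the idea.
fun houseLooksAsleep(recentEvents: List<MotionEvent>, now: Instant): Boolean =
    recentEvents.none { Duration.between(it.at, now) < Duration.ofMinutes(20) }
```

When `inferStairDirection` returns `UP`, the bedroom lights can be switched on before the bedroom sensor ever fires; when `houseLooksAsleep` turns true, the night-mode actions described above can kick in.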
Car navigation. I am away (my car computer or my smartphone knows exactly where). I launch the navigation app and select the "guide me home" option. A very direct intent, which, by the way, is completely wasted today. From that intent alone, my house should start preparing for my arrival. Raise the temperature to a comfortable level. Turn the ventilation on, so I am greeted with refreshed air inside. It knows my estimated time of arrival, so it can light the driveway before the gate motion sensors spot me. It may also discreetly inform the family members that I am on the way. The car navigation destination is such a specific intent that many local and remote applications could build upon it. None do today.
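Since nothing like this exists today, the following Kotlin sketch is pure speculation: what the house side could do if the navigation app published its destination and ETA somewhere the House Computer could see it. Every name and action here is invented.

```kotlin
import java.time.Duration
import java.time.Instant

// The payload a navigation app could publish when I pick "guide me home".
data class NavigationIntent(val destination: String, val eta: Instant)

// The house-side handler. The device actions are placeholders for whatever
// the real House Computer would drive.
class ArrivalPreparation(
    private val setComfortTemperature: () -> Unit,
    private val startVentilation: () -> Unit,
    private val lightDriveway: () -> Unit,
    private val notifyFamily: (String) -> Unit
) {
    fun onNavigationIntent(intent: NavigationIntent, now: Instant = Instant.now()) {
        if (intent.destination != "home") return
        setComfortTemperature()   // heating takes the longest, start it first
        startVentilation()        // refreshed air by the time I arrive
        if (Duration.between(now, intent.eta) < Duration.ofMinutes(5)) {
            lightDriveway()       // light up just before the car appears
        }
        notifyFamily("On the way home, arriving around ${intent.eta}")
    }
}
```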
Intents are vastly underexploited today. There are many opportunities there. Federated sensors. Fuzzy-logic reasoning algorithms. A messaging bus. We are still at the very beginning of the road. There is no support for intents in iOS. Android has a simple Intents API, which is a good starting point and already a huge advantage over iOS, but it is still very, very basic. I expect a lot to happen in this area in the years to come. After all, we will be moving from living with dumb machines to living with intelligent machines. And their ability to sense our intents is the key.
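For reference, this is roughly what Android's Intents API looks like at its most basic: an implicit intent that says "show this location" and lets the system pick an app to handle it. That dispatch mechanism is real; everything beyond it, such as other applications observing and reasoning about the intent, is exactly what is missing. The address query is a placeholder.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

class GuideMeHome : Activity() {
    // An implicit Android intent: we describe what we want (view a geo
    // location) and the system resolves which app fulfills it.
    fun launchNavigation() {
        val home = Uri.parse("geo:0,0?q=my+home+address")
        startActivity(Intent(Intent.ACTION_VIEW, home))
    }
}
```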