AI Learning Curve

AI is all the buzz these days. We are stunned by AI software beating human champions at Go and chess, and we are promised self-driving cars and self-caring machines any day now. But one of the key problems with AI (and, actually, with any "I") is maturity. Intelligence must go through a learning curve to mature and become, well, intelligent. For humans this takes years. Years of a continuous feedback stream: both positive and negative.

And while chess-playing software can use all the power provided by GHz clocks and gigabytes of memory, the same approach is simply not possible in a self-driving scenario. Feedback in a game of chess is straightforward: you either win or you don't. So the software can try every possible strategy and improve on each iteration after winning (or losing) a game.
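The win/lose loop described above can be sketched in a few lines. This is a minimal illustration, not anything from a real chess engine: the strategy names and win probabilities are made up, and a single random draw stands in for an entire game. The point is that the agent's only feedback is the terminal win/loss signal, yet that alone is enough to improve over many iterations.

```python
# Sketch of learning from pure win/lose feedback (hypothetical toy example).

import random

STRATEGIES = ["aggressive", "defensive"]  # invented strategy names

def play_game(strategy):
    """Stand-in for a whole game: 'aggressive' wins 70% of the time,
    'defensive' only 40%. The agent never sees these odds."""
    win_prob = 0.7 if strategy == "aggressive" else 0.4
    return random.random() < win_prob

def train(games=5000, epsilon=0.1, seed=42):
    random.seed(seed)
    wins = {s: 0 for s in STRATEGIES}
    plays = {s: 0 for s in STRATEGIES}
    for _ in range(games):
        # Explore occasionally; otherwise exploit the best-known strategy.
        if random.random() < epsilon or not all(plays.values()):
            strategy = random.choice(STRATEGIES)
        else:
            strategy = max(STRATEGIES, key=lambda s: wins[s] / plays[s])
        # The ONLY feedback is whether the whole game was won or lost.
        won = play_game(strategy)
        plays[strategy] += 1
        wins[strategy] += won
    return max(STRATEGIES, key=lambda s: wins[s] / max(plays[s], 1))

print(train())  # converges on the strategy with the better win rate
```

Cheap, repeatable games make this loop viable: the agent can afford thousands of losses. That is exactly the luxury a self-driving car does not have.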

Self-driving cars could learn the same way if we simply let them all loose and allowed them to crash into each other.

But for practical reasons this sort of feedback is not possible. A car cannot crash, try again, crash again, and eventually figure out how to avoid crashing. A chess-playing AI is conscious of the entire situation and has all the inputs (the positions of the pieces on the board and the moves executed). A car stuck in a closed lane lacks most of the inputs. It does not understand what a closed lane is, or that it should have been trying to switch lanes well in advance, or, finally, what to do next, sitting in front of a barrier while the adjacent lane is full of bumper-to-bumper vehicles unwilling to stop and invite the orphaned AI in.

AI will no doubt revolutionize many product and service categories: mostly those where it can get very precise feedback, both negative and positive. But wherever feedback is less straightforward, or scarce, AI will struggle to mature.
