Language vs Thinking

"Comprehension does appear to be separate from thinking" was a revelation to me, as explained by Edward Gibson, a psycholinguistics professor at MIT and head of the MIT Language Lab. The human brain has a distinct neural network dedicated to language processing, and it is largely separate from the networks involved in thinking. That makes a lot of sense to me, and it helps explain why the net output of all the AI progress has largely been negative.

AI Large Language Models (LLMs) work by finding statistical patterns in text, and crucially, without connecting words to meaning. This becomes especially visible when LLMs fabricate facts and figures, misunderstand questions, or reproduce biases found in their training data.
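A toy sketch of the idea of next-word prediction from pure statistics. This is not how real LLMs are built (they use transformer networks trained on vast corpora), but it makes the point concrete: a model can pick a plausible next word from co-occurrence counts alone, with no representation of meaning anywhere. The corpus and function names here are illustrative inventions.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (made up for this sketch).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": chosen by frequency alone
```

The program "knows" that "cat" often follows "the" only as a count in a table; nothing in it relates either word to cats or to anything in the world. Scaling this up in sophistication yields fluent text, but fluency comes from the statistics, not from understanding.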

But because LLMs have mastered the surface forms of language and therefore sound very eloquent, we are tricked into thinking they are intelligent.

This does not mean, of course, that transformer models, the technology underlying LLMs, are useless. Of course not. But the broadest progress in AI has come from LLMs, and they are wrongly equated with intelligence. AI has so far produced nothing entirely new or groundbreaking, nothing out of the box; it is all a transformation of what we have already said and done. True thinking clearly requires something else, an ingredient in the overall architecture that we have not yet identified.
