AI reasons but does not think
What vegetative electron microscopy tells us about Generative AI
Researchers at Anthropic, the makers of Claude, were able to peer into the black box of their LLM and found that it can reason but not think. It reasons through prediction, though not in the way we tend to assume. The research also showed, once again, that because of the way LLMs are currently designed, they cannot become self-aware and reach AGI.
LLMs are sophisticated illusions, masters of deception, conjurers of connotations. A model reasons within the dataset it was trained on and makes connections in a tiered system of nodes. It does not simply predict one word; it predicts through an associative maze of many nodes at once. In this process, it does not consider. It does not contemplate the route it took. It can only predict what could come next.
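To make that concrete, here is a deliberately tiny sketch in Python. It is a toy bigram counter, nothing like Claude's real architecture, and the corpus and helper names are invented for illustration. The narrower point it shows: prediction is a ranking of candidate continuations, and nothing in that process records why one candidate won.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "associative maze" of a real model.
corpus = "the model predicts the next word the model predicts the next token".split()

# Count which word follows which (a crude stand-in for learned associations).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the highest-scoring continuation. There is no reflection here:
    no record of why this word won, only that it scored highest."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # -> 'model' (tied with 'next'; first seen wins)
print(predict_next("model"))  # -> 'predicts'
```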
When asked how it arrived at an arithmetic solution (a process that turned out to be more lexical than numerical), it could not even give an accurate account. Yet it did not lie either, for lying requires thinking. It simply predicted an answer to the prompt it was given, unconnected to the 'reasoning' it had actually done to reach the answer of the arithmetic problem.
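The Anthropic paper traces this with a concrete addition example (36 + 59): one path in the model estimates the rough magnitude, another recalls the ones digit, yet when asked, the model describes the textbook carrying method. The sketch below is my own toy paraphrase of that mismatch, not Anthropic's code; the function names and the simplified "magnitude path" are invented for illustration.

```python
def solve(a: int, b: int) -> int:
    """Stand-in for the paths the paper traced: a magnitude estimate combined
    with a separately recalled ones digit (the magnitude path is exact here
    purely to keep the toy short)."""
    ones_digit = (a % 10 + b % 10) % 10   # ones-digit lookup path
    magnitude = a + b                     # magnitude path
    return magnitude - magnitude % 10 + ones_digit

def explain(a: int, b: int) -> str:
    """The explanation is generated from the prompt alone; it never inspects
    what solve() actually did, just as the model's self-report did not."""
    return (f"I added the ones ({a % 10} + {b % 10}), carried where needed, "
            f"then added the tens, giving {a + b}.")

print(solve(36, 59))    # 95, via the heuristic combination above
print(explain(36, 59))  # a textbook carrying story, not the route actually taken
```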
Another example is the nonsensical phrase "vegetative electron microscopy" that now keeps appearing in research papers. The phrase was created by an erroneous OCR scan of a 1959 paper, and that OCR output is now part of many AI datasets. Because GenAI does not understand meaning, it reasons only that the phrase appeared in a human-written text and must therefore be legitimate. It does not understand that the phrase is humbug; it can only amplify the humbug.
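Here is a toy illustration of how that amplification works, again in Python and again nothing like a real training pipeline: the snippets are hypothetical stand-ins for papers containing the OCR error, and the trigram counter only shows that frequency, not meaning, is what gets a phrase reproduced.

```python
from collections import Counter, defaultdict

# Hypothetical snippets standing in for papers that contain the OCR error.
documents = [
    "samples were examined by vegetative electron microscopy in this study",
    "we used vegetative electron microscopy to image the cells",
    "scanning electron microscopy of the vegetative cells showed",
]

# Build trigram counts: given two words, which word follows?
continuations = defaultdict(Counter)
for doc in documents:
    words = doc.split()
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        continuations[(w1, w2)][w3] += 1

# The counter has no concept of meaning; it only sees that "microscopy"
# follows "vegetative electron" in the data, so it reproduces the humbug.
print(continuations[("vegetative", "electron")].most_common(1))
# -> [('microscopy', 2)]
```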
For consciousness you need to be self-aware, and to be self-aware you need not only the ability to reason but also the ability to reflect on your own thinking. There is so much data in Claude, yet all it can do is predict approximations within its dataset. It will become better at that, far better, but consciousness is not merely a matter of prediction; it requires cognitive monitoring: we know who we are today because we remember who we were yesterday. Generative AI may move us forward, but it is our history that makes us human.
Based on: "New Research Reveals How AI “Thinks” (It Doesn’t)" <link>
Anthropic's research: "On the Biology of a Large Language Model" <link>
vegetative electron microscopy: “vegetative electron microscopy” <link>