A whole lot of people I know are fascinated with these Large Language Models (LLMs) that we sometimes call Artificial Intelligence (AI). Unfortunately, the vast majority are seriously confused about what these things are.
People will describe an interaction they had with ChatGPT or similar and reflexively attribute human characteristics to it. They will wonder, “What was it thinking?” or “How could it come to that conclusion?” or similar.
We have to disabuse ourselves of notions like this. These LLMs aren’t thinking. They aren’t doing anything close to thinking. They don’t have “minds”. There’s no “there” there.
And it’s not even that they are “almost” there. LLMs are a million miles away from “thinking”. There aren’t any incremental improvements an engineer can make to turn a pickup truck into a pine tree. It’s a difference in kind, not degree.
One of the best ways to see this is to read the work of a buddy of mine. His name is Bernie and he calls his Substack The Twadpockle Report. I highly recommend it. He’s a really smart and interesting fellow who writes about a lot of smart and interesting things, including “AI”.
Bernie likes to toy with an LLM by asking it a series of questions that quickly demonstrate that whatever it’s doing “under the hood”, it surely isn’t anything like “thinking”. Lately he’s been doing this by playing 20 Questions with the thing.
The results are more than fascinating. They demonstrate exactly what LLMs are, and what they are not. We really need to adjust our intuitions about ChatGPT, Grok, and all the rest before we “put them in charge” of something serious and learn the hard way that they aren’t thinking.
Start here and click around after you inevitably get very, very interested.
I like the way Bernie thinks. And I really like the way he demonstrates that “AI” doesn’t.
Naturally,
Adam
Follow me on Twitter(X): @rerazer
Well, to be fair, there isn’t much thinking happening anywhere these days.
Followed you on TwiXer.
The "AI isn't really intelligent" point is one I've harped on, too. Did a little of it in this early Substack post based on a pod interview: https://goodneighborbadcitizen.substack.com/p/drunk-with-possibility-or-drunk-with
AI is supercharged hyperrationality. That's not intelligence. But many-to-most people can't tell the difference.