On "On Intelligence" and a speculation about AGI
Almost 20 years ago, I read an interesting book by Jeff Hawkins called On Intelligence. Hawkins is/was a layman. He helped create the PalmPilot and Treo, an early smartphone.
But he also studied a ton of neuroscience and he wrote this little book to try to explain how he thinks the brain works, and what implications this has for how we might create Artificial Intelligence (AI).
It was in this book that I first heard the (not unique) idea that the brain is basically a prediction machine that is able to create a map of the world (including itself, or its body) using the imperfect inputs that are human senses.
It (the brain) then spends the bulk of its time comparing memory and reality, making predictions, causing the body to act in the world, and then evaluating the new reality against the prediction.
That’s super-rough and fuzzy (and probably spectacularly wrong in parts), but it’s close enough for my purposes here.
I take from this understanding that the human brain (or at least the neocortex) evolved as a way for us to use this prediction/action/evaluation iterative process over and over and over as the mechanism by which we satisfy our needs, both basic and esoteric. And this mechanism got stronger the more we worked it.
I’m not suggesting that we fully understand the brain (let alone our minds), but I think a large part of what’s going on “in there” uses that process.
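To make that process concrete, here's a toy sketch of the loop (in Python, purely illustrative; the noisy "temperature" world, the running-average belief, and all the numbers are my placeholders, not anything from the book):

    # A cartoon of the loop: sense, compare prediction with reality,
    # update the internal map, act on the body. The "world" is just a
    # noisy temperature signal and the "model" is a running average.
    import random

    world_temp = 30.0      # the real state of the world (hidden from the agent)
    belief = 0.0           # the agent's internal model of that state
    body_temp = 20.0       # the part of the world the agent can change
    learning_rate = 0.2

    for step in range(100):
        sensed = world_temp + random.gauss(0, 1.0)   # imperfect senses
        error = sensed - belief                      # prediction vs. reality
        belief += learning_rate * error              # update the map
        body_temp += 0.1 * (belief - body_temp)      # act in the world

    print(f"belief about the world: {belief:.1f}, body state: {body_temp:.1f}")

Run it and the belief converges on the world and the body follows. A real brain is obviously doing something unimaginably richer, but the shape of the iteration is the point.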
Which makes me wonder: How does this knowledge relate to the possible creation of an Artificial General Intelligence (AGI)? I’m not talking about whatever these Large Language Models are — which we now call “AI”.
And I’m not talking about photograph/video recognizers or whatever wizardry lets cars drive down the road (mostly safely) with no human operating them.
I mean actual AGI — a sentient being that we would definitely recognize as such. How could we create one of those?
I have a half-baked hypothesis about that. My intuition is that the only way to get to AGI is to simulate how our own intelligence formed.
You’d have to take some massively complex neural net of the kind we already know how to build. Put it in a physical body. Attach its senses to that body, as well as senses that would allow it to model its environment. Place this thing in said environment.
Then seed the thing with something to motivate it. For humans it was all the stuff in the most ancient parts of our brains: hunger, thirst, temperature discomfort, dopamine, fear, lust, etc.
My main intuition is that to get to AGI, a complex neural net is going to have to program itself to achieve desires that (at least initially) we give it, using a body that we give it, in an environment in which we set it.
Part of this intuition of mine is that we don’t exactly know how to model what our brain does. Perhaps we never will. But neural nets are adaptive. Maybe we can set up the right conditions to start the “evolution” of an AGI?
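If that's hopelessly abstract, here's roughly the shape I mean, as a sketch (Python again; the drive names, the numbers, and the trivial "remember what relieved discomfort" rule are all made-up placeholders for the real neural net, body, and environment, not a recipe):

    import random

    # The seed we give it: discomfort signals, nothing else.
    drives = {"hunger": 0.5, "cold": 0.3, "fear": 0.1}
    actions = ["eat", "seek_warmth", "hide", "explore"]
    policy = {}   # stands in for the neural net that "programs itself"

    def discomfort(d):
        return sum(d.values())

    def act(action, d):
        # The body acting in the environment changes the internal drives.
        if action == "eat":
            d["hunger"] = max(0.0, d["hunger"] - 0.2)
        elif action == "seek_warmth":
            d["cold"] = max(0.0, d["cold"] - 0.2)
        elif action == "hide":
            d["fear"] = max(0.0, d["fear"] - 0.2)
        d["hunger"] = min(1.0, d["hunger"] + 0.05)   # drives creep back up

    for step in range(1000):
        state = tuple(round(v, 1) for v in drives.values())
        action = policy.get(state) or random.choice(actions)
        before = discomfort(drives)
        act(action, drives)
        if discomfort(drives) < before:
            policy[state] = action   # keep whatever relieved discomfort

    print(f"final discomfort: {discomfort(drives):.2f}, learned rules: {len(policy)}")

Nobody tells it how to satisfy the drives; it stumbles into behaviors and keeps the ones that work. Scale that idea up by many orders of magnitude, swap the lookup table for a net and the toy drives for a real body's, and that's the "evolution" I'm hand-waving at.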
Of course, if I’m right and we follow through, this means the AGI will have desires, urges, goals, etc. That should make us freaking terrified. What happens when its desires are at odds with our own?
But of course, I could be wrong. As I said at the start, I don’t read up on this stuff anywhere near enough to have a good opinion, so I’m left with the opinion I have:
We evolved to have intelligence because we needed to. I think if we are ever to see AGI, we’ll have to “create” it the same way.
And I have very mixed feelings about the whole endeavor.
Naturally,
Adam