Today I am recommending a great interview. Lex Fridman had neuroscientist David Eagleman on his show to discuss our amazing brains and artificial intelligence and morality and a bunch of other fascinating things.
I have referenced Eagleman in a previous article, where I talked about brain plasticity. He is an amazing scientist, educator, and inventor who is on the cutting edge of a number of fascinating advancements.
One of the reasons I like him is because (shocker!) he agrees with me about what is required if we are ever to create artificial general intelligence (AGI). What we currently call "AI", meaning these large language models (LLMs), is not on the right track toward creating AGI, in my opinion.
I think (and Eagleman appears to agree) that if AGI is possible, creating it will require taking our fanciest neural net thingy (ain't I scientific?) and imbuing it with some kind of basic goals or desires: things similar to our own base needs for food, water, air, comfortable temperature, sex, companionship, etc.
Then we will need to take this neural net and embody it, then place that body in some external environment and just… let it go. If the neural net can figure out how to wire itself up in order to satisfy its "urges" in a challenging environment, then it will have "bootstrapped" itself into being something worthy of being called an artificial general intelligence.
We can then test our "creation" by setting it in different environments with different challenges and seeing if its "intelligence" is truly "general". Maybe it'll even learn to communicate with us in a real way. Not this LLM parlor trick where the machine guesses what word "should" follow the last word, but an actual "meeting of minds".
(For more detail about why LLMs aren’t “thinking”, see here and here and here.)
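If you want a feel for what I mean by "basic urges" driving an embodied agent, here is a toy sketch in Python. It is purely illustrative, not anything Eagleman or I have actually built; every name and number in it is made up, and a real attempt would swap the hand-coded behavior for a neural net that learns from experience.

```python
import random

# Purely illustrative toy: an "agent" with two internal drives (hunger and
# cold) dropped into a tiny one-dimensional world. Its only "goal" is to
# keep those drives low. Every name and number here is made up.

class TinyWorld:
    """A strip of cells; a few cells have food, a few are warm."""
    def __init__(self, size=10):
        self.size = size
        self.food = {random.randrange(size) for _ in range(3)}
        self.warm = {random.randrange(size) for _ in range(3)}

    def sense(self, pos):
        return pos in self.food, pos in self.warm


class DrivenAgent:
    """An agent that only 'wants' to reduce its hunger and cold."""
    def __init__(self):
        self.pos = 0
        self.hunger = 1.0
        self.cold = 1.0

    def discomfort(self):
        return self.hunger + self.cold  # lower means the urges are satisfied

    def step(self, world):
        # Hand-coded stand-in for learning: peek at the neighboring cells and
        # move to whichever one best relieves the current drives. A serious
        # version would replace this with a neural net trained by experience.
        best_pos, best_score = self.pos, float("inf")
        for delta in (-1, 0, 1):
            p = (self.pos + delta) % world.size
            has_food, is_warm = world.sense(p)
            score = (self.hunger - 0.5 * has_food) + (self.cold - 0.5 * is_warm)
            if score < best_score:
                best_pos, best_score = p, score
        self.pos = best_pos

        # Eating and warming up relieve the drives; time makes them creep back.
        has_food, is_warm = world.sense(self.pos)
        self.hunger = max(0.0, self.hunger - 0.5 * has_food) + 0.05
        self.cold = max(0.0, self.cold - 0.5 * is_warm) + 0.05


if __name__ == "__main__":
    world, agent = TinyWorld(), DrivenAgent()
    for t in range(15):
        agent.step(world)
        print(f"step {t:2d}: position {agent.pos}, discomfort {agent.discomfort():.2f}")
```

The point of the toy is only the shape of the loop: internal urges, a body, an environment, and feedback. Whether a real neural net could wire itself up inside that loop is exactly the open question.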
Eagleman also gave me a white pill in this interview. Many of us have watched young people (and far too many people our own age) in some setting or other, just staring at their phones or iPads in a seemingly mindless, hypnotized stupor.
It looks zombie-like. It’s creepy. I’ve wondered if we are getting hopelessly dumber as a consequence.
But Eagleman says it’s not necessarily so. People my age sat for 12 years (or more) in brick-and-mortar schoolrooms being taught to memorize tons of information in a scattershot fashion.
Kind of like, “Here’s some history, here’s some art, here’s some math. Here’s some reading. We don’t know what you’ll need as an adult, so we are just going to throw a bunch of stuff at you and hope that some of it helps.”
But today’s “learners” have the whole of the Internet at their fingertips. And sure, many (too many) of them use that powerful tool to endlessly scroll TikTok or Instagram, but not all of them, and not all the time.
Often, these people follow Wikipedia links and YouTube links and recommendations to pull threads and dive down rabbit holes on subjects that they are really interested in.
And it turns out, from a neurological standpoint, that interest is crucial. Do you remember being bored in school when the teacher was droning on and on about some damned thing you just didn’t care about at all? Do you remember any of that?
Of course not.
But the things you were exposed to that interested you? Those stick. Those inspire. That's the kind of learning that lays the groundwork for a lifetime of learning. And that's exactly the kind of learning that young people (admittedly, not all of them) are getting now.
So, despair not, my friends. The next generation(s) aren’t necessarily going to be useless after all.
And even if they are, our upcoming robot overlords are surely going to be kind to us, right?
Naturally,
Adam
PS: Wanna learn lots of cool and interesting stuff for your own self? Check out Liberty Classroom!