AI Is Clever, Not Wise
AI is clever, not wise.
What do I mean by clever vs. wise? In his seminal work, "Small Is Beautiful: Economics as if People Mattered," the German-British economist E.F. Schumacher uses the term "cleverness" to refer to the technical and analytical capabilities that have enabled humanity to achieve remarkable feats, such as landing on the moon, creating powerful computers, and developing sophisticated technologies. Cleverness, in his view, is about mastery over the material world, efficiency, and the pursuit of economic growth, often at the expense of environmental health and social well-being.
On the other hand, "wisdom" for Schumacher encompasses a deeper understanding and respect for the natural world, the recognition of the limitations of human knowledge and technology, and the importance of moral and ethical considerations in guiding human actions. Wisdom, to Schumacher, implies a commitment to sustainability, where economic activities are conducted in harmony with the environment and society's long-term interests. It involves making decisions that are not only technically feasible but also ecologically viable and socially just.
As LLMs grow ever more capable, as seen in Anthropic’s recent Claude 3 release topping nearly every performance benchmark, it’s starting to feel like AI is exactly the technological outcome Schumacher warned of: what happens when a culture focuses almost entirely on cleverness and growth rather than wisdom and sustainability.
I see Schumacher’s ethical-economic framework applying to AI in two respects. The first concerns the current state of the technology, which is more tongue-in-cheek and likely won’t hold forever. The second concerns the trajectory of the technology and the consequences of the pace and approach with which we’re developing it, which is less tongue-in-cheek and more existential.
Starting with the current state of the technology, I’d claim nothing AI currently outputs is particularly wise by definition. It can solve known, complex math problems, find specific data across massive text documents, and even write decent prose, but nothing it has produced to date is truly novel. LLMs are derivative machines; they use vast amounts of training data and reinforcement learning to predict the most likely next token in a sequence. There is no deeper understanding happening within the machine; there are no moral or ethical considerations behind its output. That’s why some experts refer to LLMs as stochastic parrots: they generate plausible outputs without understanding the meaning of the language they process and produce.
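To make “predicting the next token” concrete, here is a minimal sketch using the open GPT-2 weights via Hugging Face’s transformers library (any decoder-only language model would illustrate the same point): the model assigns a score to every token in its vocabulary and simply emits the highest-scoring one, with no notion of truth, ethics, or intent behind the choice.

```python
# Minimal sketch of next-token prediction, assuming the `transformers` library
# and the publicly available GPT-2 checkpoint; the specific model is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Small is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every token in the vocabulary

next_token_id = int(logits[0, -1].argmax())  # greedily pick the single most probable next token
print(tokenizer.decode(next_token_id))       # a plausible continuation, chosen purely by statistics
```

The model never “decides” anything in a deliberative sense; it repeats this scoring step token after token, which is exactly why the output can be fluent without being wise.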
If a human behaved this way, we might call them clever, but we certainly wouldn’t call them wise. It’s the same reason we call children clever but not wise. When a young child does something particularly intelligent, we often recognize that they are mimicking the adults around them, not connecting disparate threads of thought into a novel application.
As I said earlier, this claim about the current state of the technology may not hold forever, which leads me to the second application of Schumacher’s ethical-economic framework to AI: the trajectory, pace, and approach with which we are developing it, which is to say, seemingly as fast as possible. So much so that many leaders in the space are constantly expressing fears about the technology, even about what’s been released so far. Take Sam Altman’s comments on his AI fears: "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT. That maybe there was something hard and complicated in there (the system) that we didn't understand and have now already kicked it off."
Maybe that fear will prove unfounded, or maybe not. The real issue is our current inability to know the difference, and the absence of any pause or reflection as teams race to push the edge of AI further each day.
There’s no rewinding and putting LLMs back in the proverbial box, and I wouldn’t argue for that course of action; there are many beneficial, equitable applications of this technology. But I believe E.F. Schumacher would press us to go slow here, to tread lightly, and to make sustainable progress that is not only technically feasible but also ecologically viable and socially just.
Time will tell if our leading AI technologists will listen, and whether they will be wise or, like their AI creations, simply clever.