Intelligence in the Age of Mechanical Reproduction by Charles Eisenstein

There is something deeply unsettling about these images. They evoke the warnings of critics of modernity, who feared that industry’s standardization of parts and processes would induce the same in human beings: standard roles, standard beliefs, standard desires, standard ways of life. Does an analogous fate await our minds as more and more of what we read, hear, watch, and think draws on AI-generated content?

The original alignment problem

AI developers can counter the degradation of generative AI by continually introducing new, human-generated content into the training data, a strategy with provocative implications for the future of intelligence, human and beyond. It is not only artificial intelligence whose output gets more homogeneous and more delusional as it gets self-absorbed in manufactured information. The same happens to any human society to the extent that it shuts out information from the real world—from the body, from the senses, from the heart, from the beings of nature, from dissidents, from its exploited and oppressed, and especially from those it locks away, locks up, and locks out. As with AI, orthodoxies filter out and distort the very information that would overthrow them, and the society loses its mooring in reality.

In that sense, AI does not pose a new threat, just the rapid intensification of an age-old collective insanity.

Indigenous cultures too have faced the challenge of how to manage the destructive and generative power of word, symbol, and story, how to stay connected to a truth beyond all those things. Otherwise, catastrophe could overtake society: blood feuds, internecine warfare, black magic, ecological degradation and collapse, plagues, invasions, natural disasters. (Of course, modern mythology says the latter have nothing to do with the abuse of the power of word, but most ancient and indigenous cultures have believed otherwise.) Disaster ensues when we become detached from the reality beneath our symbols.

What happens to AI and society also happens to the individual. To me, anyway. I go crazy when too much of my experience is digital. Words shed their nuances; I start using “great,” “amazing,” “awesome,” “wonderful,” etc. interchangeably. Important, essential, crucial. Narratives and counternarratives become indistinguishable in my body as they all draw from exactly the same experience—the experience of sitting in front of a computer. Each has, to support it, only ephemera, only words, images, and sounds emanating from a box. Relying only on the internet, one can justify any belief, however outlandish. It is not only AI that “hallucinates.”

I am writing this from Taiwan. Yesterday we climbed one of Yang Ming Mountain’s foothills, which unlike most hills on this fecund island shows a bald head instead of the usual coiffure of jungle. I thought it might be rude to ascend to the very top of what is surely some kind of sacred site, so I leaned against the rock face to ask permission. The way I do this, I don’t formulate the request in words. I tune into sensations. The sensation was powerful. I could feel the connection of this outcropping of bedrock to the entire island, a profound consciousness greater than that of any boulder. I invited my son Cary (who is 11) to lean against the rock also, and I asked him what he felt. Without any other prompting, he described the same thing. I knew that it was OK to ascend the remaining 20 feet; that this spot is benignant in nature, forgiving, indulgent. Hundreds of people tramp on it every weekend, of no more consequence to it than ants. But to those who communicate with it, it delivers information, a blessing. It would be a good pilgrimage spot for anyone aspiring to achieve something on the scale of the whole island and maybe beyond.

Is that intention compatible with conquering the peak? I chose not to ascend.

What is the “peak” that humanity is attempting to conquer? What blessings are available if we apply a different listening and align with other goals?

For me, this kind of experience is analogous to introducing new human-generated data into the AI training set. I’m not relying on abstractions and symbols alone, spinning webs of words only from the strands of previous webs of words, going slowly insane. Please, whoever is listening, let me not forget the need to sometimes touch the bedrock. That is how I keep from going mad. That is how I stave off dementia.

AI amplifies the intellectual capacities of its creator, the human collective. In fact, the “A” should probably stand for “amplified,” not “artificial.” AI certainly does amplify our intelligence, but it also amplifies our stupidity, our insanity, our disconnection, and the consequences of our errors. We must understand it in this way if we are to use it well. The need to reconnect abstract, intellectual intelligence with its ultimate source becomes more obvious with each innovation in information technology, going back through computation, film, printing, art, the written word, all the way to the origin of symbolic culture—the naming of the world.

These innovations are fundamental to what it is to be human. We are the animal that, for better or for worse, for better and for worse, tells stories about ourselves to ourselves. What an enormous power it is, the power of word, the power of symbol, the power of story. And what terrifying consequences result from its misuse. 

Only by understanding the generality of the use and abuse of the power of word can we approach a solution to the problem of how to align AI with human well-being given its potential to automate the opposite, whether as a tool of totalitarians and madmen or as an autonomous agent itself.

It is not a mere technical problem. It is the latest iteration of the original alignment problem of symbolic culture that every society has grappled with. AI merely brings to it a new level of urgency.