AI, Governance, and Our “Utopian” Future
Talking about artificial intelligence (AI) and governance at this particular moment is a daunting task. The capacities of AI are changing rapidly. While it is very likely we are in the midst of a hype cycle with respect to how reliable and useful AI can be today, the current large language models still have some impressive capacities. Everyone seems to expect more and better to come rapidly, either on the basis of those models or some other paradigm of AI. Massive sums of money are being expended and massive amounts of energy consumed on the as-yet uncertain but not so implausible proposition that “artificial general intelligence” (AGI)—a level of AI that will match and then exceed human capacities of speech and reasoning—is just around the corner.
At the same time, it seems we are developing what is, for the United States, a new model of governance under the direction of President Donald Trump, along with Elon Musk and his “Department of Government Efficiency.” How to characterize this model is itself one of the controversies of the moment. Is it populism, a long overdue revolt against self-regarding and corrupt elites? Is it authoritarianism, driven by the ambitions and resentments of the President and his minions? Is it some kind of technocracy reflecting the rise of “neo-feudal” tech-lords (tech-bros?)?
The net result of these two very dynamic situations is, I think, that anyone who speaks with great certainty about what the future holds is probably overconfident. But rapid change usually does not just happen; it requires time to build up momentum. Trying to understand how we got here might help us step back from the passions and assumptions of the moment. I will offer in this essay no comprehensive effort to explain what has led up to our current situation but will try to supply some of the guideposts that seem to me particularly important in understanding the technological aspirations that are in play today with respect to governance and AI.
In brief, what we are now seeing is the messy practical expression of a certain ideology of progress that, in one form or another, has been around for a very long time, an ideology maintaining that technological development means humanity can and will eventually overcome the need for both work and governance. These hitherto perennial aspects of human life are the products of scarcity, and when scarcity is overcome, they will “wither away.” The chaotic consequences of implementing this ideology under present circumstances are not merely the result of contingent historical factors, but reflect some deep misunderstandings about human life and governance that are inherent in this vision of progress itself.
As far as AI is concerned, we have in the West over 2,000 years of mythic storytelling, philosophical speculation, and practical efforts with respect to creating life-like, human-like beings for a variety of useful, romantic, or military purposes, or indeed simply for the sake of exhibiting prowess: think Aristotle on self-moving tripods, Pygmalion, Talos, golems, “The Sorcerer’s Apprentice,” Frankenstein. This is not perhaps the thickest vein of thought and speculation in the history of Western ideas, but it is sufficiently widespread to justify the suggestion that we have in this aspiration the cultural expression of some deep human wish or desire. That male and female human beings join in the creation of new life is necessary for our perpetuation—but that creation is subject to all the uncertainties of what we now call the “genetic lottery,” not to speak of all the natural shocks that flesh is heir to. People have been thinking for a long time about how “homo faber,” man the maker, aspires to the creation of beings that satisfy our desires more perfectly and reliably.
For most of human history, such creations could only be imagined—and the stories we told about them were overwhelmingly cautionary. That remains true today, even as we are beginning to achieve some of those old dreams. Most fictional treatments of artificial intelligence and robotics range between the cautionary (e.g., Asimov’s The Robots of Dawn, 2001: A Space Odyssey, or Ex Machina) and the dystopian (the various instantiations of Westworld or Blade Runner). The powerful impulse to create something like humankind beyond what nature provides is matched by a powerful sense that doing so is not likely to be a good idea.
Nevertheless, in the post-war period, as increasingly powerful and sophisticated electronic computers developed, the project to simulate human intelligence became ever more of a practical possibility and ever more successful. One telling measure of AI’s success is the growing list of things a computer can do that informed opinion once asserted it could never do. Another is the often-remarked tendency for the bar for what counts as “genuine” artificial intelligence to keep rising.
Yet another marker of AI researchers’ success is that even current large language models readily pass something like the test Alan Turing proposed for how to operationalize what would count as an intelligent machine. That means that in both experimental and informal settings people regularly misjudge whether they are communicating with an AI or with a human being. (Unlike some today, Turing was wise enough not to confuse an intelligent machine with a conscious machine. Unlike others today, he was not wise enough to see the problems with his essentially behavioral understanding of intelligence.)
In short, AGI once could only be a dream, but now it seems increasingly likely to become a reality, perhaps sooner rather than later. People are so anxious to make this happen that even the present highly limited and notoriously unreliable AI models are being implemented in real-world situations with extraordinary, even reckless, rapidity. It is already a commonplace, and not obviously untrue, that the landscape of human effort, work, and creativity is changing in a host of fundamental ways that will only accelerate until much of the effort that occupies our work life today will be redundant or second best.
It is likewise very common to hear that these developments are in some crucial way “necessary.” That supposed necessity is often linked to national security: if America doesn’t have advanced AI in its military and “They” do, some argue, we will lose in a confrontation. AI development is similarly linked to having the edge in global commercial competition, and certainly AI developers are increasingly acting as if this were the case in their competition with one another. Other people, like Google director of engineering Ray Kurzweil, attribute the necessity to a long-term developmental dynamic within technological evolution or, indeed, the evolution of intelligence itself. To do critical justice to any of these claims is beyond the scope of this essay. I will simply note: were it not for such arguments from necessity, it would be obvious that we should be wondering why anyone thinks it is a good idea to make so much of human effort redundant or second best. As it turns out, there is a kind of answer to this question, and bringing it forward will lead us to the challenges AI poses for governance.
One vision of human progress, in some ways going back to Francis Bacon, defines it as the ever-increasing easing of the burdens of human life through technological development and economic arrangements. Prominent among those burdens is the need to work for a living. That work is deeply bound up with what it means to lead a full human life is an old idea, but so likewise are the notions that work is a curse, or that leisure is crucial to the best life. In its most utopian form, progress equates with finding our way out of “the realm of necessity” in which nature has placed us, a realm of scarcity, competition, and violence. Take, for instance, Karl Marx’s idyllic vision that under communism humans would at last be able to “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”
Likewise, overcoming scarcity would mean overcoming the necessity for politics, which makes sense if politics is, as it is sometimes defined, “the authoritative allocation of scarce resources.” Political leaders from President Franklin Roosevelt to Vladimir Lenin may have had different approaches, but their fundamental projects all revolved around turning politics into the “administration of things.” Ideologies across the spectrum promised that the more we can overcome the limits of nature that have hitherto defined human history, the more people will be able to lead happy lives, fulfilling whatever goals they wish to set for themselves. Well-worn Marx again: “The free development of each is the condition for the free development of all.” But Adam Smith is not so far off the same mark with his picture of “universal opulence.”
Once advocates of such views, to be found largely on the political Left, thought this new world would arise from the victory of technocracy (perhaps in the wake of a devastating world war) or a class revolution that itself required first the extraordinary capitalist productivity of global industrialization and the routinization of all forms of work (including the bureaucratization of governance). Today, believers in this future are more likely to be libertarian tech-wizards who, not dismissing the significance of industrialization and routinization, think that AI rather than a proletarian revolution will usher it in. Once we turn our affairs over to superintelligent AI, the need for work or human governance will disappear. Hence, for example, the technorati’s interest in experimenting with guaranteed income programs, preparing for the increasing number of people who “will not have to” work. (Note that this outlook implicitly rejects the older view of creative destruction: “progress destroys old jobs, but it creates new and better jobs!” Time will tell if that rejection is justified or not.)
According to this vision of the future, then, at the same time that AI is making a great many human jobs redundant, it will be making governance obsolete. There are more than hints that the reason Elon Musk and his boy wonders are so enthused about dismantling so much of the federal government is that they genuinely think the time has come to replace workers in a great many executive branch agencies with AI. Or again, in the national security field, it is becoming a commonplace that having a human being “in the loop” when robotic soldiers have the opportunity for lethal action is untenable against an enemy that is willing to give its autonomous fighting machines a completely free hand.
Even fervent advocates of this vision, however, admit there is a fly in the ointment. It is called the problem of “alignment.” How will we be certain that AI has our best interests in mind, that it will do the right thing? Don’t we need to pay attention to the long history of cautionary tales that suggest how our creations will turn on us? What if, instead of leading us through the last steps out of the realm of necessity, AI hoards all the energy resources for itself? What if, once we have given over the means of production to AI, it sides, in its superhuman wisdom, with those humans who have already suggested that humanity is a cancer or a kind of virus on Earth?
And, in fact, we have enough experience with these large language models to know that they go off the rails with remarkable ease. Train an AI to write deceptive code, and it also turns into a Nazi. Train an AI on the unfiltered Internet, and it becomes racist. Train an AI not to be racist, and it produces pictures of black Vikings. How do we prevent AI from helping bad people, or being bad itself? Alignment is not easy. What to do?
After acknowledging that AI can be used for good or ill, columnist Thomas Friedman offers that to meet the challenge we will need to develop “‘complex adaptive coalitions’—where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I.” This suggestion, it seems to me, is a quiet counsel of despair, for such a meeting is never going to take place, and if it took place it would not achieve what Friedman hopes for it.
For if you got such a “coalition” together you would quickly be reminded that perhaps the major reason alignment is hard is that people disagree about what is right and wrong, just and unjust, good and bad. They disagree sometimes a little and sometimes a lot. This point should not be shocking. In a diverse society such as ours and a diverse world such as ours, there are many different ethical horizons with which AI might align. Ask moral philosophers to answer the question, and they will discourse at length on these various paradigms and why they prefer one over the other. Complicating the picture in the West is that we swim in the waters of moral relativism, so in principle, we have a hard time saying why one alignment is rationally better than another, even if we have our individual irrational preferences.
There surely have been times and places where how to align AI in a given society would have been relatively obvious, though practically speaking that probably would have meant aligning it with the values of the ruling class, which would hardly align with our own. In a culture that has as many fault lines as ours currently does, yet remains nominally a democratic republic, who is to be favored in any decisions about alignment? Should it shift with the winners of the last election (national, state, local)? Or maybe alignment should not be a public decision at all, but should be left to each AI company’s own choices about how to align, or not align, its product. If it turns out that an AI trained on the Godfather movies can be sold to organized crime, then one trained on Dragnet might help the police.
When we worry about alignment, we are, it seems to me, being (appropriately) skeptical about the premise of the utopian vision behind AI, which, recall, is that the end of scarcity will be the end of the scarcity-based conflicts that call for governance. “Men do not become tyrants in order that they may not suffer cold,” says Aristotle, which suggests that there may be people who have never suffered any lack and yet still desire unchecked rule over others. Furthermore, with respect to “values,” we all want to be left alone to have it our own way; “nobody can tell me what to believe!” This is not some technical problem, unless we eventually identify a tyrant gene or find a way to regulate brains for happy thoughts. AI is not going to “solve” it by some clever device that humans have not yet thought of, or by some startlingly new moral insight that will unite all humanity as brothers and sisters.
Alignment is, in effect, an old problem raised again; it has never been the case that we can take it for granted that those governing have the best interests of the governed in mind. And that is in large measure because, for as long as we have had recorded history anyway, people have disagreed about how we should lead our lives. And while there are circumstances where scarcity may accentuate those disagreements, if Aristotle is right (or Publius in Federalist #10, for that matter), it does not always cause them.
We are misled by a conception of the purpose of governance as “problem solving,” and by the promise that if we solve enough problems, the need for governance will fade away. So if AGI can solve all of our difficult problems, great! I have suggested how the latter proposition is dubious, but so is the former. Bertrand de Jouvenel argued cogently that political problems are not like math problems, where the key is knowing some technique or algorithm in order to come up with a correct answer, often a single one. Political problems exist because people do not agree on what the supposed problem is in the first place, or whether indeed there is a problem at all. And if there is agreement on a problem, it is likely that the solution will likewise be in dispute if that problem rises to the level of political discussion and action.
Jouvenel suggested instead that we should think of politics as about finding settlements, not solving problems. A settlement implies that the goal is something good enough, under a given set of circumstances, to satisfy parties who understand from the start that they are unlikely to get all they want and who are likewise aware that they may yet, depending on how the settlement changes the facts on the ground, be able to reopen the issue at some future date when the balance of interests and forces has perhaps shifted in their favor. So unlike a solution, a settlement is incomplete or partial and open-ended. There is no such thing as a final settlement.
Reaching a settlement is likely to require compromise, emotional intelligence, empathy, “reading” people, or (less appealingly) cunning and misdirection. I am not going to go so far as to claim that AIs could never produce outputs that have the qualities required to reach settlements, and to convince others of their merits—indeed, emotionally responsive AI is already a goal. (The sometimes tragic incidents of strong attachment between people and AIs suggest that AIs can already seem to exhibit emotional intelligence, although those who turn to them seeking such attachment in the first place may not be the best judges of that quality.) Whatever the future might hold, at present the fit between governance and AI seems closest to those who share this misunderstanding of what the activity of politics is really all about, and who therefore misunderstand the types of intelligence that are appropriate to it.
The utopian hopes for a post-scarcity world without work or governance, and the matching assumption that politics is about finding solutions to problems, are longstanding ideals that have opened the door for some to think that making way for AI is how we will reach a world without politics and work. If I am right about that, the challenge we face goes beyond the personalities of the moment or electoral politics.
That is not good news. To suggest that countering this utopianism requires people educated to understand the assumptions and requirements of republican self-governance, and thereby to gain greater insight into what politics is about and what constitutes a good human life, may be true, but it seems almost as bootless as saying we need “complex adaptive coalitions.” It has been said for decades now, for any number of reasons, and yet here we are. Still, it is not quite so bootless: there is a growing number of college-level programs dedicated to this kind of serious civic and humanistic education, along with a growing number of charter school programs and home-schooling curricula. We are far from a tipping point, I suspect, but ever more seeds are being planted for some kind of renewal.
If that is the not-good news, there is also not-bad news. If I am correct and there is something utopian about the promises being connected to AI, then sooner or later reality will intervene. We can hope that any reminder that the fundamental things really do apply as time goes by will be like a gentle tap on the shoulder, rather than the descent of a screaming eagle. But one way or another, expectations will return to earth, and people will start to notice, for example, the unseemly glee with which reducing the number of jobs for humans is being greeted in some quarters. Maybe we will think instead about the ways in which AI might help reduce the danger of some work, or how it might provide better training and support, or increase productivity without increasing tedium or surveillance. Are there ways of using AI that will increase human skills and capacities, rather than substitute for them (and perhaps cause them to atrophy)? Or perhaps the hype cycle will end, and we will move on to the next shiny object. All in all, a confrontation with reality might make us seriously ask the question: if utopia is not in the cards, why would we want a world where human work and effort are subordinated to or made redundant by AI?