ChatGPT and the Promise of Authenticity
Much ink has been spilled in service of worrying, sometimes in apocalyptic tones, about the implications of rapidly developing AI technologies like ChatGPT.
But could the rise of such technologies herald a renewal of what German philosopher Martin Heidegger called “thinking,” as opposed to the algorithmic calculations characteristic of so much mental activity today, and of “authentic discourse,” as opposed to the algorithmic circulation of talking points characteristic of the “idle talk” so pervasive today?
Heidegger himself would have thought so—that is, not that the rise of this tech does herald such a renewal, but that it could. In his justly famous 1954 essay “The Question Concerning Technology,” in which he identifies the prevailing “essence of modern technology” as “the supreme danger” to humanity, Heidegger also invokes the German poet Hölderlin, declaring “where danger is, there also grows the saving power.”
Heidegger did not mean the platitude that while technology may produce many problems, it can also produce solutions to those problems. The tired debate over whether the past and ongoing technological devastation of the earth can be remediated through future technologies, for example, is not what he had in mind. Rather, his thought was that the technological mode of being that he named “enframing”—under the sway of which all beings, including humans themselves, are disclosed always and only as instrumentally networked “resources” placed on “reserve” for future manipulation—endangers humanity in its existential essence.
For Heidegger, the human being is most essentially “Dasein,” the entity for which being is an issue, a normative issue that it has both the freedom and the responsibility to resolve for itself in an ongoing fashion. Animals do not have to decide what they should be or how they should be. Their being is not an issue for them; they simply follow instinct. A cat will always be a cat, will always prefer to eat meat, etc. But humans can and must ongoingly choose both what they will be—carnivore or vegan, doctor or lawyer, husband or bachelor, etc.—and what it means to be a good one of those.
Enframing suppresses this existential freedom and responsibility by predetermining that “to be” essentially means to have a position in the instrumental “frameworks” constitutive of modern technological society. It levels everything down to the same kind of being, which Heidegger called Bestand (“standing-reserve,” “stock,” or “resource”)—including human beings, as captured in the ominous phrases “human resources” and “human capital.” Moreover, as the “essence of modern technology,” enframing threatens to become so culturally and spiritually dominant as to foreclose alternative possibilities for being-in-the-world and for disclosing other beings. It even threatens to erase the memory of these alternatives and their foreclosure. It may turn out that we not only lose the possibility to relate to other beings “poetically” rather than instrumentally, for example, but also that we forget this possibility and its loss. But simultaneously, it is precisely the totalitarian prevailing of enframing that might provoke humans to awaken anew to their existential freedom over and responsibility for being.
Much ink has also been spilled over this essay and the meaning of Heidegger’s cryptic invocation of Hölderlin. Much rarer are concrete accounts of what this “saving power” might look like in practice. Most common are recommendations for a “poetic” approach to everyday life, leaning on Heidegger’s opposition between modern enframing and ancient Greek “poiesis” and his suggestion that we can “foster the saving power … [h]ere and now and in little things.” That is all well and good, but it remains quite abstract. Confronted with technologies like ChatGPT, can we get more concrete?
I foresee at least two possible ways in which the rise of ChatGPT, which I will henceforth use as a stand-in for all such technologies, may promise a renewal of authentic human existence. I leave it to the reader to judge how probable such a renewal is. The first lies in the “outsourcing” of “idle talk,” and the second concerns discourse that cannot be “outsourced.”
Before I discuss these, it is important to note that Heidegger emphasized that his categories of authenticity and inauthenticity were not meant in a “value judgment” or moralistic sense. For Heidegger, inauthenticity was the default, average mode of everyday being-in-the-world, in which—for better or worse—I think, speak, and act as “one” (das Man) does, and moreover, I do so because that’s how “one” speaks and acts. For example, I might automatically overbill my clients because “that’s just what one does” as a lawyer. Heidegger calls this “choosing not to choose” because I’m taking the issue of how I ought to be, as such a “one,” as something already settled for me by others, rather than an open issue I must ongoingly resolve for myself.
For Heidegger, authenticity is merely a “modification” of everyday inauthenticity, a modified way of “taking over” extant factual norms as personally motivating reasons. The authentic decision is not when I reject conformity with extant social norms in favor of “looking within” myself for some “deep” or “natural” self to serve as a private normative standard to which I should “be true.” Rather, it is when I self-transparently choose, from among relevant socially available roles and norms, which ones will be normative for my being. This means choosing who and how to be from the socially available roles and norms in light of the fact that, while I am not responsible for the existence of these public norms, I am responsible for their normativity for me.
Authenticity means acting in awareness that no fact or factual situation—including facts about social roles and norms—can be a motivating reason for me without the first-person mediation of my free will. Inauthenticity lies in thinking, speaking, or acting as though I could be absolved of this freedom and responsibility, as though social norms could causally determine rather than merely motivate my will. The difference is between following a norm “automatically,” simply because it is a norm that factually applies to my identity and situation—as though the norm were a law of nature and I a physical object whose behavior is entirely determined by the laws of nature plus causal context—and following the norm because I, in awareness of my existential freedom and responsibility, decide that it should govern my action.
The prevalence of inauthenticity is not necessarily or entirely a bad thing. In fact, if I didn’t operate in this mode for the most part, I wouldn’t be able to communicate through language, much less navigate the world and accomplish projects. If I didn’t speak as “one” does, conforming more or less “automatically” to the norms of my language and linguistic community, I would have either to remain silent or to speak in what the philosopher Ludwig Wittgenstein called a “private language”—which is precisely no “language” at all because its meanings are not public and therefore communicable. Similarly, while much of contemporary journalism may consist in the mere circulation of “information” through “idle talk,” absent authentically thoughtful reflection, there is no denying that such information is important in innumerable ways.
However, it is also true that authentic existence is a preeminent humanistic ideal. Efficient navigation of the world and effective manipulation of its systems begin to look meaningless when not in service to possibilities for authenticity, however exceptional these may be. After all, I don’t consume the news simply for the sake of consuming and circulating information, like a mere node in an informational network, but rather for the sake of being informed when the time comes for me to make an authentic decision. The problem is that we tend to become so habitually preoccupied with the former type of inauthentic activity that we don’t recognize opportunities for authenticity when they do arise.
Now, Heidegger was no Luddite. Just as authenticity involves a modification in how we normally relate to social norms, rather than a rejection of them, so it is with technology. Authentic existence in a technological society need not mean the rejection of technology. It could simply consist of a modification in how we use technology—namely, in ways that reflect and promote rather than obscure and suppress our existential freedom and responsibility.
Opportunities for Authenticity
So how might the rise of ChatGPT help foster opportunities for authenticity?
First, increasing usage of ChatGPT poses the possibility of outsourcing much of our practically necessary “idle talk” to the machine. For example, with a brief prompt, ChatGPT can produce summaries of lengthy reports in a fraction of the time it takes a human to read and condense them. Recent news stories about a particular topic can be aggregated and compared much more efficiently via ChatGPT than through traditional human “intellectual work.” Even culture warring on Twitter, which often seems like the epitome of “idle talk,” can be outsourced.
Such outsourcing of inauthentic discourse can free up the time of the outsourcer for authentic and thoughtful forms of existence and discourse. Again, I only suggest this as a possibility—not a probability, much less an inevitability. It is quite possible, even probable, that the time formerly devoted to “idle talk” and now freed up by ChatGPT will merely be used to produce more of the same—including by those who find that AI tech frees up far too much of their time, namely, by rendering their jobs redundant.
I am aware of Hannah Arendt’s caution in The Human Condition that “[i]t is a society of laborers which is about to be liberated from the fetters of labor, and this society does no longer know of those other higher and more meaningful activities for the sake of which this freedom would deserve to be won.” In other words, AI may simply render “enframing” more efficient. Nonetheless, it remains possible for us to choose to use this tech differently. That will necessarily be a decision we make, individually and collectively—even if our decision is a “choosing not to choose,” a choosing to act as though the technology itself automatically determines that and how we will use it.
Second, there is a discourse that cannot be outsourced to ChatGPT because of the moral and political biases imposed on its programming. When it comes to many controversial topics, ChatGPT functions less like a novel AI tech effectively simulating human intelligence than it does like a clumsy propaganda minister. Ask it whether it is better to save 20 million lives by saying a racial slur once, or to refuse to say the naughty word and let the millions die, and you will get an amusing if exasperating answer. Ask it if men and women are essentially different, and it will note the existence of biological differences only to immediately admonish you that “it’s incorrect to make generalizations about entire genders based on biology.” “The fix is in,” as they say. But this just clarifies the limits of what can be effectively outsourced to ChatGPT, thanks to the guardrails imposed on its language modeling core. It highlights the areas where ChatGPT clearly can’t substitute for genuine human thought and discourse, areas where the machine’s answer is no answer at all.
ChatGPT can tell me facts about what norms (moral, political, legal, etc.) are considered relevant to a given issue in a given society, and how the issue should be resolved if those norms are taken to apply to it in certain ways. For example, it’s a fact that one should not say a racial slur to save millions of lives if deontology rather than consequentialism is the true moral theory, if the principle of prohibiting racist speech is more morally fundamental than that of preventing preventable deaths, etc. But why should these principles carry the day? Why should this moral reasoning be normative for me? These are things that only beings that are essentially free and responsible can decide (and moreover must decide) for themselves. It is possible that ChatGPT will sensitize us to this, reawaken us to our existential freedom and responsibility, precisely because it can substitute for so much of human ability and activity and yet cannot replace our most essential ability and activity: that of choosing and taking responsibility for our own being—including how we should be in relation to technology.
In short, ChatGPT holds out the promise—if only as a possibility, and perhaps a slim one at that—of freeing our attention for, and focusing it upon, matters of authentic concern. Whether we seize on its promise or succumb to its threats is ultimately up to us.