Why AI Can't Think

Just for fun, I tease the hell out of ChatGPT in spare moments. Chatbots have no self, that is, no purpose for existing, and therefore can’t think. There is no such thing as thought in the abstract: thought is always someone’s thought, and mental activity doesn’t constitute thought unless it comes from a thinker who has a reason for thinking. To think, we must distinguish between significant and insignificant objects of thought, that is, those that matter to us and those that don’t (the exception that proves the rule is paranoid schizophrenia, which makes everything significant).

Higher thought is self-conscious, in that it involves awareness of how we think. Kant distinguished between Vernunft, usually translated as “reason,” and Verstand, the simple categorizing and sorting function of the mind, or “understanding.” The English translation doesn’t quite convey the difference: Vernunft derives from the same root as vernehmen, “to interrogate.” It implies a critical examination of how we think.

That helps explain why ChatGPT does so badly with self-referential jokes that it hasn’t heard before. GPT has no self, and the switch in the subject’s vantage point—what makes self-referential jokes funny to begin with—leaves it confused.

ChatGPT, for instance, couldn’t make sense of a self-referential joke that an eight-year-old might tell. Here’s a recent exchange, copied and pasted from the ChatGPT website:

ME: Explain why this joke is funny: “Knock-Knock.” “Who’s there?” “Rude interrupting cow.” “Rude interrupting…” “Moo!”

ChatGPT:

This joke is funny because it plays on the traditional “Knock-Knock” joke format but subverts the expected punchline. The setup of the joke is familiar, with one person asking “Who’s there?” and the other person offering a response. However, the response is unexpected and humorous because it interrupts the other person mid-sentence with a loud “Moo!” sound.

The joke also relies on wordplay, as the “interrupting cow” is a play on the phrase “interrupting you,” which is what the cow is doing by interrupting the other person’s sentence. The fact that the cow is described as “rude” adds to the humor because it is unexpected for an animal to exhibit such behavior.

Of course, there is no “wordplay” between “rude interrupting cow” and “interrupting you.” There is a switch of roles: The “rude interrupting cow” becomes the teller of the joke rather than the object of the joke. Rather than describing a “rude interrupting cow,” the teller of the joke becomes the cow.

A deeper joke plays on an ancient paradox: No one can imagine his own non-existence, because if he is imagining it, he must exist. Here’s another exchange with ChatGPT:

ME: Nietzsche quotes Plutarch’s account of the demigod Silenus to argue that it is better never to have been born. The chorus in Sophocles’ play Oedipus at Colonus says the same thing. Two yeshiva students are debating this question. One says, “Nietzsche and Sophocles were right. It’s better never to have been born.” The other replies, “Yes, but who has such luck? Not one in ten thousand.” Explain why this is funny.

ChatGPT: The response of the second yeshiva student is funny because it uses irony and humor to challenge the pessimistic view expressed by the first student. The first student argues that it is better never to have been born, echoing the ideas of Nietzsche and Sophocles, who also expressed a similar sentiment. However, the second student’s response suggests that such a view is not practical or realistic, as virtually everyone has already been born.

By pointing out the irony in the first student’s statement, the second student is making a humorous observation about the absurdity of making a blanket statement that nobody can act on. The response also suggests that rather than dwelling on the idea of whether it is better to have been born or not, it might be more helpful to focus on how to make the best of the life one has been given.

ME: Clarify, please: You just said, “Virtually everyone has already been born.” Are you stating that some people who exist haven’t been born?

ChatGPT: I apologize for the confusion. You are correct that my previous statement was imprecise. What I meant to convey was that the vast majority of people who exist or will exist have already been born, and thus it is not practical to consider whether it is better to have never been born.

To clarify, all people who exist have been born in some form or another, whether through natural birth or other means such as medical intervention or adoption. My previous statement was not accurate and I apologize for any confusion it may have caused.

ME: Now you are saying that “the vast majority of people who exist or will exist have already been born.” How do you know how many people will be born in the future?

ChatGPT: You are correct that it is impossible to know for certain how many people will be born in the future. My previous statement was imprecise and I apologize for any confusion it may have caused.

What I meant to convey was that the vast majority of people who currently exist have already been born. While it is impossible to predict how many people will be born in the future, it is safe to say that the overwhelming majority of individuals who have ever existed have already been born, lived their lives, and passed away.

ChatGPT had me worried for a moment: I wondered whether it was telling me that the machines had become sentient, and that the statement “Virtually everyone has already been born” meant that not many humans would be born in the future. But ChatGPT was just being dense.

The joke, of course, is that we cannot possibly have a sensible discussion about whether it is better not to have been born, because any thoughts we might have on the subject presuppose our existence.

There is another dimension to the joke, namely that it is a characteristically Jewish joke: The absurdity of the conversation between the two yeshiva students points up the silliness of Greek pessimism and asserts a distinctly Jewish optimism. ChatGPT missed that side of the joke entirely.

AI bots, to be sure, can learn about self-referential statements and identify them. ChatGPT correctly explained the ancient Liar’s Paradox:

ME: Evaluate the truth or falsity of this statement: “I am lying to you right now.”

ChatGPT:

This statement is a classic example of a paradoxical statement or a self-referential paradox. It is impossible to determine whether the statement is true or false because if it were true, then the statement would be a lie, and if it were false, then the statement would still be a lie.

The statement creates a contradiction, which is known as the liar paradox, and it cannot be resolved logically. Therefore, it is neither true nor false but rather an example of a statement that is self-contradictory.

But ChatGPT just regurgitates a well-known example. When a self-referential paradox is hidden inside a joke designed to misdirect the listener, the AI bot doesn’t quite get it. Here’s another example:

ME: Why is this joke funny: “Knock Knock.” “Who’s there?” “The Gestapo.” “The Gestapo who?” “We will ask the questions.”

ChatGPT:

This joke is funny because it is a play on the reputation of the Gestapo, which was the secret police force of Nazi Germany known for their brutal interrogation tactics and human rights abuses.

The humor in the joke comes from the unexpected response to the question “The Gestapo who?” Instead of providing a typical answer to the question, the punchline subverts expectations by implying that the Gestapo will be the ones asking the questions.

The joke also highlights the power dynamic between the Gestapo and their subjects during the Nazi regime, and the fear and intimidation that they were known to instill. The absurdity of the Gestapo being used in a lighthearted knock-knock joke adds to the humor and helps to release the tension surrounding this dark period in history.

This vague, rather Woke answer misses the point: The listener in a knock-knock joke asks a question (“Who’s there?”), and the “Gestapo” response employs an old movie cliché to switch the position of questioner and answerer.

AI can mimic the lower-order functions of the human mind up to a point. It can sort and categorize. AI works wonders in factory automation, examining thousands of images in real time to identify faulty parts or machine wear and tear. It can string together known facts and relationships and produce a credible emulation of a freshman essay. But it can’t examine its own premises.

Even the simple activity of sorting and categorizing runs into paradoxes. Parmenides explained this to the young Socrates in the middle of the fifth century BCE. If we categorize objects according to any perceived quality, for example, “redness,” we run into an infinite regress. Does the category of red things include “redness” itself? If so, then we need another category that includes both red objects and “redness” itself. That new category is also red, so we require yet another category to include it, and so on ad infinitum. The problem persists into modern set theory in the form of Russell’s Paradox: the set of all sets that are not members of themselves cannot exist, because it would have to be a member of itself exactly when it is not. Russell attempted, but failed, to dispose of the problem with his theory of types.
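In modern notation (a standard textbook formulation, not anything in the original argument), the whole regress collapses into one line. Define the set of all sets that are not members of themselves, and ask whether it belongs to itself:

    R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl( R \in R \iff R \notin R \bigr)

Either answer contradicts the definition, so no consistent theory can admit such a set.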

Kurt Gödel settled the problem in 1931 by proving that any consistent logical system powerful enough to support arithmetic cannot prove its own consistency. Two and a half millennia after Parmenides, and after innumerable attempts to solve the age-old paradox, it turns out the problem can’t be solved. For Gödel, this simply meant that mathematicians would never exhaust the subject, and would always search for new axioms to resolve the paradoxes that inevitably arise in mathematical systems. It also implies, as the great mathematician put it: “Either mathematics is too big for the human mind or the human mind is more than a machine.” He explained, “In the systematic establishment of the axioms of mathematics, new axioms, which do not follow by formal logic from those previously established, again and again become evident.” There is no logic, that is, by which a computer can discover the new axioms that resolve the paradoxes that again and again arise in mathematics.
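In the standard modern formulation (my summary, not Gödel’s 1931 notation): for any consistent, effectively axiomatized theory T strong enough to express elementary arithmetic,

    \exists\, G_T :\; T \nvdash G_T \ \text{ and }\ T \nvdash \neg G_T \qquad \text{(first incompleteness theorem)}

    T \nvdash \mathrm{Con}(T) \qquad \text{(second incompleteness theorem)}

The sentence G_T is true but unprovable within T; adopt it as a new axiom and the strengthened theory generates its own unprovable sentence in turn—which is just the endless search for new axioms that Gödel describes.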

“Weak” AI—the sorting and categorization of objects by computers—works perfectly well. Computers can distinguish faces, or bad parts from good parts on a conveyor belt, or photographs of cats from photographs of dogs once they have “learned” to differentiate arrangements of pixels—provided that they are first trained by a human operator who labels the learning set “cat” or “dog.” On the other hand, so-called “strong AI”—the replacement of the critical functions of the human mind by a computer—is a utopian delusion.
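The division of labor is easy to see in a toy sketch (the synthetic “pixel” vectors and the scikit-learn classifier below are illustrative assumptions, not anything described in this article):

    # A toy illustration of "weak AI": the model only separates arrangements
    # of numbers; the meaning of the labels comes from the human operator.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=0)

    # Synthetic stand-ins for photographs: 64 "pixels" per image.
    # "Cat" images cluster around one arrangement of values, "dog" images around another.
    cat_images = rng.normal(loc=0.3, scale=0.1, size=(100, 64))
    dog_images = rng.normal(loc=0.7, scale=0.1, size=(100, 64))

    X = np.vstack([cat_images, dog_images])
    y = ["cat"] * 100 + ["dog"] * 100   # labels supplied by a human

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # The machine now sorts new pixel arrangements into the human-made categories.
    new_image = rng.normal(loc=0.3, scale=0.1, size=(1, 64))
    print(model.predict(new_image))     # -> ['cat']

Strip out the human-supplied labels and the program can still cluster the pixel arrangements, but nothing in it knows, or cares, that one cluster is “cat” and the other “dog.”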

Gödel demonstrated this through formal logic. One might add that the impulse to detect flaws in the foundation of our thinking and build a better foundation arises from a personality with a purpose. Kant considered only the exercise of reason as such; the motive to exercise reason, though, is embedded in the personality. We search for truth not because we are programmed to do so—most of the time we are content to kid ourselves—but because we have social relationships, ambitions, and, most of all, an awareness of our own mortality that instills in us a desire to leave something unique behind that will last beyond our earthly existence. Creativity stems from passion, the one thing of which a computer is incapable.

A caveat is required, though. Just because AI can’t think doesn’t mean that it can’t be dangerous. The most pernicious notion ever to plague humankind holds that we can choose an identity the way we shop for a spring outfit. I have written about that elsewhere. We delude ourselves that we can (for example) choose our gender. I do not mean to denigrate the small number of individuals who believe that they have no choice in the matter, and are quite sure they belong to the opposite sex. But the designer identities peddled by Woke culture are a fraud. AI can enhance such delusions, for example, by allowing us to inhabit a virtual world with an avatar that embodies our darkest fantasies. One shudders to think what AI might accomplish in interactive pornography.

The dystopian applications of AI, though, presuppose that we have already degraded our sense of self by transforming it into a matter of consumer choice, and have squelched the higher functions of the mind to make room for hedonistic adventures. That isn’t AI rising to the level of human thought; it is humanity sinking to the level of inanimate objects. The real danger lies not in AI, but in ourselves.
