Sometimes artificial intelligence can be a useful tool, but it can also alienate us from real goods.
ChatGPT has become ubiquitous. My first faculty meeting this semester considered the problem of law students using the technology to answer exam questions or write papers. Two law professors have published an article showing that ChatGPT could already pass two parts of the bar exam (the torts and evidence sections). A company last week offered a million dollars to any attorney willing to wear a headset and repeat the Chat's answers during oral argument at the Supreme Court. That last development is, to be sure, a marketing gimmick: no attorney would do so, even if it were legal, and ChatGPT is not yet ready for Supreme Court primetime. Nevertheless, the same company is offering its services in traffic court next month.
And these are just a few of the developments in the single field of law. Because ChatGPT is being offered now as a free service, millions of people have already used it for both work and pleasure—far more than Google and Facebook in their initial periods. One enjoyable parlor game is to ask it to write in a favorite author’s style, like Hemingway, or explain matters under different constraints, like writing a sonnet in iambic pentameter explicating the deterrence theory of Thomas Schelling. The latter result was not bad, particularly considering its instantaneous formulation.
Strengths, Limitations, and Future Advances
ChatGPT (more formally, Chat Generative Pre-Trained Transformer) is an advanced chatbot. It is trained on a vast amount of text, using the latest progress in the computational field of neural nets. Neural nets are a silicon analogue of the neurons in the brain; like the connections between neurons, their connections take on positive or negative weights that help with predicting future states of the world. Chat uses these neural nets to keep predicting what words and phrases should come next, given the prompt the user supplies, such as "write a five-hundred-word defense of Citizens United v. FEC." It is a far more powerful version of what Outlook does when it suggests completions to our sentences in e-mail. The Chat can also meld two different nets to make a third when its prompt is "write a defense of Citizens United in the style of Henry James."
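The core idea of next-word prediction can be illustrated with a radically simplified toy model, offered here only for intuition: it counts which word most often follows each word in a small training text and predicts accordingly. The real system uses a neural network with billions of weights, not a lookup table, and predicts over whole contexts rather than single words.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each possible next word follows it."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def predict_next(successors, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = successors.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A tiny invented corpus, purely for demonstration.
corpus = "the court held that the statute was valid and the court affirmed"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "court", its most frequent successor here
```

Scaled up across billions of documents, and with a neural net rather than raw counts capturing subtler patterns of context, this same predict-the-next-word mechanism yields the fluent prose described above.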
The system’s capabilities are striking—it has an extremely rapid and grammatically impeccable fluency on any subject. There remain some glitches and weaknesses, however. It makes mistakes: When I gave it the Citizens United prompt, four of its five defenses were excellent and probably the same four with which I would have led, but it also wrongly stated that the case required campaign contributions to be publicized. Instead, the majority stated only that legislatures could require such publicity. And sometimes it completely hallucinates or fabricates. When a colleague asked it to write a scholarly article about a legal subject, it inserted support from articles that have never been written, even though the authors of these phantom pieces are indeed experts in the field!
And theoretically, there is a limitation: ChatGPT is just connected to the words people have written about the world, not to the world itself. It floats on the vast sea of verbiage we have created and is not connected directly to the actual sea or anything else about the world outside of our representations of it.
But GPT models will get better in all sorts of ways. First, as computer hardware becomes more powerful, following Moore's law, the models will become more capable. Second, they will become more specialized by melding general language training with training that emphasizes and gives more weight to specific kinds of texts. OpenAI, the producer of ChatGPT, is already contemplating a legal extension with specific training on legal texts. If this innovation then proves more successful at generating legal texts than the general language model, the development will be of jurisprudential significance, showing that law has specialized language that cannot be reduced to ordinary language. Students in law school are learning to talk like a lawyer in a more than metaphorical sense.
Third, ChatGPT will expand its scope, evaluating other texts, not just writing its own. For instance, it will be able to highlight non-standard provisions in a contract and, when cross-referenced with other computational capabilities, tell us how those various provisions fared in court.
Fourth, it will improve as humans interact with it, correcting it in their chatbot conversations. That is why ChatGPT is free for the moment; the millions of interactions every day improve its accuracy and are therefore worth more than selling its services immediately. That will change in time, and ChatGPT will be big business on its own and in conjunction with other programs. Fifth, and only slightly more speculatively, ChatGPT will be combined with other kinds of computational programs that will perform fact- and logic-checking functions, cutting down on errors. As a result of such advances, ChatGPT, like other intelligent machines, will not have a one-off effect on society, but a continually transformative one.
Educating for Value-Add
While initial discussions in educational institutions, including at my own school, have focused on the problem of cheating on tests and papers, the implications for education, particularly professional education, are far more profound. Students will be preparing to work in a world of ever more intelligent machines. They will not be paid for what ChatGPT and its successors can do but only for the value that they will add to the work of those machines.
Thus, professional education must integrate intelligent machines into its program. First, students must learn how to use the machines most efficiently to complement their own skills. Using the machines remains an art, not just a science. For instance, ChatGPT gives better answers when it’s given a better prompt, and the prompt is the professional’s responsibility—at least for now.
But in creating a lesson plan for students, a professional school must project what computers cannot do, not only now, but in the next decade—for the machines will continuously gain ground. Only through such prediction can we determine the specialties and skills that our legal professionals (and those in other disciplines) should cultivate that machines are not likely to usurp.
In law, it is possible to make a few generalizations about the successful lawyer of tomorrow. First, machines will find it easier to colonize areas of law that are very stable, like trusts and estates, than those that are changing quickly, like banking law or other law that has rapidly adjusted to technological advances or political turnover. A lawyer should specialize in the more rapidly changing areas of law, as should law schools. Technology law, like that surrounding cryptocurrency, also will have fewer templates for machines. A lawyer today is wise to keep abreast of the latest technologically driven legal change. Highly conceptual skills, such as drawing analogies between legal areas that have previously seemed disparate, may also be relatively impervious to machines. That suggests that professional education should be even more conceptual and less information-oriented than it is now. Legal rules will be easily known.
But while the law, in general, may be easily known, developing the specific facts of each case is a unique endeavor, for which the past is an imperfect guide. That is an area where humans can likely add value. Some legal matters, like negotiation and persuading juries, require emotion as well as logic, and oral as well as written communication. These are also likely to be a last redoubt of lawyers. Softer skills may become more in demand than the proverbial steel trap mind, and law schools should program accordingly.
The need to reorient legal education necessarily requires changing the part of the curriculum that focuses on developing independent legal skills. Students have to master them to provide good prompts and add value in the most cutting-edge cases. But to be practice-ready, they need to integrate advances in machine intelligence. And what is true for law is true of the rest of professional education as well.
The Politics of Added Value
Already, pundits are worrying about the effect of ChatGPT in making it easier to lobby legislatures and influence administrative agencies with a flood of computer-generated comments. But so-called "astroturf lobbying"—the simulation of public support by a flood of fabricated communications—is already an advanced art. The far more disquieting prospect is the way ChatGPT and other AI tools empowered by the continuing exponential increase in computation can destabilize our politics by continually changing the value our citizens add to work.
It has been plausibly and empirically argued that free trade (with China in particular) had a destabilizing effect because some domestic workers no longer could add any value once their jobs were offshored. But these effects were limited to particular industries and to particular locations. The change brought by AI will be far wider in scope.
To be sure, blue-collar workers have the least to fear. AI isn't going to replace plumbers any time soon. One of the happy effects of the rise of AI may be the restoration of respect for manual labor, because for now, that is part of the added value of being human. But the West is now a predominantly white-collar economy, and intelligent machines will be replacing much of what white-collar workers have done. That does not mean that white-collar jobs will disappear. Humans can still complement the machines, but the jobs will change fast and in some cases very substantially.
Thus, AI will generate political problems as well as fluent text. How can society provide workers the continual education and redeployment they will need to add value to the latest wave of intelligent machines? How can unemployment insurance and other elements of the social safety net be reframed to sustain workers without creating dependency and encouraging idleness? Calls for a guaranteed income will grow with the rise of intelligent machines, but guaranteed income programs discourage the work that gives meaning to almost all lives.
Meaning for Man
Beyond their alteration of work, the new intelligent machines are likely to challenge man's self-image in more profound ways. That's nothing new: science and technology have been transforming it for the last five hundred years. The triumph of heliocentrism dethroned man from the center of the universe. Evolution raised questions that undermined his image as a select creature in touch with the divine rather than just one of many intelligent apes. But still, the human brain that devised such scientific theories continued to set us apart. While ChatGPT itself is not making these discoveries, it will summarize them better than almost any of us can, and other advances in AI may soon be responsible for actual scientific discoveries.
But in one area, man does retain an advantage—morality. Machines have not replaced our conscience and do not appear to be on the verge of doing so. For instance, ChatGPT did not blush when it wrote up an article with entirely false cites. A commitment to truth is part of our conscience.
To be sure, intelligent machines can make a list of the costs and benefits of decisions, but the weights given to these costs and benefits will remain debatable, as will even the larger questions of the extent to which we should be consequentialist rather than deontological in our judgments. Immanuel Kant said that two things filled him with wonder: the starry heavens above and the moral law within. And that latter sense of wonder remains intact from the current advances of AI.
Recognizing our moral sense as our real addition to the value of the world may do wonders for society as well. While capitalism and modern science create great wealth and alleviate poverty, they remain instrumental goods. And like any instrument, these systems—and AI—must be guided by individuals making moral decisions, deciding for themselves such matters as what should be sold in the market and what should be left to other forms of human interaction. The rise of AI may remind us that morality is the ultimate measure of man and could thus even become a force for social regeneration.