How AI Could Save Liberal Education

Even before the publication of Stephen Marche’s Atlantic piece, “The College Essay is Dead,” there had already been discussions about AI writing programs like ChatGPT in the academy. But the past few months have seen a flurry of activity, with college administrators calling emergency meetings, professors revising their assignments, and educators writing essays (some perhaps written by AI?) that range in reaction from the nonchalant to the apocalyptic about the fate of college writing, the future of liberal education, and the outlook for higher education.

While the glitches and weaknesses of ChatGPT have been pointed out, these presumably will be corrected over time as the AI technology improves and its data set enlarges. There has also been discussion about cybersecurity and the economic and political implications that AI poses for societies. But most of the public discourse so far has been about higher education.

Unlike most academics who are skeptical, suspicious, or resigned about ChatGPT, I am hopeful, believing that AI could offer a genuine path for liberal education to revitalize itself in the university. Keep in mind I am not arguing from some techno-utopian perspective where transhumanism is the answer to everything—I also have reservations and concerns about the ubiquitous adoption of technology in our contemporary lives. But I do think technologies like ChatGPT could return American higher education to the fundamental questions of human identity, meaning, and flourishing that have been pushed aside the past fifty years for economic credentialing.

The Reactions So Far

For those not familiar with AI programs like ChatGPT, they are chatbots—computer programs that simulate conversation with humans—built to predict which words and phrases should come next. Such models improve as they are trained on more data, drawn from human interaction and from texts like articles, books, and websites. The GPT-3 model, for example, was “trained” on hundreds of billions of words of text. While there are other AI chatbot programs, ChatGPT received the most attention when it launched this past November. That is mainly because its technology convincingly mimicked human writing, though OpenAI, the company behind ChatGPT, also had a superb marketing strategy, one that contributed to the company’s estimated $29 billion valuation.
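The core task of “predicting what comes next” can be illustrated with a toy sketch. The example below is a vastly simplified stand-in for how models like GPT-3 actually work: it merely counts which word follows which in a tiny corpus, whereas real chatbots use neural networks with billions of parameters trained on enormous text collections. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "language model": tally which word follows which in a tiny corpus,
# then predict the most frequent successor. Real systems like ChatGPT use
# deep neural networks over vastly more text, but the underlying task is
# the same: given the preceding words, predict what should come next.
corpus = "the essay is dead the essay is alive the essay lives on".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("essay"))  # "is" (follows "essay" twice, vs. "lives" once)
```

The leap from this word-counting sketch to ChatGPT is one of scale and architecture, not of task: both are machines for continuing text plausibly.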

The most common response from the academy has been resignation mixed with suspicion. Faculty know there is nothing they can do to stop AI from entering the academy. All that is left is to adjust and accommodate in the hope that professors can retire before human teaching is entirely replaced. Recommendations include in-class writing examinations, assigning content behind a paywall, adopting show-and-tell exercises, and even employing AI itself to teach students how to write better. While these suggestions are useful in that they identify what constitutes human writing, they are only stop-gap measures before AI passes the Turing Test.

Strangely, one response to ChatGPT is notably absent: an eagerness to discuss how AI can help students with disabilities. One would think that ChatGPT could help such students learn to write better or complete their writing assignments. AI could open the doors of higher education to a new set of students who may have difficulty with writing assignments, for example those with dyslexia or dysgraphia. In this sense, AI could expand access to higher education in ways previously thought impossible.

A second response has been skepticism. AI will not replace human writing because AI does not interact directly with the world and therefore is unable to represent it as humans do. As John O. McGinnis puts it, “ChatGPT is just connected to the words people have written about the world, not to the world itself. It floats on the vast sea of verbiage we have created and is not connected directly to the actual sea.” Practically, this is evident when professors enter a writing prompt into the AI and judge the output acceptable in structure (the five-paragraph composition essay, for example) but point out factual mistakes or raise aesthetic objections to the writing, like its flat “voice” or its inability to engage the reader emotionally. According to this group, ChatGPT is a good facsimile of freshman undergraduate writing, but it is only a facsimile.

I suspect, however, that these obstacles will be overcome in time as the technology improves. AI’s neural networks will eventually process data more efficiently, and their capacity to learn will grow as they are trained on additional texts and human interactions. This also raises the philosophical, Philip K. Dick type of question of what actually constitutes human writing if an AI can do it as well as people (a question better pursued another time). As for AI not being connected to the “actual sea” of reality, humans have been interfacing with AI for years now, from purchasing airline tickets to visiting the doctor. Unfortunately for the skeptics, AI has been here for a while and is not going away.

A third response, a variation of the second, has been the belief that ChatGPT will never replace human skills like critical and creative thinking. Human writing may be replaced by AI, but not human thinking itself. This may not be true, however, as evidenced by AI programs like AlphaGo, which defeated the world’s best Go players, a feat that required both critical and creative thinking. If AI can think critically and creatively, there is no reason it could not be designed to educate students in these skills in the near future.

In fact, ChatGPT’s ability to convincingly mimic human writing is itself a reflection of critical and creative thinking. Like numerical manipulation, writing is essentially about solving problems, whether they concern politics, policy, philosophy, aesthetics, or something else. What makes ChatGPT so unnerving to so many, I think, is that writing is perceived as a uniquely human endeavor, unlike numerical calculation. But both tasks, whether solving interpolation problems or writing philosophical essays, are fundamentally the same: they are exercises in problem-solving, in thinking critically and creatively. AI programs can now do this both numerically and linguistically, albeit the latter imperfectly for the moment. With the advent of AI, the rationale that only humans can teach students critical and creative thinking has a limited shelf life.

Perhaps more controversially, it is not clear that universities actually teach critical thinking to their students. With every college course now required to articulate its student learning outcomes (SLOs)—outcomes that have to be quantified and measured—in order to demonstrate students are learning, one wonders, are these SLOs really telling us anything of value? Now that assessment drives academic content in American universities, the result is a flat account of critical thinking, where one number represents excellence and another number mediocrity. As one essay’s subheading about ChatGPT states, “In a world where students are taught to write like robots, it’s no surprise that a robot can write for them.”

Embrace the Future

The conversation about ChatGPT so far has mainly focused on the effects it will have on the humanities. Putting aside the ideological nature of the humanities and the problems that assessment poses to student learning, I think that humanities professors will be better off than their faculty peers when AI is fully adopted by the university. The problem won’t be the mass unemployment of English, history, philosophy, classics, or theology professors (that is already a problem); rather, it will be the mass unemployment of STEM (science, technology, engineering, mathematics) and pre-professional faculty. AI presents a greater threat to faculty teaching courses like Fixed Income Securities, Health Assessment, and Biochemistry than to faculty helping students understand and enjoy the texts of Homer, Aquinas, and Nietzsche.

In other words, the subjects taught by STEM and pre-professional faculty appear most likely to be replaced by AI in the future. These subjects require quantitative critical thinking in making assessments about populations—something AI now does as well as, if not better than, humans. For example, some AI programs have better diagnostic accuracy than human doctors, and last year an AI’s stock picks generated a higher price return than the S&P 500. Some are currently discussing whether AI will replace engineers, nurses, and accountants in the near future. The question parents should ask their college-aged children now is not what they are going to do with that English degree, but rather, will there even be a job available when their civil engineering, nursing, or accounting major graduates?

Even more depressing for STEM and pre-professional faculty is the rise of alternative credentialing programs. Businesses like Google, Bank of America, GM, IBM, and Tesla have removed the college degree requirement for many positions in their companies. In some states, one can become a teacher at a private school without holding an education degree. As AI improves its numerical and linguistic critical thinking, companies are likely to incorporate it into their pre-screening and training of employees. There is also great potential for growth in alternative credentialing agencies, which can certify students in particular skills, much of it likely available free online. All these trends challenge the university’s primary status as a credentialer, signaling to employers who can think and write.

This in turn raises the question of why parents should shell out tens of thousands of dollars every year for their children to attend college when they can learn free online, get credentialed elsewhere cheaper and quicker, or be trained by their employer. For the elite universities—the Harvards, the Yales, the Stanfords—this is not likely to be an issue, because the opportunity to network with the children of the elite will outweigh any financial cost or lack of learning. But for mid- and low-tier institutions, such as public regional comprehensive schools, AI poses an existential threat, especially if their funding model depends on STEM and pre-professional students. Granted, this process may take a few years or a few generations, but at some point the rationale for universities to teach STEM and pre-professional students will weaken, if not disappear outright.

If the news about AI is bad for schools that rely on their STEM and pre-professional programs, it could be good for those universities with a clearly defined mission and identity rooted in liberal education. If liberal education means studying something for its own sake in order to reflect upon who we are and what our purpose in life is, then this is best accomplished by studying the humanities. By reading and discussing literature, history, philosophy, and the other traditions of the humanities, students learn the inherent value of liberal education—to be free from the demands of necessity and the call of utility in order to connect with what authentically makes one a human being.

With AI, the point of university education might shift. It would no longer be about the acquisition of economic or critical skills, but about becoming a free and reflective human being. One would enroll in college because it is understood primarily as an intrinsic good for human flourishing. If you just want a job, go learn AI on the Internet (although conceivably AI could be incorporated into a liberal education). Strangely, we may in the future return to Plato’s Academy and the medieval University of Bologna, where higher education was about contemplative learning, allowing students to reflect upon the fundamental and existential questions of identity, meaning, and purpose in their lives.

One potential concern is whether liberal education would be reserved only for the elite—truly mirroring Plato’s Academy, where only the upper class could participate—while most of the populace is trained by, or replaced by, AI. This is particularly problematic in a democratic society where inequality is currently a prominent topic of public discourse. But it is also possible that a widely accessible liberal education may emerge, as evident in the rise of the classical school movement, which places the humanities at the heart of its curriculum, or in past efforts like Robert Hutchins’s and Mortimer Adler’s Great Books program, whose founders believed liberal education was necessary for the survival of democracy.

Since the turn of the century, concerns about the place and relevance of liberal education in the American university have continued unabated. ChatGPT appears to put another nail in the coffin of liberal education; however, a closer look suggests it could be the key to liberal education’s resurrection. With employment demands, assessment requirements, and skill training gone, what is left for the university to do in the age of AI? To study things for their own sake—and only liberal education can provide that.