The Promise of AI

I have thoroughly enjoyed the rousing discussion following my original essay, with Asheesh Agarwal and Vinay Agrawal explaining the importance of keeping pace with China in the development of AI, and Dominique Lazanski reflecting on the implications of AI for the freedoms that Americans and Europeans value so highly. Brandon Weichert raises more doubts about the value of generative AI to society in the long term, so my response will focus primarily on his arguments, addressing the other two essays as appropriate.

Weichert begins his response with a quote about the importance of humans being generalists—and then, bafflingly, spends the rest of the piece rejecting a leading technology to enable this. Generative AI arguably exceeds any other recent technology in aggregating and presenting information in a manner that allows the amateur to learn the skills to perform a wide variety of tasks (YouTube is likely the other contender for this honor). The examples of generative AI allowing amateurs to perform better are numerous, including debugging “that sound” in the car, creating art, writing like a legal or real estate expert, or even creating a recipe with the ingredients that are in the pantry. Weichert implies that allowing AI to disseminate this information will somehow make humanity into specialists, through a method he declines to explain.

Generative AI is not unique in democratizing access to skills, making quasi-specialists out of generalists. Humanity continues to grow more prosperous largely because this is the role that technology has always played in society. While there are probably earlier examples, the invention of writing and its simplification into alphabets and syllabaries led to an explosion of generalization. Instead of being limited to what they could memorize, people could gain knowledge from the writings of practitioners and scholars.

The printing press accelerated this trend toward polymaths, making it exponentially less expensive to disseminate information. Of course, like generative AI today, the printing press had detractors urging a Precautionary Principle. Religious institutions and governments feared losing their authority and saw the impending demise of their monopolies on influence. Amid the turmoil, it would have been far harder to foresee the Enlightenment. Weichert’s piece echoes similar messages, as do many incumbents worried that they will lose their power as specialists. The screenwriters’ strike was driven by concern that generative AI would democratize parts of their jobs. Similarly, the American Federation of Musicians struck in 1942, as recording technology allowed anyone to play music without spending years learning a particular instrument. Inexpensive recorded music certainly hurt professional musicians in the short term. In the long term, however, it was an incredible boon to society, expanding the total music market and making more total money for musicians.

Another classic example is the Luddites, specialist weavers displaced by mechanized looms that could be operated by unskilled labor (generalists, not specialists). Today, “Luddite” is a synonym for technophobe, fueled by a lack of imagination and a good dose of the Precautionary Principle. More modern Luddites fought the telephone, fearing it would put the telegraph out of business (and of course, it did). They could perhaps have predicted that the telephone industry itself would become a major employer, far exceeding the number of jobs in the telegraph industry. It would have been harder to see how it would streamline transportation, extend the reach of businesses, revolutionize emergency response, or eventually become cheap enough that kids could regularly talk to Grandma in another city. It would perhaps have been impossible to imagine the longer-term effects of technologies like answering machines, fax machines, and the Internet. We are in a similar state today with generative AI, before the compound benefits to humanity have even begun.

Joseph Schumpeter coined the term “creative destruction” for this phenomenon, in which new technologies and processes inherently bring both progress and disruption. He noted that society is in constant flux as new ideas and technologies emerge. This churn drives innovation, economic growth, higher living standards, and greater overall prosperity. However, it also means job losses and, occasionally, the collapse of entire industries. It may take a little faith to see the “creative” for generative AI amid the “destruction.” As both Lazanski and I have noted, we are already seeing benefits from generative AI, particularly in medicine and drug discovery. But the full effects, including many of the secondary effects, still lie in our future.

Technology more often replaces the constituent tasks that compose jobs rather than the jobs themselves. For example, GPS has replaced “The Knowledge,” the legendary test that required London taxi drivers to memorize 25,000 streets. People still demand to be driven around; the years of specialized knowledge acquisition are simply no longer necessary. In fact, GPS-assisted drivers have an advantage over those who merely memorized the streets, because they have real-time traffic information. Likewise, AI will not replace screenwriters but will accelerate their work. GitHub Copilot makes developers more productive, with a disproportionate effect on less experienced coders. We are seeing intelligence augmentation (IA) rather than AI. Doomsayers concede that previous technological advances have largely increased human freedom and prosperity, yet they always predict that this time will be different. Perhaps it will. But the most likely outcome is that industries will shift, as will people’s jobs, and society will become more prosperous.

Even slowing the adoption of technology, for instance by adopting the six-month pause advocated by the Future of Life Institute, can harm society. The Ottoman Empire delayed adopting the printing press, citing caution but likely worried it would diminish religious authority. Exclusion from this vital technology caused it to fall behind Europe, and the same thing could happen today. The Agarwals have explained that Western reluctance to adopt generative AI could lead to Chinese supremacy. American and European regulation will not stop generative AI from being developed, but it may hand the advantage to those less likely to conform to our ethics. The question is not whether generative AI will be developed, but by whom. Thus, we must be careful not to hobble the nascent Western generative AI industry with a harsh regulatory regime.

The heavy regulatory regime proposed by Weichert would retard Western progress well beyond AI. In Europe, similar regulation, the General Data Protection Regulation (GDPR), has squashed startups and small businesses, which cannot afford to implement such onerous requirements and fear the risks of misinterpreting its vague sections. Companies that do meet the requirements are shackled by a lack of data for AI and machine learning, hindering Europe from developing novel algorithms. Consumers cannot decide their own trade-off between privacy and the use of innovative tools. Thus, GDPR has left Europe with significantly less venture capital funding, no significant foundation models (along the lines of Gemini and ChatGPT), and fewer AI startups.

Likewise, Weichert makes the classic mistake of blaming AI for the problems of Big Data. Without Big Data, AI isn’t terribly powerful, but large data collections did not begin with AI, never mind generative AI. Companies and governments have been collecting data on their customers and citizens for thousands of years; the Bible, for instance, includes a lengthy account of Moses conducting a census. Government data collection is alarming, particularly surveillance and biometrics, but it is largely orthogonal to generative AI.

Warfare remains a nebulous topic in many dimensions, AI included. It is easy to imagine a nightmare scenario because war is a nightmare. Perhaps war will become unmanned or end altogether, as Lazanski predicts. Perhaps, as the Agarwals note, it will continue to shift to the digital front, becoming cyber warfare, which would take far fewer lives. It is notoriously hard to predict.

However, there seems to be a negative correlation between the most terrible weapons and the deaths they cause. The effects of chlorine gas in WWI proved so horrific that treaties banned its use. Not even the Nazis deployed poison gases on the battlefield in WWII, although they sat in their arsenals. Since the end of WWII, nuclear weapons have never been used in war. Perhaps drone soldiers will be considered so horrific that they will never be deployed. But countries should invest so that their defense consists of other drone soldiers, not humans.

All of this is to say that it is easy to be short-sighted and fearful about generative AI, or any other novel technology. But a complete view must include a glimpse at history, which shows a positive correlation between technological innovation and human progress. Society must fight against the policymakers, eager to solve a problem that doesn’t exist so they can get credit for its absence. It must also be wary of incumbent market leaders, eager to use fear to promote regulatory capture. This is not to deny that generative AI will transform parts of some jobs, or that some companies will fall as others rise. But rather than fighting against the tide, humanity should harness it.