
Winning the AI Race

In her excellent essay, Rachel Lomasky ably describes the ways in which artificial intelligence (AI) could transform the economy and workforce. This essay discusses the converse: how political and regulatory responses could affect AI’s development and ultimately do more harm than good.

As Lomasky points out, we have been living with AI for decades. IBM’s Deep Blue supercomputer beat Garry Kasparov at chess in 1997. Match.com has been using chatbots since at least 2017. In the popular consciousness, AI fears have been around even longer: The Terminator came out in 1984, The Matrix in 1999. If people begin to form unhealthy emotional ties with chatbots, one fictional depiction of AI could soon become reality. In the 2013 movie Her, Joaquin Phoenix’s character dates and falls in love with an advanced AI, which ends up dumping him.

Today, with the widespread release of large language models and other generative AI tools, AI euphoria (largely fueled by vast productivity improvements in mundane tasks and activities) and AI dysphoria (usually expressed in hypotheticals about machines taking over) are at record levels. According to one report, by 2030 AI could contribute $15 trillion to the global economy across a range of sectors, from agriculture to defense to healthcare. (By comparison, last year US GDP slightly exceeded $27 trillion.) Earlier this year, Microsoft and a partner lab announced that AI and cloud computing allowed them to discover a new kind of solid-state electrolyte, a material that could use less lithium and pose fewer risks than today’s lithium-ion batteries, and to do so in just eighty hours rather than years. Of course, AI concerns also continue to abound, from job losses to undetectable deepfakes to AI-created biological weapons (and, of course, killer robots).

How should policymakers respond? In a word, caution. To be sure, policymakers should study AI and prepare to fill any gaps in existing laws and policies. For instance, there are good-faith disagreements about compensation, as content creators want payment when language models are trained on their work, and about the propriety of using artificial images in political advertising. Still, existing legal regimes, from copyright to labor to tax, likely can and will address most disputes via time-tested principles, risk management, and transparency. Particularly with a technology that is evolving as quickly as AI, premature or overly burdensome regulation could stifle innovation and delay new products.

In the US and Europe, however, some policymakers appear determined to regulate first, and ask questions about the need for those regulations later. The Biden administration has already issued a lengthy Executive Order that rests on a dubious legal foundation, the Commerce Department has mandated that companies share sensitive data about language models, and the Federal Trade Commission has launched an extensive study and signaled a predisposition to find competitive problems. Likewise, the European Union, which leads the world in regulation but has struggled to build a leading tech sector, has created an AI Office and drafted a law that stringently regulates AI and punishes violators with large fines (note that there may be some correlation, and even causation, between heavy regulations and lackluster innovation). 

This premature zeal to regulate and investigate carries significant risks for both our economic and national security.

The China Challenge

Imagine a world in which China takes a commanding lead over the United States in AI. Because AI can facilitate new discoveries, Chinese companies, rather than American ones, begin to develop better medicines, materials, and manufacturing processes. Because AI can have dual civilian and military uses, the Chinese military, rather than ours, begins to produce the world’s most advanced drones, missiles, and encryption protocols. As a result, the locus of global innovation shifts from America to China and much of the world begins to look to Beijing, rather than Washington, for guidance and leadership. As Chairman Mao might say, political power flows from the source code of advanced AI.

For Americans used to treating technological supremacy as a birthright, it is time to look over their shoulders. The Chinese Communist Party has outlined a plan to lead the world in AI by 2030, with a view toward becoming the world’s sole superpower by 2040. The race is on. Since 2017, China has produced a larger share of the world’s peer-reviewed AI publications than either the US or the EU. In 2022, 80% more AI patents were filed in China than in the US. Earlier this year, China approved more than forty AI models for public use. Although America’s dynamic free market economy has led the world in research and innovation for generations, China’s state-driven economy has several advantages that could allow it to catch or even surpass the US in AI.

First is access to data. Large language models require lots of data to improve their performance. China, with its 1.4 billion people, has few privacy constraints to prevent researchers from accessing all the data they want. According to one report, in China, “the idea of a right to privacy is not respected or thought of nearly the same way as it is in the United States. And for that reason, gargantuan amounts of very finite, very invasive data is collected on behalf of features that are developed for products.” The Chinese government’s lawyers, such as they are, are not exactly combing through Griswold v. Connecticut to evaluate the parameters of its citizens’ right to privacy. 

Second, Chinese companies have the support of the state’s resources. The Chinese government is investing the equivalent of billions of dollars in research and AI startups, unconstrained by oversight or quarterly earnings reports. Industrial policy often tends to harm a nation’s economy and stifle innovation over the long haul, but in the short term, these state resources can give Chinese competitors a boost.

Finally, China has been willing to support cyberattacks, espionage, and outright theft to improve its AI capabilities. Researchers estimate that China steals $500 billion in intellectual property annually. In a recent interview, FBI Director Wray said China is running “the biggest hacking program in the world by far, bigger than every other major nation combined.” According to Representative Darrell Issa (R-CA), “If China wins the AI arms race, their ability to steal technology and harm not just our country but the free world will be permanent.”

For these reasons, Chinese AI primacy could undermine freedom and democracy around the globe. China “can use AI to increase its authoritarian hold of people, advance its cyber espionage strategy and interfere with elections,” as they recently attempted to do in Taiwan. The US should want AI to develop “according to our norms and ethics, which is the antithesis of how China is using it against their citizens … through surveillance [and] oppression of their minority groups.” 

In her optimistic essay, Lomasky outlines an AI governance framework consistent with Western values, a “Constitutional AI” that contains a series of ethical guidelines, which one could easily imagine evolving into a version of Isaac Asimov’s genial Three Laws of Robotics, designed to protect mankind. An AI governance framework designed by the Chinese Communist Party would look very different.

The Domestic Dilemma

Although Chinese competition counsels regulatory caution, many policymakers are already seeking to create new rules even in the absence of demonstrated regulatory shortcomings. In Congress, several senators introduced a bill to create a new regulatory agency. According to them, AI is a threat to be managed rather than an opportunity to be nurtured: Congress “must create a new agency with … meaningful enforcement authority to regulate these firms … [to] mitigate the risks of AI while simultaneously addressing the harms American families and businesses experience every day in our digital world.” In its rush to regulate, the White House relied on the Defense Production Act to issue an executive order regarding AI, even though that Act was intended only for national emergencies.

Federal agencies are falling in line, to the detriment of innovation. The Commerce Department’s new reporting rules will add time and expense to the development of new language models and could deter some experimentation altogether. The Federal Trade Commission, which is battling with the Department of Justice for the law enforcement lead, has already signaled its desire to find competitive problems in the AI space. In a lengthy discussion, the FTC explained that the investment of large tech companies creates a series of competitive problems, from access to data and engineering expertise to network effects and noncompete clauses. The FTC also declared open AI models problematic because, at some future point, a company could choose to close the model. 

This regulatory rush raises a series of questions. Are there gaps in existing laws and regulations that leave them unable to address the most common and immediate challenges, such as consumer scams, discrimination, and copyright abuse? Many officials think not. In a joint statement from the FTC, DOJ, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission, officials stated that they have sufficient authority to “protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.” Moreover, generative AI exploded in popularity only recently, and the court system has barely begun to work through the myriad associated issues and to apply existing doctrines to these new technologies. Perhaps time and experience will reveal a strong need for new rules, or perhaps not, but premature regulation carries risks of its own.

Is there a concern about a lack of investment, innovation, or market competition that would necessitate antitrust scrutiny? In the US alone, dozens of companies are investing billions of dollars into AI, from the biggest tech companies to midsize players to smaller companies that are seeking a foothold, such as Elon Musk’s firm xAI. Moreover, open AI architectures provide plenty of opportunities for smaller companies to develop new products. Outside our borders, China and other parts of the world, including Europe, are also investing heavily in AI. 

So why should policymakers intervene in this seemingly competitive and innovative market? In some respects, many of today’s regulators resemble Isaiah Berlin’s famous hedgehog, who knows only one big thing, and that one thing is that America’s tech sector is responsible for many of the world’s ills. Perhaps policymakers should instead follow the military strategists who recognize that the world is changing rapidly and that the pace of change calls for more fox-like skills and child-like creativity. AI itself resembles the fox: a revolutionary technology that adjusts and adapts with each new input. Let’s give AI time and space to grow and mature, properly nurtured and constrained by the same legal and regulatory frameworks that helped create this technology in the first place. Even the most unruly children have the potential to become mature and productive adults.