
AI's Future: Liberty or License?

In his recent congressional testimony, Sam Altman, the CEO of OpenAI, the company that created ChatGPT, called for the establishment of a new government agency to regulate artificial intelligence. Under Altman's proposal, the agency would require AI companies to obtain a license before developing AI products on a significant scale, with a stringent focus on demonstrating safety. Altman received a warm reception on Capitol Hill from both parties.

But establishing a federal AI licensing agency would be harmful. It would retard AI research because investors would hesitate to back companies that might fail to get a license. And given the speculative nature of the risks posed by novel AI technologies, granting significant discretion to government bureaucrats through the licensing process would open the door for companies to lobby the government to suppress competition. Altman's company, like others already established in the field, would be better able to navigate the bureaucracy than startups would. Shrinking the number of companies entering AI exacerbates the risks we face from problems that AI may help address, such as climate change. It also aids our geopolitical adversaries by limiting American advances in AI. Ironically, an agency with a remit to license only the AI firms it believes are safe would make Americans less safe.

An AI Agency and the Precautionary Principle

The idea of an agency that must license AI companies in advance is an application of the precautionary principle often advocated by environmentalists. The precautionary principle requires the government to take preventive action in the face of uncertainty, shifting the burden of proof to those who want to undertake an innovation to show that it does not cause harm. It holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative.

The precautionary principle has been rightly criticized because it does not sufficiently consider the benefits of innovation that the regulation will prevent. Why should these be discounted more than the risks of harm? Doing so creates obstacles to progress and may create harm itself. Indeed, the precautionary principle becomes self-refuting, because it introduces its own risks by impeding innovation and reducing the wealth necessary to mitigate harm.

But requiring an agency to license only those AI projects that can be shown to be safe before work begins is a bureaucratic instantiation of the precautionary principle. First, the dangers of new AI creations are inherently speculative. What exactly an AI project's emergent properties will be is hard to know before the project is built. Even the creators of ChatGPT were surprised by its capabilities. Second, a licensing regime inevitably prompts bureaucrats to be exceedingly risk-averse, just as the precautionary principle advises. An official who licenses an AI company whose product causes any trouble will be subject to opprobrium. But an official who fails to authorize a new company that would have delivered a great innovation without danger will face little blowback, because our political radar does not register a company's absence.

Threats Mitigated

Applying this precautionary principle to AI research is particularly problematic. Progress in AI is already delivering widespread benefits. Consider the Covid pandemic. AI was crucial in every aspect of the pandemic: in discovering vaccines that may end it, in improving projections of its course to inform better policy, in developing medical treatments to save lives, and in creating ways of living that kept up productivity during the crisis. If Covid had struck just twenty years ago, before the intervening progress in AI, vaccines would not have been deployed as quickly, and treatments would not have improved as rapidly. Without access to virtual ways of living, we would have faced the bitter choice between losing far more productivity and enduring many more deaths.


Further advances may temper other existential risks, like climate change or asteroids hitting the Earth. Regulation that slows AI down will increase mortal dangers outside the area of AI.

Yet another problem is geopolitical. AI is now the most important tool for improving the military. Drones, satellites, and both offensive and defensive missiles are all being continuously improved by AI. One of the greatest risks of AI is that other nations will use it militarily against us. Licensing AI companies on the basis of bureaucratic assessments of safety increases that risk by discouraging homegrown companies. While Altman suggests an international AI agency for regulation, such a body would struggle to be effective. As Tyler Cowen has noted, the International Atomic Energy Agency has had difficulty preventing nuclear proliferation. An international AI agency, even if created, would surely be less successful than the IAEA: it would be harder to verify which nations are pursuing risky lines of AI research than it is to verify which nations are undertaking nuclear research.

Bureaucratic Dilemmas

Finally, any regulation in this field faces crippling legal and bureaucratic dilemmas. Given the speed and unpredictability of developments in a field of accelerating change, it would be impossible to screen AI research for safety with a detailed code of rules; any such code would be rapidly outdated. But if the regulation instead takes the form of a standard, the vagueness required to catch any unsafe research will give regulators enormous discretion. That discretion will in turn deter research and investment in AI, out of fear that lines of research may be unpredictably shut down. A further obstacle is attracting experts to serve as regulators, given the high pay and deep expertise that positions in the AI industry command. Bureaucratic understanding will thus always lag behind AI developments.

The government can better regulate AI within existing legal frameworks. For instance, privacy law can be made to apply to AI, and those who own AI technology can be held liable for any discrimination it facilitates. There is no need for a specialized AI agency with a roving commission that can stop AI research before it is proven harmful.

Government regulation is also not the only way to address potential harm from AI. The industry can suggest best practices. These would be imperfect guardrails, of course, but given insiders’ superior knowledge, they would prove better than government edicts. Moreover, AI itself can address dangers from AI. For instance, many companies are now developing AI that can spot disinformation generated by other AI programs.

It is true that we do not have existing law to address what some fear is the unique existential threat from AI—that it could slip the command of humans and then destroy or control humanity. But there is good reason to doubt an existential threat from AI.

We can divide potential threats into those from a malevolent AI and those from an indifferent AI. A malevolent AI would seek to destroy or subjugate humanity, but it is hard to understand why it would have such a motiveless malignancy. Positing a will to power, as Steven Pinker has noted, confuses intelligence with dominance. To be sure, humans embody a will to power, but that trait is the product of an evolutionary process, not rational design. Intelligence and dominance appear together in humans, but there is no logical reason they must be conjoined. Only if they are conjoined do we need to worry whether the machines have a conscience to restrain them. Even stranger is the idea of a blundering, indifferent AI that wipes out humanity. That would represent a failure not of morality but of understanding the larger context in which the AI was carrying out its tasks. How could an AI possess superintelligence and yet be so ignorant?

Perhaps even more important is the fact that the threat is not imminent. Temporal distance is not a reason to discount an existential threat, of course. We owe equal concern to our children and grandchildren. But distance compounds the problems of bureaucratic regulation. If an AI threat is not imminent, the mechanisms that will lead to it are not imminent either. Given that those future mechanisms are opaque to contemporary regulators, they will not be able to prohibit them. Moreover, we will probably have a better idea of what lines of research are likely to lead to an existential AI threat closer in time to that threat’s existence. Thus, any regulation focused on such a threat should be promulgated only when regulators enjoy greater knowledge.

Advances in AI will continue to reshape our world. Increased machine intelligence may displace jobs, requiring societies to consider how the resulting gains in productivity can be shared. Perhaps more importantly, as with the discoveries of the heliocentric solar system and evolution, the rise of an intelligence even more powerful than our own will raise further questions about man’s place in the universe and our purpose in living. But we are more likely to stay around to answer those questions if we allow AI to develop in America without bureaucratic hindrance so that it is available to address existential threats, including most of all those posed by our all too human adversaries in authoritarian and totalitarian regimes.