Accelerate Rather than Regulate Artificial Intelligence


Elon Musk is a visionary entrepreneur but a bad social planner. Over the weekend he addressed the National Governors Association and called on its members to sponsor regulatory bodies to direct the development of artificial intelligence. He argued that AI is the "biggest risk we face as a civilization." But our AI policy should be the opposite of what Musk supports. Federal and state governments should not regulate AI; they should help accelerate it. That course is essential to our national security and offers the best hope of stopping malevolent AI, though I do not believe the risk is as great as Musk apparently does.

Musk’s central premise is correct: AI is now making huge progress. In 2011 IBM’s Watson beat the best players at Jeopardy!, showing that AI can now compete in the more fluid world of natural language, not just in games with very formal moves. Just this year, Google’s AlphaGo beat the world’s best Go player, a startling development that came long before most predictions. Unlike chess, Go does not have clear strategies that can be programmed: even great players have a hard time explaining why they move as they do. Google did not program in strategic heuristics; its program learned to play better than champions from millions of moves drawn from human games and from games it played against itself. Thus, as Andrew McAfee and Erik Brynjolfsson note, the victorious program reflected Michael Polanyi’s famous paradox about humans: we know more than we can tell. And this kind of data mining can give AI an intuitive, rather than a formally rule-based, judgment in many other areas. Lawyers, beware: the machines are coming!

But trying to slow down AI in the United States, or to have the government direct and restrict it (which amounts to much the same thing), would only allow other nations to advance AI faster. And since AI is at the heart of modern military operations, the United States would lose its essential military advantage. If the United States remains the best hope for freedom for mankind, certainly as compared to China, our greatest competitor in AI, that is a disastrous geopolitical policy.

Indeed, even without regulation, my great fear is that the United States will fall behind China in developing AI. Given that data is what trains modern AI, China’s sheer size gives it an advantage: it generates more data. And beyond its potentially larger pool of researchers, its universities are more geared to the sciences than ours are. Of course, the United States has advantages of its own, such as better top universities and a more attractive, freer society. Thus, the best thing the United States can do to accelerate AI here is to give a green card, after appropriate security vetting, to anyone with a Ph.D. in computer science from a bona fide university, and to any student accepted into a doctoral program in computer science here. And as I have suggested, it should also accelerate government grants to encourage the development of friendly AI, that is, AI that is not dangerous to humans.

These policies would not only help maintain the security of the United States, but would give us the best chance of forestalling malevolent AI. That kind of AI is more likely to be developed in less free societies, because the social norms of those societies will subject researchers to less criticism for such development. Moreover, accelerating the development of friendlier AI would create better machine intelligence to help forestall the less friendly kind.

Ever stronger AI is on the horizon.  The only question is where it will be developed most quickly. The world will be better off if that place is the United States.     

Reader Discussion

Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.

on July 19, 2017 at 17:28

Make protection of humans from AI's negative effects a problem for AI to solve. It will undoubtedly yield a better solution than a bunch of policymakers, most of whom don't know how to code.

Jon Roland
on July 20, 2017 at 16:39

I am sorry but (I fear) history has shown that we may have more to fear from the noble intentions of "do-gooders" and other *visionaries* (not the AI folks themselves, BTW) and their "patrons" in guvmn't, etc. than from some noted evil-doer.

gabe
on July 25, 2017 at 14:48

I can only assume that AI will eventually be used to support decision making by legislatures, judges and presidents. Because AI tools can be better chess and Go players, I for one look forward to a day when AI supports government office holders. AI will bring about more rational government, even for nations like China, Cuba and North Korea.

I am considering what would be involved in making an educational kids' computer game that had the players create different forms of government and watch what happens to their fictitious society as their government operates. A lot of good economic and political theory would have to go into it. Even though it would only be a game, it would be based on the foundations that an AI program would use to support government decision makers.

Meanwhile don't discount the ability of people to make interesting decisions. These guys are doing some interesting stuff with predictions and better decision making using human group intelligence. They are certainly worth a moment to check out:

Scott Amorian
