
The Most Important Charitable Initiative of the Year: OpenAI

A group of billionaires, including Peter Thiel and Elon Musk, has established a new initiative called OpenAI. It will attempt to accelerate research into artificial intelligence (AI), but in a way that ensures the resulting AI will be “friendly.” In my view, this is the most important philanthropic initiative of the year, perhaps of the decade, because it addresses a crucial issue of our time: the dangers posed by the accelerating pace of technological change.

The development of AI can itself help us navigate the rapids ahead, because progress in artificial intelligence can aid in assessing, more accurately and quickly, the consequences of social policy for other forms of accelerating technology, such as nanotechnology and biotechnology. More powerful machine intelligence can process data, simulate the world to test the effects of proposed policies, and offer hypotheses about the effects of past ones.

But as Musk and Stephen Hawking have argued, strong AI (a general-purpose intelligence that approximates that of humans) could also threaten humanity, because it might prove impossible to control. Man will be in the unhappy position of the sorcerer’s apprentice: too weak to master the master machines. No amount of government regulation will be able to avoid this risk, given that the economic and national security returns to building stronger AI are enormous. Moreover, research into AI is hard to detect and prevent, because it does not require much infrastructure.

Thus, the only way to forestall malevolent AI is to accelerate research into so-called friendly AI: AI designed to live peaceably in the human community. If friendly artificial intelligence maintains a head start, it can help prevent the dangers that could emerge from other kinds of artificial intelligence. To be sure, this approach is no sure route to success, but it seems far more fruitful than any kind of government regulation.

And it is best that this initiative be undertaken privately rather than by the government. Because governments are naturally focused on using AI for national security, we can never be sure that the research they direct into friendly AI will not be distorted by that objective.

There is a political lesson in this as well. Private charity can carry out some projects that government cannot. And some of these projects are of such a scale that only the very rich can fund them. We have yet another reason to be grateful to the one percent.

Reader Discussion


z9z99, on December 14, 2015 at 11:11 am

What makes something good or bad is, for the most part, a value judgment. For all its dazzling potential, artificial intelligence is still pretty much curve fitting, pattern recognition, and probabilistic extrapolation. It is quite possible that AI systems will do these things much better than humans, and as a result have some undesirable consequences, but in this it is no different from any number of technologies: gunpowder, nuclear weapons, nerve gas, psychoactive drugs, etc. Whether these are good or bad in a given context depends not so much on the technologies themselves as on what human beings, with their own complex psychologies, do with them.
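To make the commenter’s characterization concrete, here is a minimal sketch (in Python with numpy; my own illustration, not anything from the comment itself) of the kind of least-squares curve fitting and extrapolation being described:

```python
# A minimal illustration of "curve fitting and probabilistic extrapolation":
# fit a low-degree polynomial to noisy observations, then extrapolate.
import numpy as np

rng = np.random.default_rng(seed=0)

# Noisy observations of an underlying quadratic trend (hypothetical data).
x = np.linspace(0, 10, 50)
y = 0.5 * x**2 - 2.0 * x + 3.0 + rng.normal(scale=2.0, size=x.shape)

# "Learning" here is just least-squares curve fitting.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

# "Prediction" is extrapolation from the fitted curve.
print("fitted coefficients:", coeffs)
print("prediction at x = 12:", model(12.0))
```

On this view, the system has no values of its own; it merely recovers a pattern from data and projects it forward, which is the commenter’s point about where good and bad actually enter.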

The distinction between "strong" and "friendly" artificial intelligence is, well, artificial. Could an AI application be developed to discern the distinction between the two? To which group of AI would that application belong? Would the difference between "friendly" and "strong" AI be that the former lacks something found in the latter, or the other way around? Is friendly AI an amputated version of a more sinister counterpart? And if so, does it matter? In what circumstances would the two versions of AI yield different results? What would happen if either version were presented with what Nazi Germany referred to as the "Jewish Question"? Would Reinhard Heydrich have been able to use "friendly" AI in such a manner as to make any inherent distinction in AI types superfluous? Will human malevolence bridge the gap between good and bad technology?

There may be some theoretical concern that some AI thingy or other will run amok, but this is no different from the concern that a particular person with control of other frightening technologies may run amok as well. Many technologies are Pandora's boxes of potential terrors that appeal to the imagination: ghost stories for eggheads. But the real evils that lurk in human life come from the same place they always have: evil people.

Pingback, on December 18, 2015: Law & Speculative Fiction Round-up | Every Day Should Be Tuesday
