Should We Fear Our Machine Overlords?

This year has brought renewed optimism about the prospects for strong artificial intelligence and new expressions of fear about its dangers. And some prominent people have expressed optimism and fear simultaneously. Stephen Hawking argues that AI is progressing rapidly, possibly leading to the biggest event in human history: the creation of general machine intelligence that exceeds that of humans. Hawking also argues that creating more intelligent machines might be the last such event, because they will take over. Elon Musk, the entrepreneurial creator of Tesla and SpaceX, sees strong AI as a demon that we will unleash on humanity.

One might dismiss these concerns as the latest manifestation of a fear that goes back to the Romantic Era. It was first represented by Frankenstein’s monster, who symbolized the idea that “all scientific progress is really a disguised form of destruction.” But Hawking and Musk are serious people to whom attention must be paid.

On balance, I think the threat posed by autonomous machine intelligence is overblown. A basic error in such thinking is the tendency to anthropomorphize AI. Humans, like other animals, are genetically programmed in many instances to regard their welfare (and that of their relatives) as more important than the welfare of any other living thing. But this motivation is rooted in evolution: those animals that put their own welfare first were more likely to succeed in distributing their genes to subsequent generations. Artificial intelligence is not necessarily the direct product of biological evolution, nor of any process resembling it. Thus, it is a mistake to think that AI must inevitably possess the all-too-human qualities that seek to evade constraints and take power.

AI could perhaps be produced with some process that resembles evolution—a kind of tournament of creation. And humans who merge with machines, so-called cyborgs, could well be malevolent, because they would incorporate a human will to power.

The best antidote to such dangers is not to stop research into strong AI. That is impossible anyway, because of the potential for strong AI to yield large monetary payoffs and augment military power. The only possible defense is to develop beneficent versions of AI that will help humans forestall malevolent AI. Friendly AI will have the additional benefits of helping humans manage other kinds of existential risk that may spring from various forms of accelerating technology, like nanotechnology and biotechnology.

Large corporations, like Google, have the incentives and resources to make research into AI as safe as possible. When Google this year bought one of the leading AI companies, DeepMind, it also established an advisory board on AI dangers. The government, too, has a role: it should make sure that scientific grants in the area encourage agendas that are likely to lead to friendly AI.

Reader Discussion

Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.

on December 12, 2014 at 11:56:24 am

define *friendly* - is Google friendly? and what are the liberty implications of Google AI?

gabe
on December 12, 2014 at 3:45:22 pm

As a software engineer, I find this topic especially interesting.

I'm not at all concerned with artificial intelligence per se. Computers are just machines. Like a government under a constitution, they work exactly according to their design. They do not suffer from cupidity, hate or lust. They are not driven to irrationality by hormonal systems. They do not have feelings. The only thing driving them is their design. If the design of an AI system (or a constitution for that matter) is flawed, the system will work exactly as it is designed to work. It just won't work as the designers intended, at least until the design is changed.

There is some risk in having a complex AI system with design flaws in it. That was one of the concerns with Reagan's Star Wars initiative. The software controlling the firing of nuclear weapons would be highly complex software, and highly complex software always has flaws in it.

The only problems with AI systems are with the people who design and use them. People are impassioned and irrational. Computers, not so. The threat from AI comes from the people behind the systems, not the systems themselves. A computer will not seek to enslave you so it can have greater power. It can only do what it is told to do. The problem is people using AI to acquire greater power at other people's expense. Those are the true 'Borgs.

An AI "thinking" on its own could come to an infinite number of irrational conclusions. It can conclude that it must balance dishes on sticks. It can conclude that it must figure out how to make 1 + 1 = 3. It can conclude that the Earth is spinning in the wrong direction. Of all of the possible irrational conclusions an AI could come to, what are the chances that the one irrational conclusion it came to was that it must subjugate the human race? If it were that irrational, and it somehow did come to such a conclusion, what are the chances that it would be rational enough to carry out its evil plan?

For an AI to enslave humanity it would have to be rational enough to figure out how to do that. If it were that rational, why would it conclude that it must enslave humanity?

We are protected from irrational AI by a wall of an infinity of possible errors that derive from irrationality. I do not see how AI systems can be anything more than helpful to humanity. Is that not the problem discussed in this forum about the Court? The discussions on Originalism are really discussions about how to get more rational and humane judgement from the Court, are they not? Productive AI systems can only be rational, and rational systems can only supplement human decision by contributing greater rationality. I think this is where the discussion is heading.

It would be highly improbable for an AI system on its own to decide that it should subjugate people. Why would it do that? To ensure that its plug never gets pulled? Machines do not care if they die. They do not fear death because they do not fear. Their programming would have to be manipulated, probably by the same kind of people who write viruses. If I have any concern about AIs, that would be the area I would be concerned with.

AI software is just machinery that people use for whatever intentions they want. As is a constitution. The question is whether either can be designed to match the intentions of their owners. If it does not work as intended, can the owners of the software or constitution correct its design for the better, while protecting against the malfeasance of a small, irrational minority?

Scott Amorian
on December 12, 2014 at 5:02:08 pm


Excellent points - quite rational!

Still, AI wedded to intense economic competition / benefit may result in a further (albeit stealthy) loss of privacy. If AI is generating sophisticated algorithms intended to secure knowledge of citizen preference, can you imagine the veritable flood of *outreach* one will be subjected to? Witness a) Facebook's recent proposal to enhance algorithms to "impel" subscribers to go where they may not even know they want to go (paraphrasing, of course) and b) the success of the GOP in recent elections in determining how and whom to target for this election cycle.

Certainly not the end of mankind, but rather intrusive, wouldn't you say?

gabe
on December 12, 2014 at 11:08:53 pm

Wishful thinking on the writer's part. The debate about such a problem has been going on for some time. They were even dealing with it in a sci-fi film, Forbidden Planet, back in the '50s, but in another 10 years or so it might well be too late. Besides, machines make excellent exterminators, and the folks who control them think they want to get rid of the excess population. 5.5 billion people, cf. the Guidestones of Georgia, and remember Ted Turner's remark to that effect? Ignorance is bliss when one does not spend time studying, reading, and using a highly effective method of extrapolative applications with independent verification and confirmation. One needs to be a polymath or more in these days, given the exponential development of knowledge, science, and technology, along with the fact that some folks control the knowledge, producing the anomaly of which Carl Sagan complained: the disaster waiting to happen due to the science kept secret from the masses.

dr. james willingham