
Regulating Autonomous Weapons on the Battlefield

A recent meeting in Geneva on the implementation of the Convention on Certain Conventional Weapons focused on regulating autonomous weapons. Autonomous weapons are systems that decide to deploy lethal force without direct human control. Imagine, for instance, drones guided by sensors and preprogrammed algorithms that would choose for themselves the time and place to release their deadly missiles.

There was substantial sentiment at the meeting for banning such weapons. A ban, however, would prove an enormous mistake. It would harm the interests of the United States and make for a less peaceful world.

The first problem with such a ban is that it would be difficult, if not impossible, to verify. Autonomous systems depend on AI programs, which, unlike nuclear weapons, are very easy to hide. Moreover, autonomy is a matter of degree: limited human oversight would be hard to distinguish from full autonomy. The lack of verifiability would empower rogue nations in the arms race that has characterized military competition since the beginning of civilization. In the world of tomorrow, that arms race will be paced by robotics and machine intelligence.

Second, because of its technological superiority, the West in general, and the United States in particular, has an advantage in developing these weapons. Robotic weapons, even if not yet autonomous, have been among the most successful in the fight against Al-Qaeda and other groups waging asymmetric warfare against the United States. The Predator, for instance, has successfully targeted terrorists throughout Afghanistan and Pakistan, and more technologically advanced versions are being rapidly developed. This advantage may grow as weapons become ever more infused with the latest developments in artificial intelligence.

If the United States is the best enforcer of rules of conduct that make for a peaceful and prosperous world, this development must also be counted as an advantage. And there are reasons other than national pride for this belief. The United States is both a flourishing commercial republic benefiting from global peace and a hegemon uniquely capable of supplying that public good. Because it gains a greater share of the prosperity afforded by peace than other nations do, it has incentives to shoulder the burdens of maintaining world security. Thus, we should be very hesitant to curtail the military reach that applying advances in AI to the battlefield confers on the United States.

The better course would be to apply the laws of war to autonomous weapons. They should be as liable as humans for indiscriminate killing. In the long run, since they are driven by sensors and dispassionate software, autonomous weapons should be able to discriminate better than weapons under human control. They could then be held to a higher standard for avoidance of civilian deaths. Because they are robots, attacks on them should elicit a less substantial response than if the attack were on humans, thus often decreasing levels of force deemed proportionate on the battlefield. This course may well result in better outcomes for civilians as well as for civilization.

The lesson here is a more general one. Technological advances, even in war, have benefits as well as costs. Complete bans on a technology are often based on fear of the unknown and will rarely be the way to balance its costs and benefits.

Reader Discussion

Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.

on May 28, 2014 at 12:46:08 pm

My Dear Professor:

You obviously are not watching this season's showing of "24"!!!!

“They could then be held to a higher standard for avoidance of civilian deaths. Because they are robots, attacks on them should elicit a less substantial response than if the attack were on humans, thus often decreasing levels of force deemed proportionate on the battlefield. This course may well result in better outcomes for civilians as well as for civilization.”

What are you arguing here? That a machine will be held accountable? And that
we can then “punish” it by attacking it - heck, this option is already open to all with the technology? And that
this will lead to better outcomes? I thought earlier you asserted that the use of drones was going to lead to better outcomes - so now we are saying that destroying drones will lead to better outcomes - which shall it be?

gabe
on May 28, 2014 at 13:48:12 pm

Western Civilization has just completed a century of conflict that is only now slowly ebbing to more limited disturbances on its periphery.

While the movements of peoples and the blending and ending of cultures into the Western Experience may be indicative of the formation of some successive form of civilization, there is also the possibility of renewed extensive conflicts during that blending process arising from conflicting human motivations.

The scholars who have studied and reported on the effects of the changes in the technologies of warfare and the applications of violence in the development of civilizations and their destruction have never minimized the importance of human motivation.

The concept of weaponry that can filter out human motivation will prove to be invalid. So-called artificial intelligence is an attempt to mimic human mental procedures without the moderating effects of motivations. In these considerations of new technologies of weaponry, the devil will not be in the details of the technology but in the applications.

R Richard Schweitzer
