The future of drone warfare: autonomy

An article from the Washington Post that looks into the future of drone warfare and the ethics of autonomy.

At Fort Benning, Ga., automated, unpiloted planes searched for, detected and zeroed in on targets. One of the aircraft then signaled an unmanned ground vehicle to take a final, closer look.

The Fort Benning [target] “is a rather simple target, but think of it as a surrogate,” said Charles E. Pippin, a scientist at the Georgia Tech Research Institute, which developed the software to run the demonstration. “You can imagine real-time scenarios where you have 10 of these things up in the air and something is happening on the ground and you don’t have time for a human to say, ‘I need you to do these tasks.’ It needs to happen faster than that.”
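GTRI has not published the software behind the demonstration, but the kind of tasking Pippin describes — several aircraft dividing up work with no human issuing orders — is often prototyped in multi-robot research with auction-style allocation, where each vehicle "bids" on a task and the best bid wins. A minimal sketch, assuming simple 2-D positions and distance-based bids (every name and number below is hypothetical, not GTRI's code):

```python
import math

# Hypothetical sketch of decentralized task allocation, loosely modeled on
# auction-based methods from multi-robot research. Illustrative only.

def assign_tasks(drones, tasks):
    """Greedily auction each task to the closest unassigned drone.

    drones: dict of drone_id -> (x, y) position
    tasks:  list of (x, y) target locations
    Returns dict of drone_id -> task index.
    """
    assignments = {}
    available = dict(drones)
    for i, (tx, ty) in enumerate(tasks):
        if not available:
            break  # more tasks than drones; remaining tasks wait
        # Each available drone "bids" its distance to the task;
        # the lowest bid wins, with no human in the tasking loop.
        winner = min(available,
                     key=lambda d: math.hypot(available[d][0] - tx,
                                              available[d][1] - ty))
        assignments[winner] = i
        del available[winner]
    return assignments

if __name__ == "__main__":
    drones = {"uav1": (0.0, 0.0), "uav2": (5.0, 5.0), "uav3": (9.0, 1.0)}
    tasks = [(8.0, 2.0), (1.0, 1.0)]
    print(assign_tasks(drones, tasks))  # {'uav3': 0, 'uav1': 1}
```

Running the example assigns the nearest drone to each target; real systems layer on communication limits, fuel constraints and re-bidding as the situation on the ground changes — which is exactly the speed argument Pippin makes.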

The demonstration laid the groundwork for scientific advances that would allow drones to search for a human target and then make an identification based on facial-recognition or other software. Once a match was made, a drone could launch a missile to kill the target.
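To make that passage concrete, the decision logic it sketches reduces to a few lines. This is an abstract illustration, not any fielded system: the threshold, function names and the human checkpoint are assumptions, and the debate that follows is precisely about whether that checkpoint survives.

```python
# Abstract sketch of the detect -> identify -> act pipeline the article
# describes. Purely illustrative: functions, threshold and the
# human-checkpoint design are assumptions, not a real system.

MATCH_THRESHOLD = 0.95  # assumed minimum face-match confidence

def decide(track, match_score, human_authorizes):
    """Return an action for a tracked person given a face-match score.

    track: identifier for the tracked object
    match_score: similarity in [0, 1] from a face-recognition model
    human_authorizes: callback that asks an operator for approval
    """
    if match_score < MATCH_THRESHOLD:
        return "continue_surveillance"
    # In the article's scenario the system could act on a match alone;
    # critics argue this checkpoint is exactly what must not be removed.
    if human_authorizes(track, match_score):
        return "engage"
    return "stand_down"

if __name__ == "__main__":
    # Example: an operator callback that always declines.
    print(decide("track-7", 0.97, lambda t, s: False))  # stand_down
    print(decide("track-7", 0.60, lambda t, s: False))  # continue_surveillance
```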

The prospect of machines able to perceive, reason and act in unscripted environments presents a challenge to the current understanding of international humanitarian law. The Geneva Conventions require belligerents to use discrimination and proportionality, standards that would demand that machines distinguish among enemy combatants, surrendering troops and civilians.

“The deployment of such systems would reflect a paradigm shift and a major qualitative change in the conduct of hostilities,” Jakob Kellenberger, president of the International Committee of the Red Cross, said at a conference in Italy this month. “It would also raise a range of fundamental legal, ethical and societal issues, which need to be considered before such systems are developed or deployed.”

In Berlin last year, a group of robotics engineers, philosophers and human rights activists formed the International Committee for Robot Arms Control (ICRAC) and said such technologies might tempt policymakers to think war can be less bloody. “The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of ICRAC and a professor at the New School in New York who teaches a course on digital war. “The good technology is far off, but technology that doesn’t work well is already out there. The worry is that these systems are going to be pushed out too soon, and they make a lot of mistakes, and those mistakes are going to be atrocities.”

“Lethal autonomy is inevitable,” said Ronald C. Arkin, the author of “Governing Lethal Behavior in Autonomous Robots,” a study that was funded by the Army Research Office.

“How a war-fighting unit may think — we are trying to make our systems behave like that,” said Lora G. Weiss, chief scientist at the Georgia Tech Research Institute.

 
