Last week it came to light that Cambridge researchers are busy investigating whether technology could one day end up destroying civilization. Could robots wipe out mankind?
It’s a question that science fiction has sunk its teeth into for decades, if not centuries. Popular images of Terminator-style robots scouring the planet and eliminating a ravaged humanity stir the imagination.
But the Centre for the Study of Existential Risk (CSER) is no fiction, and will study the dangers associated with biotechnology, artificial life, nanotechnology and climate change. “The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” according to the website set up for the centre.
“It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Prof Huw Price, one of the centre’s co-founders, told the AFP news agency. He added that as robots and computers become more intelligent than humans, mankind could find itself at the mercy of “machines that are not malicious, but machines whose interests don’t include us”.
This dystopian future is similarly anticipated in a recent report released by Human Rights Watch and the Harvard Law School International Human Rights Clinic. Titled “Losing Humanity: The Case against Killer Robots”, the 50-page document is something of a watershed moment. It’s the first publication by a nongovernmental organization about the legality and ethics of “fully autonomous weapons”. The report calls for governments to sign an international treaty to prevent their development and procurement.
Furthermore, according to Spencer Ackerman, it was also last month that Deputy Defense Secretary Ashton Carter signed a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements.”
Translated from the bureaucratese, the Pentagon wants to make sure that there isn’t a circumstance in which one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automates the decision to harm a human being.
The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.
While these reports are hardly off the mark, they miss a far more interesting question: are drones, or unmanned aerial vehicles, already autonomous? Not in the sense of their ability to think and act for themselves, but in their ability to influence, shape, and change the future. Terminator Planet or not, the fact is that Predators and Reapers are changing the foundation of geopolitics and producing a new geography of assassination that cannot be “undone”.
And of course, drones are already an existential threat to thousands upon thousands of people in Pakistan, Yemen, and Somalia (and who knows where else).
The genie left the bottle a long time ago, and we don’t need to peer into a crystal ball to bear witness to a science-fiction present that would once have amazed even the most imaginative of writers.