Robot Rules of War


Legally speaking, the business of killing, even in war, can be quite tricky.

Consider that the military now operates dozens of armed unmanned vehicles -- in the air, on land and in the water. That number is expected to rise exponentially in the near future.


The Law of Armed Conflict dictates that unmanned systems cannot fire their weapons without a human operator in the loop. As new generations of armed robots proliferate, the pressure will inevitably increase to automate the process of selecting -- and destroying -- targets.


Now comes the weird part.


A new legal interpretation has been proposed within the military to deal with the thorny issue of removing humans from the trigger-end of the killing process.


Here's how it works: program all armed robotic vehicles to aim only at weapons, not humans. For example, an autonomous vehicle spots an insurgent with an AK-47. The robotic vehicle is authorized to destroy the AK-47. If the human is killed in the process, that's what's called "collateral damage."
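To make the rule concrete, here is a minimal sketch of how that targeting logic might be expressed in software. Everything in it -- the class names, the confidence threshold, the authorize_engagement function -- is a hypothetical illustration, not anything taken from Canning's presentation:

```python
from dataclasses import dataclass

# Hypothetical object classes a sensor/classifier might report.
WEAPON_CLASSES = {"rifle", "rpg", "mortar", "vehicle_mounted_gun"}
HUMAN_CLASS = "person"

@dataclass
class DetectedObject:
    object_class: str   # e.g. "rifle" or "person"
    confidence: float   # classifier confidence, 0.0 to 1.0

def authorize_engagement(target: DetectedObject,
                         min_confidence: float = 0.95) -> bool:
    """Return True only if the aim point is a weapon, never a human.

    Under the proposed rule, the system may fire at the weapon itself;
    a person is never a lawful aim point for the autonomous system.
    """
    if target.object_class == HUMAN_CLASS:
        return False  # humans may never be targeted directly
    return (target.object_class in WEAPON_CLASSES
            and target.confidence >= min_confidence)

# Example: an autonomous vehicle spots an insurgent carrying an AK-47.
rifle = DetectedObject(object_class="rifle", confidence=0.97)
insurgent = DetectedObject(object_class="person", confidence=0.99)

assert authorize_engagement(rifle) is True       # the AK-47 may be engaged
assert authorize_engagement(insurgent) is False  # the person may not
```

The point of the sketch is the asymmetry: the person is rejected as a target outright, before any confidence check, which is exactly what shifts any resulting death into the "collateral damage" category under this theory.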


This particular legal theory is the brainchild of John S. Canning, chief engineer at the Naval Surface Warfare Center. His presentation on the subject can be downloaded here:


Download armeduavconops.pdf

I have written about Canning's proposal in Jane's Defence Weekly. For their part, legal representatives in the Office of the Secretary of Defense have disavowed any knowledge of or interest in Canning's proposal.


(Image: Robart 3, SPAWAR)


-- Stephen Trimble

