US Won't Use 'Terminator' That Makes Lethal Decisions: Four-Star

Scientists estimate the world is about a decade from the development of an autonomous weapon that can decide what and when to kill. But the U.S. military has already decided it doesn't want to fight that way.

That's according to Air Force Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, who spoke to an audience on Thursday at the Center for Strategic and International Studies in Washington, D.C.

Selva said the Defense Department was keenly interested in developing its man-machine teaming capabilities. But when it comes to warfighting, humans will always be part of the equation, he added.

While autonomous and artificial intelligence, or "AI," technology is still maturing when it comes to warfighting, the ethical questions it spurs -- about whether to enable killing without human input -- are already here, Selva said.

In light of the so-called "Terminator Conundrum," spurred by increasingly complex autonomous and artificial intelligence warfighting technology, the military brass are bringing in civilian and military ethical and legal experts to advise on the "potential pitfalls of incorporating technology in the execution of warfare," Selva said.

"I'm not bashful about what we do; my job as a military leader is to witness unspeakable violence on an enemy," Selva said. "[But] inside of that context, the methods by which we execute our mission are governed by law and convention. One of the places we spend a great deal of time is determining whether or not the tools we are developing absolve humans of the decision to inflict violence on the enemy."

That's a fairly bright line, and one that military leaders are unwilling to cross, Selva said.

Even so, other nations, as well as rogue actors, may not adopt similar standards, Selva noted. He added that he saw a need for rules to govern autonomous warfare, but was aware that not all global powers would abide by them.

"I think we do need to examine the bodies of law and convention that might constrain anyone in the world from building that kind of [completely autonomous warfighting] system," he said. "But I'm wholly conscious of the fact that, even if we do that, there will be violators."

That doesn't make the creation of rules and conventions meaningless or irrelevant, however, he said.

"Until we understand what we want the limits to be, we won't have a baseline to determine if someone is moving down the path of … creating something like a Terminator," Selva said.
