5 Scary Things About Artificial Intelligence That Worry Military Brass

Lance Cpl. Timothy Knaggs (center), a team leader with India Company, 3rd Battalion, 3rd Marine Regiment, walks ahead of the Legged Squad Support System, acting as the "follow" for the machine at Marine Corps Training Area Bellows, June 19, 2014. (U.S. Marine Corps/Matthew Callahan)

Robots with minds of their own, bad guys manipulating data, and getting sorely outpaced by the Chinese. Those are just a few concerns shared by leaders tasked with weaving machine learning into military operations.

Exactly how artificial intelligence will play out on the battlefield remains to be seen. Defense leaders and researchers are on the hunt for ways to free up troops' time so they can address more pressing issues.

"How can we lighten the load for soldiers, sailors, airmen and Marines?" Maj. Gen. William Cooley, head of the Air Force Research Laboratory, asked during a Wednesday conference hosted by Defense News. "How can AI make us more efficient, effective and capable?"

Think computers sifting through complex data instead of an officer having to assess it, or technology that can flag preventive maintenance needs on troops' equipment.

There are plenty of challenges that come with using artificial intelligence, though. Here are just a few worries those leaders shared about machines doing the thinking for troops.


1. Killer robots.

We might be a ways off from a "Terminator"-style nightmare in which a self-thinking computer wages war on the planet. But as the military experiments with more autonomous vehicles and robots, experts are thinking about ways to keep them in check.

Balancing the desire to make life easier for the warfighter while protecting humans from machines is an "incredibly difficult" topic, said Rear Adm. David Hahn, chief of naval research and director of Innovation, Technology Requirements and Test and Evaluation. What's important to remember, he added, is that accepting some risk and knowingly taking a gamble are two very different things.


2. More machines, fewer humans making decisions.

Warfare will always involve human beings, said Alexander Kott, chief scientist with the Army Research Laboratory. One of his lab's key efforts is thinking about how humans and artificial intelligence can best fight as a team.

Researchers first need to think about what's best done by a machine versus a human, Cooley added.

"How can we apply these tools to a more unpredictable environment to safeguard humans so they're in control, but they have help?" he said.


3. Bad or tainted data.

A lot of machine-learning software used in the civilian world is open source, which means many different people can influence the artificial intelligence. In the military, where national security is at stake, that approach isn't likely to work.

Kott called data the "greatest challenge in terms of applying artificial intelligence in the military."

"There's not enough of it, it's distorted, it's dynamic, it's rapidly changing," he said. "Data can and will be used against you."

That means researchers not only need to be clever about how they teach artificial intelligence to work with limited data, but also about how to protect that data from being manipulated by an adversary.


4. Limited platforms.

Another limitation the military faces is having to integrate artificial intelligence capabilities into some of its complex existing platforms. It's not always feasible for the U.S. military to design entirely new systems built just for AI capabilities -- especially something with complicated security needs.

"The legacy systems we start with are significantly different than those our adversaries start with," Hahn said.


5. The next space race.

There's a global power competition happening in artificial intelligence. China has openly declared AI to be the next space race, Hahn said. And the country plans on winning that race.

That means the military must look to industry and academic experts who can help the services build capabilities rapidly, Cooley said. Countries like China don't deal with the rigid contract structures and other rules that can significantly slow the development process.

"We must be innovative and change the dynamic of how we're going to get at this or we're not going to get the capability for the U.S. military," he said.

--Gina Harkins can be reached at gina.harkins@military.com. Follow her on Twitter at @ginaaharkins.
