Machine Learning Times
The Pentagon Inches Toward Letting AI Control Weapons

Originally published in Wired, Oct 5, 2021

Drills involving swarms of drones raise questions about whether machines could outperform a human operator in complex scenarios.

Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them, so they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.

The drill was one of several conducted last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed.

General John Murray of the US Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force military planners, policymakers, and society to think about whether a person should make every decision about using lethal force in new autonomous systems. Murray asked: “Is it within a human’s ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added.

Other comments from military commanders suggest interest in giving autonomous weapons systems more agency. At a conference on AI in the Air Force last week, Michael Kanaan, director of operations for the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on AI within the US military, said thinking is evolving. He says AI should do more of the work of identifying and distinguishing potential targets, while humans make the high-level decisions. “I think that’s where we’re going,” Kanaan says.

To continue reading this article, click here.
