Machine Learning Times
The Pentagon Inches Toward Letting AI Control Weapons
Originally published in Wired, Oct 5, 2021

Drills involving swarms of drones raise questions about whether machines could outperform a human operator in complex scenarios.

Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.

The drill was one of several conducted last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed.

General John Murray of the US Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force military planners, policymakers, and society to think about whether a person should make every decision about using lethal force in new autonomous systems. Murray asked: “Is it within a human’s ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added.

Other comments from military commanders suggest interest in giving autonomous weapons systems more agency. At a conference on AI in the Air Force last week, Michael Kanaan, director of operations for the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on AI within the US military, said thinking is evolving. He says AI should take on more of the work of identifying and distinguishing potential targets, while humans make the high-level decisions. “I think that’s where we’re going,” Kanaan says.

To continue reading this article, click here.
