Machine Learning Times
March 15, 2021

Algorithm Helps Artificial Intelligence Systems Dodge “Adversarial” Inputs

Originally published in Massachusetts Institute of Technology, March 8, 2021. 

Method builds on gaming techniques to help autonomous vehicles navigate in the real world, where signals may be imperfect.

In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward.

Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action — steer right, steer left, or continue straight — to avoid hitting a pedestrian that its cameras see in the road.

But what if there’s a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted so-called “adversarial inputs,” it might take unnecessary and potentially dangerous action.
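The pixel-shift glitch described above can be pictured with a tiny synthetic example. This is purely illustrative (the frame size and shift amount are invented, not from the paper): the image content barely changes, yet a system that maps pixel locations directly to actions would now react to the wrong spot.

```python
import numpy as np

# A toy 8x8 "camera frame" with one bright pixel standing in for a pedestrian.
frame = np.zeros((8, 8))
frame[3, 4] = 1.0

# A glitch shifts the whole image two pixels to the right. The scene is
# essentially the same, but a brittle pixel-to-action mapping now
# perceives the pedestrian in a different place.
glitched = np.roll(frame, shift=2, axis=1)

pedestrian_at = np.unravel_index(np.argmax(frame), frame.shape)    # (3, 4)
perceived_at = np.unravel_index(np.argmax(glitched), frame.shape)  # (3, 6)
```

A small shift like this is exactly the kind of perturbation an adversarially robust system must tolerate without overreacting.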

A new deep-learning algorithm developed by MIT researchers is designed to help machines navigate in the real, imperfect world, by building a healthy “skepticism” of the measurements and inputs they receive.

The team combined a reinforcement-learning algorithm with a deep neural network, a pairing used previously to train computers to master games such as Go and chess, to build an approach they call CARRL, for Certified Adversarial Robustness for Deep Reinforcement Learning.
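The core idea behind this kind of certified robustness can be sketched in a few lines: rather than taking the action with the highest Q-value for the observed input, the agent takes the action whose *worst-case* Q-value, over all inputs within a small perturbation bound, is highest. The sketch below uses a toy linear Q-function, for which that worst case has a closed form; CARRL itself computes analogous certified lower bounds for deep networks, so treat this as an illustration of the principle, not the paper's implementation.

```python
import numpy as np

def robust_action(obs, W, b, eps):
    """Pick the action with the best worst-case Q-value over an
    L-infinity ball of radius eps around the observation.

    For a linear Q-function Q(s, a) = W[a] @ s + b[a], the exact
    worst case over all perturbations with ||delta||_inf <= eps is
        W[a] @ s + b[a] - eps * ||W[a]||_1.
    """
    nominal = W @ obs + b                           # Q-values at the observed input
    lower = nominal - eps * np.abs(W).sum(axis=1)   # worst-case (lower) bounds
    return int(np.argmax(lower))

# Toy 2-action example: action 1 looks slightly better nominally but is
# far more sensitive to input noise, so the robust agent avoids it.
W = np.array([[0.2,  0.1],    # action 0: small weights, insensitive
              [2.0, -1.5]])   # action 1: large weights, fragile
b = np.array([1.0, 1.1])
obs = np.array([0.5, 0.5])

a_nominal = int(np.argmax(W @ obs + b))        # greedy choice: action 1
a_robust = robust_action(obs, W, b, eps=0.1)   # robust choice: action 0
```

The "skepticism" the researchers describe is visible here: the robust agent gives up a slightly higher nominal value in exchange for a guarantee that no small perturbation can make its choice catastrophically wrong.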

The researchers tested the approach in several scenarios, including a simulated collision-avoidance task and the video game Pong, and found that CARRL performed better than standard machine-learning techniques, avoiding more collisions and winning more Pong games, even in the face of uncertain, adversarial inputs.

To continue reading this article, click here.

