Machine Learning Times
Algorithm Helps Artificial Intelligence Systems Dodge “Adversarial” Inputs

 
Originally published by the Massachusetts Institute of Technology, March 8, 2021.

Method builds on gaming techniques to help autonomous vehicles navigate in the real world, where signals may be imperfect.

In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward.

Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action — steer right, steer left, or continue straight — to avoid hitting a pedestrian that its cameras see in the road.

But what if there’s a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted so-called “adversarial inputs,” it might take unnecessary and potentially dangerous action.
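
To make the failure mode concrete, here is a toy sketch, not the MIT system, of a collision-avoidance policy that trusts its pixels completely; the strip width, the blob rendering, and the decision rule are invented for illustration, and a glitch of just three pixels is enough to flip its decision:

```python
# Illustrative only: a naive collision-avoidance policy that fully trusts
# its input, and how a small pixel shift can reverse its decision.
import numpy as np

WIDTH = 100  # width of a toy 1-D "camera strip"

def observe(pedestrian_center: int) -> np.ndarray:
    """Render a pedestrian as a bright 5-pixel blob on an otherwise dark strip."""
    img = np.zeros(WIDTH)
    img[pedestrian_center - 2 : pedestrian_center + 3] = 1.0
    return img

def naive_policy(img: np.ndarray) -> str:
    """Steer away from the bright mass; trust the pixels completely."""
    center_of_mass = (np.arange(WIDTH) * img).sum() / img.sum()
    if center_of_mass < WIDTH / 2:
        return "steer right"   # pedestrian appears on the left half
    return "steer left"        # pedestrian appears on (or right of) center

clean = observe(pedestrian_center=49)   # pedestrian just left of center
shifted = np.roll(clean, 3)             # a 3-pixel camera glitch

print(naive_policy(clean))    # -> "steer right"
print(naive_policy(shifted))  # -> "steer left" (same scene, opposite action)
```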

A new deep-learning algorithm developed by MIT researchers is designed to help machines navigate in the real, imperfect world, by building a healthy “skepticism” of the measurements and inputs they receive.

The team combined a reinforcement-learning algorithm with a deep neural network, techniques that have each been used to train computers to play games such as Go and chess, to build an approach they call CARRL, for Certified Adversarial Robustness for Deep Reinforcement Learning.
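
The excerpt doesn't spell out the mechanics, but the core of this kind of skepticism can be sketched simply: rather than acting on the single observation it received, the agent assumes the true observation could be off by up to some small budget, bounds how bad each action could look within that budget, and picks the action whose worst case is best. The following is a minimal illustrative sketch with an invented linear Q-function, not the researchers' implementation:

```python
# A rough sketch of the "skeptical" decision rule behind approaches like
# CARRL: assume the true input may lie anywhere within a small perturbation
# budget epsilon, lower-bound each action's value over that set, and pick
# the action whose worst case is best. The linear toy Q-function and all
# numbers below are invented; the real method bounds a deep network's outputs.
import numpy as np

# Toy Q-function: Q(obs) = W @ obs, one row per discrete action
# (e.g. steer left / go straight / steer right).
W = np.array([
    [ 3.0, -3.0,  2.0],   # action 0: high nominal value, very input-sensitive
    [ 0.3,  0.2,  0.1],   # action 1: modest value, barely affected by noise
    [-1.0,  0.5, -0.5],   # action 2
])

def greedy_action(obs: np.ndarray) -> int:
    """Standard choice: trust the observation exactly as received."""
    return int(np.argmax(W @ obs))

def robust_action(obs: np.ndarray, epsilon: float) -> int:
    """Pick the action with the best guaranteed (worst-case) value when each
    coordinate of the observation may be off by up to epsilon. For a linear
    Q-function the worst case is exact: Q(obs) - epsilon * sum(|weights|)."""
    lower_bounds = W @ obs - epsilon * np.abs(W).sum(axis=1)
    return int(np.argmax(lower_bounds))

obs = np.array([0.4, 0.2, 0.3])
print(greedy_action(obs))               # -> 0 (best if the pixels are perfect)
print(robust_action(obs, epsilon=0.2))  # -> 1 (best once a small glitch is assumed)
```

For a deep network the worst case cannot be read off in one line as it can for this toy linear case; the "certified" in CARRL's name refers to guaranteed lower bounds of this kind computed for the network's outputs over the perturbation set, so the agent acts on what it can guarantee rather than on a possibly corrupted observation.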

The researchers tested the approach in several scenarios, including a simulated collision-avoidance test and the video game Pong, and found that CARRL performed better than standard machine-learning techniques, avoiding collisions and winning more Pong games, even in the face of uncertain, adversarial inputs.

To continue reading this article, click here.

 
