Attacks Against Machine Learning — An Overview

Originally published on elie.net, May 2018

This blog post surveys the attack techniques that target AI (artificial intelligence) systems and how to protect against them.

At a high level, attacks against classifiers can be broken down into three types:

  • Adversarial inputs, which are specially crafted inputs developed with the aim of being reliably misclassified in order to evade detection. Adversarial inputs include malicious documents designed to evade antivirus engines and emails attempting to evade spam filters (a minimal sketch follows this list).
  • Data poisoning attacks, which involve feeding adversarial training data to the classifier. The most common attack type we observe is model skewing, where the attacker attempts to pollute the training data in such a way that the boundary between what the classifier categorizes as good data and what it categorizes as bad shifts in the attacker's favor (see the poisoning sketch below). The second type of attack we observe in the wild is feedback weaponization, which attempts to abuse feedback mechanisms in an effort to manipulate the system into misclassifying good content as abusive (e.g., competitor content, or as part of revenge attacks).
  • Model stealing techniques, which are used to “steal” (i.e., duplicate) models or recover training data membership via black-box probing. This can be used, for example, to copy a stock market prediction model or a spam filtering model in order to use it directly or to optimize attacks against it more efficiently (see the surrogate-model sketch below).
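
As a concrete illustration of the first class, here is a minimal, self-contained sketch of a fast-gradient-sign (FGSM-style) adversarial perturbation against a toy logistic classifier. The weights, the flagged sample, and the step size are synthetic placeholders, not taken from any real detection system.

```python
import numpy as np

# Toy logistic "abuse" detector: w plays the role of the deployed model's
# weights, x is a sample the detector currently flags as abusive.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
x = 0.3 * np.sign(w)            # scores clearly on the "abusive" side

def p_abusive(v):
    """Probability the classifier assigns to the 'abusive' class."""
    return 1.0 / (1.0 + np.exp(-w @ v))

# Gradient of the logistic loss for the true label y = 1 with respect
# to the input: dL/dx = (p - 1) * w.
grad = (p_abusive(x) - 1.0) * w

# FGSM step: a small, bounded perturbation in the direction that most
# increases the loss, pushing the sample across the decision boundary
# while keeping it close to the original input.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

print(f"score before: {p_abusive(x):.3f}, after: {p_abusive(x_adv):.3f}")
```

The same idea scales to deep models: an attacker only needs the gradients, or a good approximation of them, to find small perturbations that flip the classifier's decision.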

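The poisoning bullet above describes model skewing as shifting the decision boundary; the sketch below shows that effect on synthetic data. The attacker injects points that resemble their abusive content but carry a "benign" label (for instance, by gaming a reporting mechanism), and the retrained model's abuse score for that content drops. All of the data and cluster positions here are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean training data: benign content around (-1, -1), abusive around (+1, +1).
benign = rng.normal(loc=-1.0, size=(200, 2))
abusive = rng.normal(loc=+1.0, size=(200, 2))
X_clean = np.vstack([benign, abusive])
y_clean = np.array([0] * 200 + [1] * 200)

# Poison: abusive-looking points deliberately labeled as benign,
# e.g. via weaponized feedback, to skew the learned boundary.
poison = rng.normal(loc=+0.7, size=(80, 2))
X_skewed = np.vstack([X_clean, poison])
y_skewed = np.concatenate([y_clean, np.zeros(80, dtype=int)])

clean_model = LogisticRegression().fit(X_clean, y_clean)
skewed_model = LogisticRegression().fit(X_skewed, y_skewed)

# Borderline abusive content the attacker wants to slip through.
probe = np.array([[0.6, 0.6]])
print(f"clean model abuse score:  {clean_model.predict_proba(probe)[0, 1]:.3f}")
print(f"skewed model abuse score: {skewed_model.predict_proba(probe)[0, 1]:.3f}")
```
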
This post explores each of these classes of attack in turn, providing concrete examples and discussing potential mitigation techniques.
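
Model stealing can be sketched just as compactly: the attacker treats the victim model as a black box, queries it on inputs of their choosing, and fits a local surrogate on the responses. The "victim" below is a synthetic linear model standing in for whatever prediction service is being probed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
w_victim = rng.normal(size=10)          # hidden parameters of the victim model

def victim_api(X):
    """Black-box endpoint: returns only hard labels, no internals."""
    return (X @ w_victim > 0).astype(int)

# Attacker: probe the endpoint and fit a surrogate on the responses.
X_probe = rng.normal(size=(5000, 10))
surrogate = LogisticRegression().fit(X_probe, victim_api(X_probe))

# How closely does the stolen model track the victim on fresh inputs?
X_test = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(X_test) == victim_api(X_test)).mean()
print(f"surrogate agrees with the victim on {agreement:.1%} of fresh queries")
```

Once the surrogate is accurate enough, the attacker can reuse it directly or craft adversarial inputs against it offline and transfer them to the real system.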

This post is the fourth and last in a series of four dedicated to providing a concise overview of how to use AI to build robust anti-abuse protections. The first post explained why AI is key to building robust protection that meets user expectations and holds up against increasingly sophisticated attacks. Following the natural progression of building and launching an AI-based defense system, the second post covered the challenges related to training classifiers. The third looked at the main difficulties faced when using a classifier in production to block attacks.

To continue reading this article on elie.net, click here.

About the Author:

Elie Bursztein leads Google’s anti-abuse research team, which invents ways to protect users against cyber-criminal activities and Internet threats.
