Machine Learning Times

Looking Inside The Blackbox — How To Trick A Neural Network
Neural networks get a bad reputation for being black boxes. And while it certainly takes creativity to understand their decision making, they are really not as opaque as people would have you believe.

In this tutorial, I’ll show you how to use backpropagation to modify an input so that the network classifies it as whatever you would like.
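The colab walks through this on a real network; as a minimal self-contained sketch of the idea, here is a tiny fixed-weight softmax "classifier" in NumPy. Everything here (the weights, the sizes, the learning rate) is made up for illustration — the point is only the mechanic: take the gradient of the target class's log-probability with respect to the *input* rather than the weights, then do gradient ascent on the input itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical frozen "network": 10 input features -> 3 classes.
W = rng.normal(size=(10, 3))
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def probs(x):
    """Class probabilities for input x under the frozen classifier."""
    return softmax(x @ W + b)

def input_gradient(x, target):
    """Gradient of log p(target | x) with respect to x (not W!).

    For a softmax classifier, d/dx log p_t = W[:, t] - sum_k p_k * W[:, k].
    """
    p = probs(x)
    return W[:, target] - W @ p

def trick_input(x, target, steps=300, lr=0.05):
    """Gradient *ascent* on the input until the network sees `target`."""
    x = x.copy()
    for _ in range(steps):
        x += lr * input_gradient(x, target)
    return x

# Start from a neutral input and push it toward class 0.
x0 = np.zeros(10)
x_adv = trick_input(x0, target=0)
print(probs(x0)[0], "->", probs(x_adv)[0])
```

In a real setting you would get `input_gradient` for free from autograd (in PyTorch, by setting `requires_grad_()` on the input tensor and backpropagating through the frozen model), but the closed-form version above shows there is nothing mysterious happening: the same chain rule that trains the weights also tells you how to nudge the pixels.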

Follow along using this colab.

Let’s consider the case of humans. If I show you the following input:

there’s a good chance you have no idea whether this is a 5 or a 6. In fact, I believe I could even make a case that it’s an 8.

Now, if you asked a human what they would have to do to make this look more like a 5, they might visually do something like this:

And if I wanted you to make it look more like an 8, you might do something like this:

Now, the answer to this question is not easy to express in a few if statements or by looking at a few coefficients (yes, I’m looking at you, regression). Unfortunately, with certain types of inputs (images, sound, video, etc.) explainability becomes much harder, but not impossible.

To continue reading this article, click here.
