Machine Learning Times
EXCLUSIVE HIGHLIGHTS

Neural Networks

Looking Inside The Blackbox — How To Trick A Neural Network

Neural networks get a bad reputation for being black boxes. And while it certainly takes creativity to understand their decision making, they are really not as opaque as people would have you believe. In this tutorial, I'll show you how to use backpropagation to change the input so as to classify it as whatever you would...
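The core idea the teaser describes is optimizing the input rather than the weights: freeze a trained classifier and backpropagate the loss with respect to a chosen target class all the way to the pixels. The sketch below is a minimal illustration of that idea in PyTorch, not the article's own code; the pretrained model, image size, and target class id are placeholders.

```python
# Minimal sketch: use backpropagation to push an *input* toward a chosen class.
# The model, input shape, and target class id are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # any pretrained classifier
for p in model.parameters():
    p.requires_grad_(False)                 # weights are frozen; only the input learns

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image
target = torch.tensor([207])                         # hypothetical target class id

optimizer = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), target)  # distance from the desired class
    loss.backward()                           # gradients flow back to the pixels
    optimizer.step()
    x.data.clamp_(0, 1)                       # keep pixel values in a valid range

print(model(x).softmax(dim=1)[0, target])     # confidence in the chosen class
```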

Dealing with Overconfidence in Neural Networks: Bayesian Approach

Originally published on the Jonathan Ramkissoon Blog, July 29, 2020. I trained a multi-class classifier on images of cats, dogs, and wild animals; when I passed it an image of myself, it was 98% confident I'm a dog. The problem isn't...
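The teaser frames this as an overconfidence problem and points to a Bayesian remedy. One common, lightweight Bayesian approximation (not necessarily the one the post uses) is Monte Carlo dropout: keep dropout active at prediction time, average many stochastic forward passes, and read the spread as uncertainty. The classifier, input size, and dropout rate below are illustrative assumptions.

```python
# Minimal sketch of MC dropout as an approximate Bayesian predictive distribution.
# The model architecture and input shape here are stand-ins, not the article's.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy classifier: cat / dog / wild
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                           # keep dropout stochastic at test time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)   # mean prediction + spread

x = torch.rand(1, 3, 64, 64)                # e.g. an out-of-distribution selfie
mean, std = mc_dropout_predict(model, x)
print(mean, std)                            # a large std flags "I'm not sure"
```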