By: Eric Siegel, Predictive Analytics World

Watch the second episode of The Dr. Data Show, which answers the question, “How can you trust artificial intelligence?”


About The Dr. Data Show. This new web series breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.

Click here for prior episodes and to sign up for future episodes of The Dr. Data Show

Full Transcript of This Episode

Please note that viewing the video (above) is recommended, since it includes complementary visuals. Also, certain vocal inflections and gesticulations hold meaning. Some of the intended meaning is lost by reading this transcript rather than watching the video.

Welcome to “The Dr. Data Show”! I’m Eric Siegel.

How can you trust AI? That’s a pretty important question. As we rely more and more on computers for tasks that are more and more important and complex, how do we know the machines won’t screw up?

After all, machines are really taking over. Computers decide — or at least help decide — who stays in jail, who gets a loan, who gets hired for a job, which tax refunds are legit, which patient has cancer, and which objects a self-driving car avoids.

Now, when our machines take on such responsibilities, we often call it artificial intelligence, but that's a fuzzy term with no agreed definition; it can pretty much mean whatever the heck you want it to mean…

Instead, there’s a better word for it, because, in actuality, for automating such challenging decision-making, the particular technology that’s deployed is machine learning: when computers learn from experience. For example, say you’re a bank and you want your computer to help decide which credit card applicants to approve. Suppose that, in recent times, you’ve accumulated the records of 50 thousand cardholders and you’ve noted which of those ended up defaulting and never paying their balance. You wanna avoid those kinds of cardholders in the future — they are money down the tube.

The list of records — that data — is experience. It’s history from which to learn. So now it’s time for your computer to do what computers do best: number-crunch and optimize. Uncover the patterns and trends so you can classify new incoming credit card requests as well as possible. That pattern discovery process is machine learning.
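To make that concrete, here's a minimal sketch of that pattern-discovery step in Python with scikit-learn. The cardholder records and column names below are hypothetical stand-ins invented for illustration, not anything from a real bank:

```python
# A sketch of machine learning as pattern discovery. The cardholder
# records here are entirely made up; a real bank would load its
# ~50,000 historical records instead.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical history: each row is a past cardholder, and "defaulted"
# records whether they ever failed to pay their balance.
history = pd.DataFrame({
    "head_of_household":  [1, 0, 1, 1, 0, 0, 1, 0],
    "sailboat_magazines": [1, 0, 0, 1, 0, 1, 1, 0],
    "visits_dentist":     [1, 1, 0, 1, 0, 0, 1, 1],
    "defaulted":          [0, 1, 1, 0, 1, 1, 0, 0],
})

X = history.drop(columns="defaulted")  # the experience to learn from
y = history["defaulted"]               # the outcome to predict

# Number-crunch: let the model uncover patterns linking traits to default.
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Classify a new incoming credit card request.
applicant = pd.DataFrame([{
    "head_of_household": 1,
    "sailboat_magazines": 0,
    "visits_dentist": 1,
}])
print(model.predict(applicant))  # [0] means predicted to pay reliably
```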

So the ultimate question is, how can we trust that the machine has discovered something valid, that what it learned will hold true in new situations never before seen? So, for example, say it finds that credit card holders who are head-of-household, subscribe to sailboat magazines, and go to the dentist are five times more reliable bill-payers than average. I like totally made that example up, but it's similar to real insights found by banks. So, assume it has learned a bunch of insights like that, say a few dozen such patterns. The system sees they hold true over the 50 thousand historical examples it's been analyzing. But how does it know it didn't just find peculiarities of that dataset, trends that are particular to the long list of examples at hand? How does it know these patterns it's discovered actually hold true in general, in situations that have never before been seen? I mean, how could such a thing possibly be scientifically validated? This is the ultimate question for machine learning.

This same question applies whether you’re trying to train the computer to categorize possible credit card holders, possible criminals, possible tax fraud, or possible objects a self-driving car must avoid.

So, the big daunting question is, does what's been learned or discovered from the examples hold in general, or is the system actually "too good" at learning, so much so that it unearths apparent trends in the given dataset that aren't true in general? If it does that, this is a dysfunction known as overlearning. Overlearning is very bad, 'cause it means, basically, it learned incorrectly.

So it turns out that overlearning is actually easy to detect. No fancy math, no advanced science, no deep theory. All you do is hold aside some data for testing. You randomly pull out a sample of data for this purpose and hide it, quarantining it so the machine learning process can't access it, doesn't see it, and can't possibly cheat. Then, once the number-crunching is complete, the learning has stopped, and the insights are frozen, then they're evaluated on the test data. This serves as an objective measure of how well the insights hold up in general, beyond the examples within the data used to ascertain them in the first place.
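Here's a minimal sketch of that holdout procedure, again assuming Python with scikit-learn. The synthetic dataset and the 80/20 split ratio are illustrative choices, not something the episode prescribes:

```python
# A sketch of the holdout method: quarantine a random sample before
# learning, then evaluate the frozen model on it afterward.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for 50,000 historical examples.
X, y = make_classification(n_samples=50_000, n_features=10, random_state=0)

# Randomly pull out 20% of the data and hide it from the learner.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# The learning process sees only the training portion; it can't cheat.
model = DecisionTreeClassifier().fit(X_train, y_train)

# Once the insights are frozen, score them on the quarantined test data.
print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# A large gap between the two numbers signals overlearning: patterns
# that fit the training examples but don't hold up in general.
```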

And that’s it. That’s the main way we check for reliability in machine learning, the main way we know we can trust its performance will hold strong.

I’m Eric Siegel; thanks for watching. Hit “like” and share this video if you think your friends were also wondering how you can trust AI. And for access to the entire web series, go to TheDoctorDataShow.com.

Excerpt of “Predict This!” rap during closing credits:

Who’s your data?

Provide me the data to improve
and I’ll apply the computation.

Predictive analytics can help you with decisions;
you can call, mail, credit, or hire with precision.
On law, love, and life, you can prognosticate
whom to investigate, incarcerate, set up on a date, or medicate.

Charlie Brown never gets his kicks;
that’s why every old dog needs a brand new trick.
If you get sick of chasing sticks or clicks with just a quick fix,
you need to learn to predict.

I can predict your every move;
just gimme all your information.
Who’s your data?
Provide me the data to improve
and I’ll apply the computation.

I love it when you call me big data.

To receive notifications of new webisodes of The Dr. Data Show as they are released, register for The Predictive Analytics Times – the machine learning professionals’ premier resource.

Click to view more episodes and for more information about The Dr. Data Show