By: Eric Siegel, Predictive Analytics World

This article is based on the transcript of one of 142 videos in Eric Siegel’s online course, Machine Learning Leadership and Practice – End-to-End Mastery.

Developing a good predictive model with machine learning isn’t the end of the story — you also need to use it. Predictions don’t help unless you do something about them. Your model may be elegant and brilliant, glimmering like the most polished of crystal balls, but displaying it in a report gains you nothing — it just sits there and looks smart.

Stagnation be damned — deployment to the rescue! Predictive models leap out of the laboratory and take action. In this way, machine learning stands above other forms of data science. It desires deployment and loves to be launched — because, in what a model foretells, it mandates movement.

By acting on the predictions produced by a model, the organization is now applying what’s been learned, modifying its everyday operations for the better. The word for this is deployment.

Deployment: The automation or support of operational decisions, driven by the probabilistic scores output by a predictive model. Also known as model implementation or model operationalization.

Note: The definitions in this article are from my Machine Learning Glossary.

Deployment is where the rubber hits the road and value is realized. Your model is a mover and a shaker.

To make this point, we have mangled the English language: We say that machine learning (aka predictive modeling or predictive analytics) is the most actionable form of analytics. The model’s output directly informs actions, determining what’s done next. But I hope you’re not litigious because, with this use of vocabulary, we’ve stolen the word actionable from lawyers and mutated it. It originally meant “worthy of legal action” — something you can litigate over. Well, sue me.

With this word’s new meaning established, “your fly is unzipped” is actionable — it’s clear what action should be taken — but “you’re going bald” is not, since there’s no cure, nothing to be done.

There’s even been an intermittent movement to ensure predictive analytics informs actions by inventing a new term, prescriptive analytics. The idea is that predictive analytics foretells the future, but prescriptive analytics goes a step further to inform what you should do about it. But I gotta warn you, that’s not actually a real field or technology. Predictive analytics is already, by design, meant to drive actions and decisions, by way of each per-individual predictive score. It’s already intrinsically prescriptive. Introducing the term “prescriptive analytics” falsely implies that some additional technology or new quantitative technique exists.

Well, you do often need to incorporate business logic to translate predictive scores into actions. For example, a customer retention marketing campaign could offer a free wireless device to cell phone customers flagged by the model as likely to defect. But then line of business staff at your company might introduce a filter on this action, since some customers live in regions that do not support that device. Such business rules are layered manually. This process doesn’t imply some entire new field or technology exists. I suggest you stay away from the term prescriptive analytics.
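To make the idea concrete, here is a minimal sketch of layering a business rule on top of a model’s scores. All names, thresholds, and regions are invented for illustration — they aren’t from any real system described in this article.

```python
# Hypothetical sketch: translating churn-model scores into a retention
# action, with one manually layered business rule. The threshold, the
# region filter, and the field names are illustrative assumptions.

SCORE_THRESHOLD = 0.7          # flag customers likely to defect
UNSUPPORTED_REGIONS = {"NT"}   # regions where the free device won't work

def decide_offer(customer):
    """Return the action for one customer: an offer, or no contact."""
    if customer["churn_score"] < SCORE_THRESHOLD:
        return "no_contact"
    # Business rule layered on top of the model's prediction:
    if customer["region"] in UNSUPPORTED_REGIONS:
        return "alternative_offer"   # device isn't supported there
    return "free_device_offer"

customers = [
    {"id": 1, "churn_score": 0.91, "region": "CA"},
    {"id": 2, "churn_score": 0.85, "region": "NT"},
    {"id": 3, "churn_score": 0.12, "region": "CA"},
]
actions = {c["id"]: decide_offer(c) for c in customers}
```

The model supplies the scores; the if-statement filtering by region is exactly the kind of manually layered rule line-of-business staff would add.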

Now, deployment doesn’t always mean your computer “runs the world” autonomously. Sometimes the predictive scores are offered up to humans to help support important decisions that they will continue to make manually. Let’s break this down into two forms of predictive model deployment: decision automation and decision support.

Decision automation: The deployment of a predictive model to drive a series of operational decisions automatically.

Response modeling for targeted marketing is one example. The model automatically determines which, say, 20% of a long list of customers should be contacted. Even if the marketing process is manual — like, you personally are literally licking each stamp before the direct mail goes to the post office — it still counts as decision automation, since the batch of yes-contact, no-contact decisions was ultimately made unilaterally by the model.
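The batch of yes-contact, no-contact decisions described above can be sketched in a few lines. The scores below are made up for the example; a real campaign would pull them from the deployed model.

```python
# Illustrative sketch of batch decision automation: rank a customer
# list by predictive score and contact the top 20%. Scores are invented.

scored = {"ann": 0.93, "bob": 0.40, "cy": 0.71, "dee": 0.15, "eve": 0.55,
          "fay": 0.88, "gil": 0.30, "hal": 0.62, "ida": 0.09, "joe": 0.77}

k = max(1, int(len(scored) * 0.20))            # top 20% of the list
ranked = sorted(scored, key=scored.get, reverse=True)
contact = set(ranked[:k])                      # the yes-contact decisions
# contact == {"ann", "fay"}
```

Whether the mailing itself is then done by machine or by hand, the contact/no-contact decision was made unilaterally by the model.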

Other examples of decision automation include online ad targeting, product recommendations, and online search, e.g., Google’s ordering of search results.

Decision support: The deployment of a predictive model to inform operational decisions made by a person. The person’s informal decision-making process integrates or considers the predictive scores in whatever ad hoc manner he or she sees fit.

There are many decisions in human resources, healthcare, and law enforcement that obviously shouldn’t be entirely left up to the computer alone. In these cases, complete automation isn’t even a consideration; it’s not on the table. Who to hire, which transaction is fraudulent, how to diagnose or treat a patient, and how long to sentence a convict or whether to parole an inmate — these are weighty decisions that affect people’s lives, for which it is necessary for humans to consider all kinds of informal intangibles that we can’t comprehensively form into a bunch of independent variables input into a model. So no, the robots are not taking over. Nor could they, in my opinion. I’ll address the whole “artificial intelligence” mythology later in this course.

Credit scoring can go either way. If you complete a credit card application online, and your credit ratings and history are sound, I believe some financial institutions have set up their systems to automatically approve your application. Likewise, if the data shows you’re a clearly high-risk applicant, you may be automatically declined, with no human involvement on the bank’s side. But humans are usually involved when a bank decides about larger loans.

Finally, how fast do deployed models do all their scoring? Well, it depends. Some need to operate in real time, and others are offline, batch jobs. Now, to be clear, the predictive modeling process is usually offline — rarely continuous or in real time. But, once the model has been developed and we’re deploying it, there are generally two ways to go:

Offline deployment: When scoring runs as a batch job for which speed is less of an issue. For example, when selecting which customers to include in a direct marketing campaign, the computer can take its sweet time, relatively speaking. Milliseconds are usually not a concern.

Real-time deployment: When scoring as quickly as possible to inform an operational decision taking place in real time. For example, deciding which ad to show a customer at the moment a web page is loading means that the model must very quickly receive the customer variables as input and do its calculations — like, run down through a decision tree or perform the math encoded by a neural network — so that the score is then immediately available to the operational system.
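The “math encoded by a neural network” mentioned above is just a fast forward pass over fixed weights. Here is a minimal sketch of real-time scoring for one already-trained toy network — the weights and inputs are invented for illustration; a real deployment would load them from the trained model.

```python
import math

# Hypothetical sketch of real-time scoring: a forward pass through a tiny
# trained neural network (2 inputs, 2 hidden units, 1 output). All weights
# are invented for illustration.

W_HIDDEN = [[0.8, -0.4], [0.3, 0.9]]   # 2 inputs -> 2 hidden units
B_HIDDEN = [0.1, -0.2]
W_OUT = [1.2, -0.7]                    # 2 hidden units -> 1 output
B_OUT = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(x):
    """Forward pass: return a probabilistic score for one individual."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W_HIDDEN, B_HIDDEN)]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT)

# Customer variables arrive from the operational system; the score is
# computed in a fraction of a millisecond and handed straight back.
s = score([0.6, 0.2])
```

This is why real-time deployment is feasible at page-load speed: once trained, scoring is only a handful of multiplications and additions per individual.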

By the way, here’s a fun example of a deployed predictive model: the game 20Q, an inexpensive toy the size and shape of a yo-yo.


It asks you 20 yes/no questions and tries to guess the thing you’re thinking of — any physical object, like a nail clipper or whatever. This toy has a deployed neural network — not a decision tree, actually — since a neural network model is more robust against errors. Like, if you wrongly answer one of the questions, it may still guess correctly. And, besides, some of the questions are subjective, like, “Does it make you happy?” You can try playing it without buying one right on their website at www.20q.net.