By: Eric Siegel, Founder, Predictive Analytics World

In anticipation of their upcoming conference co-presentation, Predicting Readiness to Purchase, at Predictive Analytics World for Business Las Vegas, June 3-7, 2018, we asked Will Scheck, Analytics Manager at Caterpillar, and Andy Jacob, Data Scientist at Caterpillar, a few questions about their work in predictive analytics.

Q: In your work with predictive analytics, what behavior or outcome do your models predict?

A: Our work focuses on predicting behavior in the small construction equipment marketplace. We pull in sales and customer information about new and used machines from our dealer network, as well as from external sources, to build models predicting when customers will buy. Our specific demographic includes “dormant” customers, who haven’t purchased from a Caterpillar dealer in several years, and customers who have never purchased a Caterpillar machine before.

Q: How does predictive analytics deliver value at your organization – what is one specific way in which it actively drives decisions or operations?

A: Our job is all about making things easier for our dealers while at the same time capturing revenue and new customers. Caterpillar’s dealers are limited in how many people they can call and meet with in a day. Because sales numbers are their main performance metric, dealers tend to call customers they already know in order to meet their sales goals. However, there is obvious value in expanding our customer base while retaining current customers. Our predictive model serves to match our dealers with those customers who are ready to buy and, in some cases, ready to buy a competitive machine. By using predictive modeling to prioritize our marketing lists, we allow dealers to work more effectively while at the same time increasing our customer base and revenue. In other words, predictive modeling is making our dealers’ cold calls a little warmer.

Q: Can you describe a quantitative result, such as the predictive lift of your model or the ROI of an analytics initiative?

A: When considering the performance of our classification models, we are most concerned with precision and recall. High precision (of those we classified as ready to buy, the fraction who actually bought) means our dealers can be confident that the customers they call are likely to be in the market for a machine. High recall (of everyone who actually bought, the fraction we correctly classified as ready to buy) ensures that we don’t miss out on sales by misclassifying our customers.
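For concreteness, here is a minimal sketch of how these two metrics are computed from a model’s predictions, using scikit-learn; the labels and predictions below are illustrative placeholders, not data from the project.

```python
# Minimal sketch of scoring a "ready to buy" classifier; the labels and
# predictions here are illustrative placeholders only.
from sklearn.metrics import precision_score, recall_score

# 1 = customer actually bought, 0 = did not buy
y_true = [0, 1, 0, 0, 1, 1, 0, 1]
# 1 = model classified the customer as ready to buy
y_pred = [0, 1, 1, 0, 1, 0, 0, 1]

# Precision: of everyone we classified as ready to buy, how many bought?
precision = precision_score(y_true, y_pred)
# Recall: of everyone who actually bought, how many did we flag?
recall = recall_score(y_true, y_pred)

print(f"precision={precision:.2f}, recall={recall:.2f}")
```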

As these are equally important to the acceptance of our model, it is appropriate to use an F1 score (the harmonic mean of precision and recall) as the performance metric for our models. If we treat the traditional campaign’s entire marketing list as “classified as ready to buy”, then the traditional campaign had an F1 score of .003. By introducing a predictive model, we were able to increase this F1 score to .36 on validation sets.
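The harmonic mean penalizes imbalance between the two metrics, which is why a blanket campaign scores so low: if nearly everyone on a list is treated as “ready to buy”, recall is near 1.0 but precision is tiny, and F1 collapses toward roughly twice the precision. A quick sketch with illustrative precision and recall values (not the authors’ actual figures, which aren’t given):

```python
# F1 is the harmonic mean of precision and recall:
#   F1 = 2 * (precision * recall) / (precision + recall)
def f1(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative only: a blanket campaign that contacts everyone has recall
# near 1.0 but tiny precision, so its F1 stays close to 2 * precision.
print(f1(precision=0.0015, recall=1.0))   # ~0.003
print(f1(precision=0.30, recall=0.45))    # 0.36
```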

Because most of those purchasing will be competitive customers, it is hard to come up with a real-dollar ROI. However, if our dealers are able to convert 25% of those classified as ready to buy, that would lead to an estimated $27 million in sales while at the same time decreasing the cost of the marketing campaign by 97%.
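The arithmetic behind an estimate like this is simple to sketch; every input below (list size, average sale price) is a hypothetical placeholder chosen only to show the shape of the calculation, not an actual Caterpillar figure:

```python
# Hypothetical back-of-the-envelope sales estimate; every input below is
# a placeholder, not an actual Caterpillar figure.
flagged_customers = 1_800     # customers classified as ready to buy
conversion_rate = 0.25        # fraction of flagged customers who buy
avg_sale_price = 60_000       # average machine sale price, in dollars

estimated_sales = flagged_customers * conversion_rate * avg_sale_price
print(f"estimated sales: ${estimated_sales:,.0f}")  # $27,000,000
```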

Q: What surprising discovery or insight have you unearthed in your data?

A: The most surprising discovery illustrated one of the most basic concepts of working with data: garbage in, garbage out. When working with data from a previous iteration of the project, we found several ill-defined or unusable variables. Cleaning these up alone increased model performance.
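As an illustration of the kind of cleanup involved, here is a sketch of one way to flag unusable variables, such as mostly-missing or constant columns, using pandas; the column names, data, and thresholds are all hypothetical:

```python
# Hypothetical sketch of flagging unusable variables: columns that are
# mostly missing or constant. Data and thresholds are illustrative.
import pandas as pd

df = pd.DataFrame({
    "machine_age":  [3, 7, 2, 9, 5],
    "last_contact": [None, None, None, None, "2017-01-02"],  # mostly missing
    "region_code":  ["NA", "NA", "NA", "NA", "NA"],          # constant
})

mostly_missing = df.columns[df.isna().mean() > 0.5]
constant = df.columns[df.nunique(dropna=True) <= 1]
unusable = mostly_missing.union(constant)

print("dropping:", list(unusable))
df_clean = df.drop(columns=unusable)
```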

The real discovery came when testing different models’ performance on the task of predicting sales. After cleaning the data, the logistic regression model we built performed within .004 F1 points of the gradient boosting model we were comparing it against. This is surprising because gradient boosting classifiers are much more complex than logistic regression models, and the expectation was that gradient boosting would perform much better. As it turned out, once the data was clean, the performance of the two models was similar.
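A minimal sketch of this kind of head-to-head comparison, using scikit-learn on synthetic data (the dataset, features, and hyperparameters are illustrative stand-ins, not the project’s models):

```python
# Sketch of comparing logistic regression vs. gradient boosting by F1;
# synthetic, imbalanced data stands in for the real sales dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20,
                           weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1_000),
              GradientBoostingClassifier()):
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test))
    print(f"{type(model).__name__}: F1 = {score:.3f}")
```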

Q: Sneak preview: Please tell us a take-away that you will provide during your talk at Predictive Analytics World.

A: The message that we would like to share is that domain expertise and knowledge sharing are essential to effective predictive analytics. Without an understanding of how our data is collected and cleansed by our CRM team, we wouldn’t have been able to use the data well enough to get the results that we are showing. The next step will be to share our expertise with our dealer network to help them understand what our predictive model is doing and what the prioritized list means. Ultimately, if our dealer network doesn’t trust our model, they won’t use it and it won’t provide any value. We must communicate everything that we’ve learned and how our model is to be used in order to capture the value that our predictive efforts provide.

———————

Don’t miss Will and Andy’s conference presentation, Predicting Readiness to Purchase, on Wednesday, June 6, 2018 from 11:45 am to 12:05 pm, at Predictive Analytics World Las Vegas, 2018. Click here to register to attend.
