
Originally published in Forbes
Predictive AI offers tremendous potential – but it has a notoriously poor track record. Outside Big Tech and a handful of other leading companies, most initiatives fail to deploy, never realizing value. Why? Data professionals aren’t equipped to sell deployment to the business. The technical performance metrics they typically report on do not align with business goals – and mean nothing to decision makers.
To plan, sell and greenlight predictive AI deployment, stakeholders and data scientists alike must establish and maximize the value of each machine learning model in terms of business outcomes like profit, savings or any other KPI. Only by measuring value can the project actually pursue value. And only by getting business and data professionals onto the same value-oriented page can the initiative move forward and deploy.
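As a concrete illustration, here is a minimal Python sketch of what such a value calculation can look like for a targeting model. The scenario and every number in it (audience size, response rates, revenue per response, cost per contact) are hypothetical assumptions, not figures from this article; the point is simply to show a model's predictive lift restated as profit rather than as a technical metric.

```python
# Minimal sketch (illustrative, hypothetical numbers): restating a targeting
# model's performance as a business outcome (campaign profit) rather than as
# a technical metric such as AUC or accuracy.

def campaign_profit(n_targeted, response_rate, revenue_per_response, cost_per_contact):
    """Expected profit from contacting n_targeted customers."""
    revenue = n_targeted * response_rate * revenue_per_response
    cost = n_targeted * cost_per_contact
    return revenue - cost

# Hypothetical scenario: the model's top-ranked segment responds at 6%,
# versus 2% for an untargeted mailing of the same size.
model_profit = campaign_profit(100_000, 0.06, 50.0, 2.0)
baseline_profit = campaign_profit(100_000, 0.02, 50.0, 2.0)

print(f"Model-targeted profit:   ${model_profit:,.0f}")
print(f"Untargeted profit:       ${baseline_profit:,.0f}")
print(f"Incremental value of ML: ${model_profit - baseline_profit:,.0f}")
```

A decision maker can weigh a number like that incremental profit directly against deployment costs, which no accuracy figure allows.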
Given their importance, why are business metrics so rare? Research has shown that data scientists know better but generally don't follow through: They rank business metrics as most important, yet in practice focus more on technical metrics. Why do they usually skip past such a critical step, calculating the potential business value, often dooming their own projects as a result?
That’s a damn good question.
The industry isn’t stuck in this rut for only psychological and cultural reasons, although those are contributing factors. After all, it’s gauche and so “on the nose” to talk money. Data professionals feel compelled to stick with the traditional technical metrics that exercise and demonstrate their expertise. It’s not only that this makes them sound smarter, with jargon being a common way for any field to defend its own existence and salaries. There’s also a common but misguided belief that non-quants are incapable of truly understanding quantitative reports of predictive performance and would only be misled by reports that speak in straightforward business language.
But if those were the only reasons, the “cultural inertia” would have succumbed years ago, given the enormous business win when ML models do successfully deploy.
The Credibility Challenge: Business Assumptions
Instead, the biggest reason is this: Any forecast of business value faces a credibility question because it must be based on certain assumptions. Estimating the value that a model would capture in deployment isn’t enough. The calculation still has to prove its trustworthiness, because it depends on business factors that are subject to change or uncertainty.
The next step is to make an existential decision: Do you avoid forecasting the business value of ML altogether? That would keep the can of worms closed. Or do you recognize ML valuation as a challenge that must be addressed, given the dire need to calculate the potential upside of ML deployment in order to achieve it? If it isn’t already obvious, my vote is for the latter.
To address this credibility question and establish trust, the impact of uncertainty must be accounted for. Try out values at the extreme ends of each assumption’s uncertainty range, and interact with the data and the reports in that way. Find out how much the uncertainty matters and whether it must somehow be narrowed in order to establish a clear case for deployment. Only with insight and intuition into how much of a difference these factors make can your project establish a credible forecast of its potential business value, and thereby reliably achieve deployment. A minimal sketch of that kind of sensitivity check follows.
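The Python sketch below shows one way to run that check, reusing the same hypothetical targeting scenario as above; the uncertainty ranges for response rate, revenue per response and cost per contact are illustrative assumptions, not figures from the article. It recomputes the projected profit at the extreme ends of each range so you can see whether the deployment case survives even the worst case.

```python
# Minimal sensitivity check (assumed ranges are hypothetical): recompute the
# projected profit at the extreme ends of each uncertain business assumption
# to see whether the case for deployment still holds in the worst case.

from itertools import product

def campaign_profit(n_targeted, response_rate, revenue_per_response, cost_per_contact):
    return n_targeted * response_rate * revenue_per_response - n_targeted * cost_per_contact

# (low end, high end) for each uncertain assumption; illustrative only.
response_rate_range = (0.04, 0.08)
revenue_range = (35.0, 65.0)
cost_range = (1.50, 2.50)

profits = []
for rate, revenue, cost in product(response_rate_range, revenue_range, cost_range):
    profit = campaign_profit(100_000, rate, revenue, cost)
    profits.append(profit)
    print(f"response={rate:.0%}  revenue=${revenue:.0f}  cost=${cost:.2f}  ->  profit ${profit:,.0f}")

print(f"\nWorst case: ${min(profits):,.0f}   Best case: ${max(profits):,.0f}")
# If even the worst case clears the bar, the value forecast is credible; if not,
# the team knows which assumptions to narrow before green-lighting deployment.
```

If the projected value swings from a loss to a large gain across the ranges, that tells the team exactly which assumptions deserve further validation before the deployment decision is made.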
About the author
Eric Siegel is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric’s interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice. You can follow him on LinkedIn.
