Machine Learning Times

Re-examining Model Evaluation: The CRISP Approach

The performance of prediction models can be judged using a variety of methods and metrics. Some years ago, I was challenged to arrive at a set of rules that would give both the analyst and the marketer guidance on how to evaluate the results of a predictive modeling exercise. “What?” you ask. “Just look in a standard textbook, and a whole host of criteria is readily available.” Those criteria provide value to a more quantitatively oriented manager, but to the novice marketer, these evaluation tools can be intimidating. After all, a ROC curve, a Kolmogorov-Smirnov test, or a Root...
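The metrics mentioned above can be computed directly from a model's scored output. Below is a minimal sketch, not taken from the article, of how the area under the ROC curve and the Kolmogorov-Smirnov statistic might be calculated for a binary response model; the labels and scores are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder data: 1 = responder, 0 = non-responder, plus model scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, size=1000), 0, 1)

# Area under the ROC curve: the probability that a randomly chosen
# responder is scored higher than a randomly chosen non-responder.
auc = roc_auc_score(y_true, y_score)

# Kolmogorov-Smirnov statistic: the maximum separation between the
# cumulative score distributions of the two groups (equals max TPR - FPR).
fpr, tpr, _ = roc_curve(y_true, y_score)
ks = np.max(tpr - fpr)

print(f"AUC = {auc:.3f}, KS = {ks:.3f}")
```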

