Classification scorecards are a powerful predictive technique because the methods developed in the banking industry emphasize interpretability, predictive power, and ease of deployment. The banking industry has long used credit scoring to determine credit risk: the likelihood that a particular loan will be paid back. A scorecard is a common way of displaying the patterns found in a classification model, typically a logistic regression model. To be useful, however, the scorecard results must be easy to interpret. The main goal of a credit score and scorecard is to present regression model results in a clear and intuitive way. This article briefly discusses what scorecard analysis is and how it can be applied to score almost anything.
Scorecards are extremely successful in the consumer credit world because they are interpretable, predictive, and easy to deploy.
Strict credit industry regulations protect consumers from loan rejections based on uninterpretable “black box” models. There are often also laws against using particular variables, such as race or zip code, in the credit decision. This has driven the banking industry to develop models with results that can be easily interpreted. However, the goal of interpretable model results goes well beyond just banking.
Let’s look at a credit scorecard model example that employs three variables: age, income, and home ownership. Each variable has a value, or level, that contributes scorecard points, as shown in Figure 1. The points are summed together and if they exceed the threshold the applicant is approved for a loan:
Credit Approval Threshold ≥ 500

Applicant 1:
AGE = 32 => 120 points
OWNERSHIP = OWN => 225 points
INCOME = $30,000 => 180 points
Credit Score = 525 => Loan Approved

Applicant 2:
AGE = 22 => 100 points
OWNERSHIP = OWN => 225 points
INCOME = $8,000 => 120 points
Credit Score = 445 => Loan Rejected
These simple variables provide clear guidelines for decision makers, making loan approval decisions transparent and easy to interpret, discuss, and defend.
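The approval logic above is easy to express in code. The sketch below uses hypothetical bin boundaries and point values chosen to reproduce the Figure 1 example; in a real scorecard both come from the fitted model.

```python
# Hypothetical point tables chosen to reproduce the example above;
# in a real scorecard these come from the fitted model.
AGE_BINS = [(25, 100), (45, 120), (float("inf"), 150)]         # (upper bound, points)
INCOME_BINS = [(15_000, 120), (50_000, 180), (float("inf"), 220)]
OWNERSHIP_POINTS = {"OWN": 225, "RENT": 110, "OTHER": 90}      # assumed values
APPROVAL_THRESHOLD = 500

def points_for(value, bins):
    """Return the points of the first bin whose upper bound exceeds the value."""
    for upper, points in bins:
        if value < upper:
            return points
    raise ValueError("value not covered by any bin")

def score_applicant(age, income, ownership):
    """Sum the points for each characteristic and apply the threshold."""
    total = (points_for(age, AGE_BINS)
             + points_for(income, INCOME_BINS)
             + OWNERSHIP_POINTS[ownership])
    decision = "Approved" if total >= APPROVAL_THRESHOLD else "Rejected"
    return total, decision
```

Running this for the two applicants above yields 525 (approved) and 445 (rejected), matching the example.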
Data scientists often build models where the client must interpret the results or understand the factors driving them. In these cases, scorecard modeling provides an easy solution. With scorecards there are no continuous variables: every variable is categorized. This is the most important step. The key to scorecard models is to categorize, or "bin," the variables in a way that summarizes as much information as possible, and then build models that eliminate weak variables or those that do not conform to good business logic.
Binning simplifies many analysis issues that are complex for linear models.
As a starting point for binning, select a continuous variable and divide it into groups whose members are most alike. This can be done with decision trees, Gini statistics, Chi-square tests, random statistical buzzwords, etc. – whatever method you’d like to use to categorize individual continuous variables. Imagine starting with many bins for a continuous variable, then checking which bins can be combined statistically. Treating a variable as continuous assumes that every level differs when predicting the target variable. Most of the time this is not true. Is there a difference between someone with an income of $38,000 and someone with $39,000? Most likely not, but treating income as a continuous variable makes this assumption. By categorizing, we let the computer decide whether there is a statistical difference; if there isn’t, the levels can be combined into the same category.
Once everything is binned there may be a large number of categories – probably too many for modeling. To resolve this, we calculate and examine key assessment metrics; chief among them is the Weight of Evidence (WOE).
Weight of Evidence is key in scorecard modeling projects because it determines how well particular attributes separate good and bad accounts – our binary target variable. WOE measures this on a category-by-category basis, comparing the proportion of good accounts to bad accounts at each attribute level of the predictor variable.
Consider an example of a FICO score. Using decision trees the FICO score is divided into 10 groups that are “optimized” based on their ability to predict loan defaults in the same way as the previous examples. As shown in Figure 3, there are people who did and did not default in each group. WOE is a simple way of comparing these two groups by calculating the log of the ratio between the proportion of good loans and bad loans. In the example histogram (Figure 4), the red bars represent the proportion of bad loans within the total population. In this example, 14% of all people with a FICO score of less than 610 defaulted on a loan. In contrast, 4% of all people who did not default on a loan (blue bar) had a FICO score of less than 610.
With a WOE calculation, negative numbers indicate that bads outweigh goods, while positive numbers indicate that goods outweigh bads. The farther a category’s WOE is from zero, the better that category separates the two groups of the target variable (e.g. good versus bad loans). This technique is also great for handling missing data – for example, when someone didn’t report their FICO score. With scorecards, those individuals get their own category. In fact, as shown in Figure 5, we can say that people who did not report their credit score look a lot like those with a FICO score around 630-653.
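As a sketch, the WOE of a bin is the log of the bin's share of all goods divided by its share of all bads. The counts below are invented, but chosen so the "<610" bin holds 4% of the goods and 14% of the bads, matching the proportions described above; missing values simply get a bin of their own.

```python
import math

# Hypothetical bin counts for a FICO-style variable (1,000 goods, 100 bads);
# "MISSING" is its own bin, so no records are thrown away.
bin_counts = {
    "<610":    {"good": 40,  "bad": 14},
    "610-653": {"good": 160, "bad": 30},
    "MISSING": {"good": 55,  "bad": 11},
    ">653":    {"good": 745, "bad": 45},
}

def weight_of_evidence(counts):
    """WOE per bin: log( share of goods in bin / share of bads in bin )."""
    total_good = sum(c["good"] for c in counts.values())
    total_bad = sum(c["bad"] for c in counts.values())
    return {
        name: math.log((c["good"] / total_good) / (c["bad"] / total_bad))
        for name, c in counts.items()
    }
```

Here the "<610" bin gets WOE = ln(0.04 / 0.14) ≈ -1.25 (bads outweigh goods), while ">653" comes out positive.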
The scorecard is typically built on a logistic regression model in which the original inputs are replaced by their WOE values. Essentially, the WOE column in Figure 3 becomes the input to your model. This is done for every variable, so your model is full of WOE representations of variables treated as continuous. You are converting a continuous variable into a categorical variable, assigning a value to each category, and then treating it as a numeric variable again. But that numeric variable no longer means “age is 25” or “income is $20K”; rather, it represents the propensity to default within each individual category.
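Concretely, the substitution is just a lookup: each raw value is mapped to the WOE of its bin before the regression is fit. The WOE values below are illustrative, with the missing-value bin deliberately given the same WOE as the mid-range bin, echoing the observation from Figure 5 that non-reporters resemble mid-range borrowers.

```python
# Illustrative WOE table (hypothetical values); MISSING shares the
# mid-range bin's WOE, as non-reporters behaved like that group.
WOE_TABLE = {"<610": -1.25, "610-653": 0.17, "MISSING": 0.17, ">653": 0.50}

def fico_to_woe(fico):
    """Map a raw FICO score (None = not reported) to its bin's WOE value."""
    if fico is None:
        return WOE_TABLE["MISSING"]
    if fico < 610:
        return WOE_TABLE["<610"]
    if fico <= 653:
        return WOE_TABLE["610-653"]
    return WOE_TABLE[">653"]

# The model matrix then contains WOE values instead of raw scores:
applicants = [605, None, 720]
model_inputs = [fico_to_woe(f) for f in applicants]
```

The logistic regression never sees a raw FICO score or a missing-value flag, only the WOE-coded column.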
So why go through all of this math if we are still going to use a logistic regression? Because all of the variables are now on the same scale. Therefore, the coefficients from the logistic regression model can be used to directly compare which variables have more “influence” on the outcome. The further from zero, the more important the variable is to the model outcome.
The final step in our example is to determine the point values for each FICO group. Using the coefficients from the logistic regression together with the WOE values, we can calculate the scores (points) for each category. The points for each category of a variable are calculated by multiplying the variable’s coefficient by the WOE value of the category and then applying a couple of adjustment factors to scale the scores. These scaling factors define the range of possible values for the scorecard. For example, a person with a FICO score of 757 would score 123 points, as shown in Figure 6. This point value is added to the points for the other characteristics that comprise the overall credit score to determine loan eligibility.
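A common industry scaling convention (an assumption here; the article does not spell out its factors) anchors the scorecard with two choices: a target score that corresponds to given good/bad odds, and the "points to double the odds" (PDO). The intercept is spread evenly across the model's variables. A sketch:

```python
import math

def category_points(coef, woe, intercept, n_vars,
                    pdo=20, target_score=600, target_odds=50):
    """Points for one category under the standard scaling convention:
    a total score of `target_score` represents `target_odds`:1 odds of
    being good, and the odds double every `pdo` points. All parameter
    values here are illustrative defaults, not the article's."""
    factor = pdo / math.log(2)
    offset = target_score - factor * math.log(target_odds)
    return round((coef * woe + intercept / n_vars) * factor + offset / n_vars)
```

Given a fitted coefficient and the WOE of a borrower's bin, this returns that bin's contribution to the total score; summing across all variables gives the score compared against the approval threshold.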
Although this process includes a lot of technical detail, the result is a model outcome that can be easily interpreted – the gold we all search for in modeling. Empirical studies have shown that scorecard models have the same predictive power as the logistic regressions they are built from. You get improved interpretability and the same predictive power, and you can easily handle outliers and missing values. Now that scores high in my book! Or model. Whichever you prefer.
About the Author:
Dr. Aric LaBarr is Director and Senior Scientist at Elder Research, Inc. He is passionate about helping people solve challenges using their data, and he mentors a team of data scientists who work closely with clients and partners to solve problems in predictive modeling, advanced analytics, forecasting, and risk management.
Prior to joining Elder Research, Aric was a faculty member at the Institute for Advanced Analytics at North Carolina State University, home of the nation’s first Master of Science in Analytics degree program. There he helped design the innovative curriculum that prepares a modern workforce to wisely communicate and handle a data-driven future. Dr. LaBarr developed and taught courses in statistics, mathematics, finance, risk management, and operations research, focusing on teaching, mentoring, and consulting with students and businesses on modern techniques for making data-driven decisions.
Aric holds a Ph.D. in Statistics with a minor in Economics, an M.S. in Statistics, and B.S. degrees in Statistics and Economics — all from North Carolina State University. Aric likes spending time with his family, reading, and playing sports – mostly basketball and tennis/squash.