A Taiwan-based credit card issuer wants to better predict the likelihood of default for its customers, as well as identify the key drivers that determine this likelihood. This would inform the issuer's decisions on whom to give a credit card and what credit limit to provide. It would also help the issuer better understand its current and potential customers, which would inform its future strategy, including its plans to offer targeted credit products to its customers.
(Data source: https://www.kaggle.com/uciml/default-of-credit-card-clients-dataset. We acknowledge the following: Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.)
The credit card issuer has gathered information on 30000 customers. The dataset contains information on 24 variables, including demographic factors, credit data, history of payment, and bill statements of credit card customers from April 2005 to September 2005, as well as information on the outcome: did the customer default or not?
Name | Description |
---|---|
ID | ID of each client |
LIMIT_BAL | Amount of given credit in NT dollars (includes individual and family/supplementary credit) |
SEX | Gender (1=male, 2=female) |
EDUCATION | Education level (1=graduate school, 2=university, 3=high school, 4=others, 5=unknown, 6=unknown) |
MARRIAGE | Marital status (1=married, 2=single, 3=others) |
AGE | Age in years |
PAY_0 | Repayment status in September, 2005 (-2=no consumption, -1=pay duly, 0=the use of revolving credit, 1=payment delay for one month, 2=payment delay for two months, … 8=payment delay for eight months, 9=payment delay for nine months and above) |
PAY_2 | Repayment status in August, 2005 (scale same as above) |
PAY_3 | Repayment status in July, 2005 (scale same as above) |
PAY_4 | Repayment status in June, 2005 (scale same as above) |
PAY_5 | Repayment status in May, 2005 (scale same as above) |
PAY_6 | Repayment status in April, 2005 (scale same as above) |
BILL_AMT1 | Amount of bill statement in September, 2005 (NT dollar) |
BILL_AMT2 | Amount of bill statement in August, 2005 (NT dollar) |
BILL_AMT3 | Amount of bill statement in July, 2005 (NT dollar) |
BILL_AMT4 | Amount of bill statement in June, 2005 (NT dollar) |
BILL_AMT5 | Amount of bill statement in May, 2005 (NT dollar) |
BILL_AMT6 | Amount of bill statement in April, 2005 (NT dollar) |
PAY_AMT1 | Amount of previous payment in September, 2005 (NT dollar) |
PAY_AMT2 | Amount of previous payment in August, 2005 (NT dollar) |
PAY_AMT3 | Amount of previous payment in July, 2005 (NT dollar) |
PAY_AMT4 | Amount of previous payment in June, 2005 (NT dollar) |
PAY_AMT5 | Amount of previous payment in May, 2005 (NT dollar) |
PAY_AMT6 | Amount of previous payment in April, 2005 (NT dollar) |
default.payment.next.month | Default payment (1=yes, 0=no) |
Let's look into the data for a few customers. This is how the first 10 of the 30000 rows look (transposed, for convenience):
01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 | |
---|---|---|---|---|---|---|---|---|---|---|
ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
LIMIT_BAL | 20000 | 120000 | 90000 | 50000 | 50000 | 50000 | 500000 | 100000 | 140000 | 20000 |
SEX | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 2 | 1 |
EDUCATION | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 2 | 3 | 3 |
MARRIAGE | 1 | 2 | 2 | 1 | 1 | 2 | 2 | 2 | 1 | 2 |
AGE | 24 | 26 | 34 | 37 | 57 | 37 | 29 | 23 | 28 | 35 |
PAY_0 | 2 | -1 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | -2 |
PAY_2 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | -2 |
PAY_3 | -1 | 0 | 0 | 0 | -1 | 0 | 0 | -1 | 2 | -2 |
PAY_4 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -2 |
PAY_5 | -2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 |
PAY_6 | -2 | 2 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | -1 |
BILL_AMT1 | 3913 | 2682 | 29239 | 46990 | 8617 | 64400 | 367965 | 11876 | 11285 | 0 |
BILL_AMT2 | 3102 | 1725 | 14027 | 48233 | 5670 | 57069 | 412023 | 380 | 14096 | 0 |
BILL_AMT3 | 689 | 2682 | 13559 | 49291 | 35835 | 57608 | 445007 | 601 | 12108 | 0 |
BILL_AMT4 | 0 | 3272 | 14331 | 28314 | 20940 | 19394 | 542653 | 221 | 12211 | 0 |
BILL_AMT5 | 0 | 3455 | 14948 | 28959 | 19146 | 19619 | 483003 | -159 | 11793 | 13007 |
BILL_AMT6 | 0 | 3261 | 15549 | 29547 | 19131 | 20024 | 473944 | 567 | 3719 | 13912 |
PAY_AMT1 | 0 | 0 | 1518 | 2000 | 2000 | 2500 | 55000 | 380 | 3329 | 0 |
PAY_AMT2 | 689 | 1000 | 1500 | 2019 | 36681 | 1815 | 40000 | 601 | 0 | 0 |
PAY_AMT3 | 0 | 1000 | 1000 | 1200 | 10000 | 657 | 38000 | 0 | 432 | 0 |
PAY_AMT4 | 0 | 1000 | 1000 | 1100 | 9000 | 1000 | 20239 | 581 | 1000 | 13007 |
PAY_AMT5 | 0 | 0 | 1000 | 1069 | 689 | 1000 | 13750 | 1687 | 1000 | 1122 |
PAY_AMT6 | 0 | 2000 | 5000 | 1000 | 679 | 800 | 13770 | 1542 | 1000 | 0 |
It is important to remember that data analytics projects require a delicate balance between experimentation, intuition, and following a process. The value of following a process is that it helps us avoid being fooled by randomness in the data and finding "results" and "patterns" that are driven mainly by our own biases rather than by the facts/data themselves.
There is no single best process for classification. However, we have to start somewhere, so we will use the following process:

1. Split the data into an estimation sample, a validation sample, and a test sample.
2. Set up the dependent variable.
3. Make a simple, descriptive analysis of the data.
4. Estimate the classification model(s).
5. Assess the performance of the model(s) on the validation data (iterating steps 2-5 as needed).
6. Measure and report the performance of the final model on the test data.

Let's follow these steps.
It is very important that you (or the data scientists working on the project) ultimately measure and report the performance of the models on data that have not been used at all during the analysis (steps 2-5 above), called "out-of-sample" or test data. The idea is that in practice we want our models to predict the class of observations/data we have not yet seen (i.e., "the future data"): although the performance of a classification method may be high on the data used to estimate the model parameters, it may be significantly poorer on data not used for parameter estimation, such as the out-of-sample (future) data.
This is why we split the data into an estimation sample and two validation samples, using some kind of randomized splitting technique. The second validation sample mimics out-of-sample data, and the performance on this validation set is a better approximation of the performance one should expect in practice from the selected classification method. The estimation data and the first validation data are used during steps 2-5 (with a few iterations of these steps), while the second validation sample is only used once at the very end, before making final business decisions based on the analysis. The split can be, for example, 80% estimation, 10% validation, and 10% test data, depending on the number of observations - for example, when there is a lot of data, you may keep only a few hundred observations for the validation and test sets and use the rest for estimation.
While setting up the estimation and validation samples, you should also check that the same proportion of data from each class (i.e., customers who default versus those who do not) is maintained in each sample. That is, you should maintain the same balance of the dependent variable categories as in the overall dataset.
For simplicity, in this note we will not iterate steps 2-5. In practice, however, we should usually iterate steps 2-5 a number of times using the first validation sample each time, and at the end make our final assessment of the classification model using the test sample only once.
We typically refer to the three data samples as estimation data (80% of the data in our case), validation data (10% of the data) and test data (the remaining 10% of the data).
In our case we use 24000 observations in the estimation data, 3000 in the validation data, and 3000 in the test data.
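As a concrete illustration, here is a minimal R sketch of such a stratified random split (the file name, the data frame name ProjectData, and the 80/10/10 proportions are illustrative assumptions; the exact code used for this note may differ):

```r
# Read the data (file name is an assumption) and split it 80/10/10 into
# estimation, validation, and test samples, preserving the proportion of
# defaulters in each sample (a stratified random split).
ProjectData <- read.csv("UCI_Credit_Card.csv")
set.seed(42)  # for reproducibility

split_within_class <- function(idx) {
  idx <- sample(idx)  # shuffle the indices of one class
  n <- length(idx)
  cut1 <- floor(0.8 * n)
  cut2 <- floor(0.9 * n)
  list(estimation = idx[1:cut1],
       validation = idx[(cut1 + 1):cut2],
       test       = idx[(cut2 + 1):n])
}

by_class <- lapply(split(seq_len(nrow(ProjectData)),
                         ProjectData$default.payment.next.month),
                   split_within_class)

estimation_data <- ProjectData[unlist(lapply(by_class, `[[`, "estimation")), ]
validation_data <- ProjectData[unlist(lapply(by_class, `[[`, "validation")), ]
test_data       <- ProjectData[unlist(lapply(by_class, `[[`, "test")), ]
```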
First, make sure the dependent variable is set up as a categorical 0-1 variable. In our illustrative example, we use the payment default (or no default) as the dependent variable.
The data, however, may not always be readily available with a categorical dependent variable. Suppose a retailer wants to understand what discriminates consumers who are loyal versus those who are not. If they have data on the amount that customers spend in their store or the frequency of their purchases, they can create a categorical variable ("loyal vs. not loyal") by using a definition such as: "A loyal customer is one who spends more than X amount at the store and makes at least Y purchases a year". They can then code these loyal customers as "1" and the others as "0". They can choose the thresholds X and Y as they wish: a definition/decision that may have a big impact on the overall analysis. This decision can be the most crucial one of the whole data analysis: a wrong choice at this step may lead both to poor performance later and to no valuable insights. One should revisit the choice made at this step several times, iterating steps 2-3 and 2-5.
Carefully deciding what the dependent 0/1 variable is can be the most critical choice of a classification analysis. This decision typically depends on contextual knowledge and needs to be revisited multiple times throughout a data analytics project.
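As a purely hypothetical sketch of the retailer example above (the data frame customers, its columns, and the threshold values are all made up for illustration):

```r
# Thresholds chosen by the retailer (hypothetical values)
X <- 5000  # minimum amount spent at the store per year
Y <- 12    # minimum number of purchases per year

# Code loyal customers as 1 and all others as 0
customers$loyal <- ifelse(customers$annual_spend > X &
                          customers$purchases_per_year >= Y, 1, 0)
```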
The number of observations in each class of our estimation sample is as follows:
Class 1 | Class 0 | |
---|---|---|
# of Observations | 5331 | 18669 |
while in the validation sample they are:
Class 1 | Class 0 | |
---|---|---|
# of Observations | 637 | 2363 |
Good data analytics starts with good contextual knowledge as well as a simple statistical and visual exploration of the data. In the case of classification, one can explore "simple classifications" by assessing how the classes differ along each of the independent variables. For example, these are the statistics of our independent variables across the two classes in the estimation data, first for class 1 ("default"):
min | 25th percentile | median | mean | 75th percentile | max | std | |
---|---|---|---|---|---|---|---|
ID | 2 | 7467.5 | 14756 | 14781.09 | 21795.5 | 30000 | 8553.78 |
LIMIT_BAL | 10000 | 50000.0 | 90000 | 131206.15 | 200000.0 | 740000 | 116134.54 |
SEX | 1 | 1.0 | 2 | 1.57 | 2.0 | 2 | 0.50 |
EDUCATION | 1 | 1.0 | 2 | 1.89 | 2.0 | 6 | 0.73 |
MARRIAGE | 0 | 1.0 | 2 | 1.53 | 2.0 | 3 | 0.53 |
AGE | 21 | 28.0 | 34 | 35.77 | 42.0 | 75 | 9.72 |
PAY_0 | -2 | 0.0 | 1 | 0.66 | 2.0 | 8 | 1.39 |
PAY_2 | -2 | -1.0 | 0 | 0.46 | 2.0 | 7 | 1.50 |
PAY_3 | -2 | -1.0 | 0 | 0.36 | 2.0 | 8 | 1.50 |
PAY_4 | -2 | -1.0 | 0 | 0.25 | 2.0 | 8 | 1.51 |
PAY_5 | -2 | -1.0 | 0 | 0.16 | 0.0 | 8 | 1.48 |
PAY_6 | -2 | -1.0 | 0 | 0.11 | 0.0 | 8 | 1.48 |
BILL_AMT1 | -6676 | 3200.0 | 20235 | 49026.60 | 60143.5 | 581775 | 74210.08 |
BILL_AMT2 | -9119 | 2781.5 | 20334 | 47866.25 | 58373.5 | 572834 | 72351.47 |
BILL_AMT3 | -46127 | 2500.0 | 19862 | 45528.02 | 55020.0 | 578971 | 68776.03 |
BILL_AMT4 | -65167 | 2144.0 | 19253 | 42635.59 | 50737.0 | 548020 | 65208.50 |
BILL_AMT5 | -53007 | 1555.0 | 18669 | 40056.11 | 48474.5 | 547880 | 62194.53 |
BILL_AMT6 | -339603 | 1257.0 | 18223 | 38825.11 | 47743.5 | 514975 | 60507.70 |
PAY_AMT1 | 0 | 0.0 | 1641 | 3347.45 | 3492.5 | 300000 | 9204.64 |
PAY_AMT2 | 0 | 0.0 | 1580 | 3350.97 | 3400.0 | 344467 | 10026.24 |
PAY_AMT3 | 0 | 0.0 | 1250 | 3453.82 | 3000.0 | 508229 | 13176.50 |
PAY_AMT4 | 0 | 0.0 | 1000 | 3123.42 | 2912.5 | 292462 | 10186.03 |
PAY_AMT5 | 0 | 0.0 | 1000 | 3189.35 | 3000.0 | 332000 | 11904.56 |
PAY_AMT6 | 0 | 0.0 | 1000 | 3448.10 | 2987.0 | 345293 | 13272.63 |
and class 0 (“no default”):
min | 25th percentile | median | mean | 75th percentile | max | std | |
---|---|---|---|---|---|---|---|
ID | 3 | 7540 | 15047 | 15065.42 | 22658 | 29997 | 8671.77 |
LIMIT_BAL | 10000 | 60000 | 150000 | 177782.74 | 250000 | 800000 | 132213.16 |
SEX | 1 | 1 | 2 | 1.61 | 2 | 2 | 0.49 |
EDUCATION | 0 | 1 | 2 | 1.84 | 2 | 6 | 0.80 |
MARRIAGE | 0 | 1 | 2 | 1.56 | 2 | 3 | 0.52 |
AGE | 21 | 28 | 34 | 35.44 | 41 | 79 | 9.10 |
PAY_0 | -2 | -1 | 0 | -0.21 | 0 | 8 | 0.95 |
PAY_2 | -2 | -1 | 0 | -0.30 | 0 | 7 | 1.03 |
PAY_3 | -2 | -1 | 0 | -0.31 | 0 | 8 | 1.05 |
PAY_4 | -2 | -1 | 0 | -0.35 | 0 | 8 | 1.01 |
PAY_5 | -2 | -1 | 0 | -0.39 | 0 | 7 | 0.98 |
PAY_6 | -2 | -1 | 0 | -0.40 | 0 | 7 | 1.00 |
BILL_AMT1 | -165580 | 3640 | 23064 | 51696.23 | 67925 | 746814 | 73417.01 |
BILL_AMT2 | -69777 | 3052 | 21624 | 49530.07 | 65002 | 743970 | 70889.11 |
BILL_AMT3 | -157264 | 2711 | 20179 | 47435.68 | 61257 | 1664089 | 69962.40 |
BILL_AMT4 | -170000 | 2429 | 18983 | 43481.10 | 55472 | 628699 | 64220.53 |
BILL_AMT5 | -81334 | 1852 | 17987 | 40340.84 | 50836 | 551702 | 60071.12 |
BILL_AMT6 | -209051 | 1261 | 16624 | 38869.62 | 49584 | 699944 | 59282.31 |
PAY_AMT1 | 0 | 1160 | 2451 | 6275.78 | 5620 | 505000 | 17196.09 |
PAY_AMT2 | 0 | 1002 | 2203 | 6571.08 | 5275 | 1684259 | 24867.28 |
PAY_AMT3 | 0 | 600 | 2000 | 5626.60 | 5000 | 417588 | 16465.90 |
PAY_AMT4 | 0 | 390 | 1713 | 5270.37 | 4600 | 497000 | 15845.50 |
PAY_AMT5 | 0 | 365 | 1729 | 5233.70 | 4520 | 426529 | 16269.20 |
PAY_AMT6 | 0 | 300 | 1700 | 5624.07 | 4500 | 528666 | 18692.94 |
The purpose of such an analysis by class is to get an initial idea about whether the classes are indeed separable, as well as to understand which of the independent variables have the most discriminatory power.
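A minimal sketch of how such by-class summary tables could be produced in R, continuing from the estimation sample defined above (the helper function name is illustrative):

```r
# Summary statistics of every independent variable, computed separately
# for class 1 ("default") and class 0 ("no default")
by_class_summary <- function(data, class_value) {
  rows <- data$default.payment.next.month == class_value
  vars <- setdiff(names(data), "default.payment.next.month")
  t(sapply(data[rows, vars], function(x)
    c(min = min(x), p25 = as.numeric(quantile(x, 0.25)), median = median(x),
      mean = mean(x), p75 = as.numeric(quantile(x, 0.75)), max = max(x),
      std = sd(x))))
}

round(by_class_summary(estimation_data, 1), 2)  # class 1 ("default")
round(by_class_summary(estimation_data, 0), 2)  # class 0 ("no default")
```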
Notice, however, that even if no single independent variable differs much across the classes, classification may still be feasible: a (linear or nonlinear) combination of independent variables may still be discriminatory.
A simple visualization tool for assessing the discriminatory power of the independent variables is the box plot. A box plot visually indicates simple summary statistics of an independent variable (e.g., the median, the top and bottom quartiles, the min and max). For example, consider the box plots of the repayment status variables in our estimation data, for class 1
and class 0:
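A minimal sketch of how these box plots could be drawn in R, one panel per class, for the repayment status variables:

```r
# Box plots of the repayment status variables, drawn separately by class
pay_vars <- c("PAY_0", "PAY_2", "PAY_3", "PAY_4", "PAY_5", "PAY_6")
par(mfrow = c(1, 2))
boxplot(estimation_data[estimation_data$default.payment.next.month == 1, pay_vars],
        main = "Class 1 (default)")
boxplot(estimation_data[estimation_data$default.payment.next.month == 0, pay_vars],
        main = "Class 0 (no default)")
```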
Questions:
Answers:
Once we decide which dependent and independent variables to use (a choice that can be revisited in later iterations), we can use a number of classification methods to develop a model that discriminates between the different classes.
Some of the widely used classification methods are: classification and regression trees (CART), boosted trees, support vector machines, neural networks, nearest neighbors, logistic regression, lasso, random forests, deep learning methods, etc.
In this note we will consider for simplicity only two classification methods: logistic regression and classification and regression trees (CART). However, replacing them with other methods is relatively simple (although some knowledge of how these methods work is often necessary - see the R help command for the methods if needed). Understanding how these methods work is beyond the scope of this note - there are many references available online for all these classification methods.
Logistic Regression: Logistic regression is a method similar to linear regression, except that the dependent variable is discrete (e.g., 0 or 1). Logistic regression estimates the coefficients of a linear model in the selected independent variables by maximizing the likelihood of the observed classes (a classification criterion). For example, these are the estimated logistic regression parameters for our data:
Estimate | Std. Error | z value | Pr(>|z|) | |
---|---|---|---|---|
(Intercept) | -0.7 | 0.1 | -5.2 | 0.0 |
ID | 0.0 | 0.0 | -1.1 | 0.3 |
LIMIT_BAL | 0.0 | 0.0 | -3.4 | 0.0 |
SEX | -0.1 | 0.0 | -3.1 | 0.0 |
EDUCATION | -0.1 | 0.0 | -4.1 | 0.0 |
MARRIAGE | -0.1 | 0.0 | -3.9 | 0.0 |
AGE | 0.0 | 0.0 | 3.8 | 0.0 |
PAY_0 | 0.6 | 0.0 | 28.7 | 0.0 |
PAY_2 | 0.1 | 0.0 | 4.0 | 0.0 |
PAY_3 | 0.1 | 0.0 | 2.8 | 0.0 |
PAY_4 | 0.0 | 0.0 | 1.2 | 0.2 |
PAY_5 | 0.0 | 0.0 | 0.6 | 0.6 |
PAY_6 | 0.0 | 0.0 | 0.5 | 0.6 |
BILL_AMT1 | 0.0 | 0.0 | -4.7 | 0.0 |
BILL_AMT2 | 0.0 | 0.0 | 2.2 | 0.0 |
BILL_AMT3 | 0.0 | 0.0 | 0.0 | 1.0 |
BILL_AMT4 | 0.0 | 0.0 | 0.1 | 0.9 |
BILL_AMT5 | 0.0 | 0.0 | 0.3 | 0.8 |
BILL_AMT6 | 0.0 | 0.0 | 0.8 | 0.4 |
PAY_AMT1 | 0.0 | 0.0 | -5.9 | 0.0 |
PAY_AMT2 | 0.0 | 0.0 | -4.2 | 0.0 |
PAY_AMT3 | 0.0 | 0.0 | -1.2 | 0.2 |
PAY_AMT4 | 0.0 | 0.0 | -2.7 | 0.0 |
PAY_AMT5 | 0.0 | 0.0 | -2.1 | 0.0 |
PAY_AMT6 | 0.0 | 0.0 | -1.2 | 0.2 |
Given a set of independent variables, the output of the estimated logistic regression (the sum of the products of the independent variables with the corresponding regression coefficients) can be used to assess the probability that an observation belongs to one of the classes. Specifically, passing the regression output through the logistic function transforms it into the probability of belonging to, say, class 1 for each observation. The estimated probability that a validation observation belongs to class 1 (e.g., the estimated probability that the customer defaults), for the first few validation observations and using the logistic regression above, is:
Actual Class | Predicted Class | Probability of Class 1 | |
---|---|---|---|
Obs 1 | 0 | 0 | 0.16 |
Obs 2 | 1 | 0 | 0.20 |
Obs 3 | 0 | 0 | 0.25 |
Obs 4 | 0 | 0 | 0.21 |
Obs 5 | 0 | 0 | 0.05 |
Obs 6 | 0 | 0 | 0.15 |
Obs 7 | 0 | 0 | 0.17 |
Obs 8 | 1 | 0 | 0.46 |
Obs 9 | 0 | 0 | 0.23 |
Obs 10 | 0 | 0 | 0.09 |
The default decision is to classify each observation into the class with the highest probability - but one can change this choice, as we discuss below.
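A minimal R sketch of estimating such a logistic regression and turning its output into class probabilities and predicted classes (the object names are illustrative; the 0.5 threshold is the default choice discussed below):

```r
# Estimate the logistic regression on the estimation sample,
# using all other columns as independent variables
logreg <- glm(default.payment.next.month ~ .,
              family = binomial(link = "logit"), data = estimation_data)
summary(logreg)  # coefficients, standard errors, z values, p-values

# Estimated probability of class 1 (default) for the validation sample,
# and the predicted class using a 50% probability threshold
prob1_logreg <- predict(logreg, newdata = validation_data, type = "response")
class_logreg <- ifelse(prob1_logreg > 0.5, 1, 0)
```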
Selecting the best subset of independent variables for logistic regression, a special case of the general problem of feature selection, is an iterative process in which both the significance of the regression coefficients and the performance of the estimated logistic regression model on the first validation data are used as guidance. A number of variations are tested in practice, each leading to different performance.
CART: CART is a widely used classification method, largely because the estimated classification models are easy to interpret. This classification tool iteratively "splits" the data using the most discriminatory independent variable at each step, building a "tree" - as shown below - along the way. CART limits the size of the tree using various statistical techniques in order to avoid overfitting the data. For example, using the rpart and rpart.control functions in R, we can limit the size of the tree by selecting the functions' complexity control parameter cp. (What this parameter does exactly is beyond the scope of this note. For the rpart and rpart.control functions in R, smaller values, e.g. cp=0.0001, lead to larger trees, as we will see next.)
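A minimal sketch of estimating such trees with rpart (only cp is varied; all other rpart.control settings are left at their defaults, and the original variable names are kept rather than renamed to IV1-IV24):

```r
library(rpart)

# rpart classification trees expect a categorical response, so turn the
# 0/1 outcome into a factor on a copy of the estimation sample
estimation_cart <- estimation_data
estimation_cart$default.payment.next.month <-
  factor(estimation_cart$default.payment.next.month)

# First CART: larger cp, hence a smaller tree
cart1 <- rpart(default.payment.next.month ~ ., data = estimation_cart,
               method = "class", control = rpart.control(cp = 0.0025))

# Second CART: smaller cp, hence a larger tree
cart2 <- rpart(default.payment.next.month ~ ., data = estimation_cart,
               method = "class", control = rpart.control(cp = 0.00068))

plot(cart1); text(cart1)  # a basic plot of the first (smaller) tree
```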
One of the biggest risks when developing classification models is overfitting: while it is always trivial to develop a model (e.g., a tree) that classifies any (estimation) dataset with no misclassification error at all, there is no guarantee that the quality of a classifier in out-of-sample data (e.g., in the validation data) will be close to that in the estimation data. Striking the right balance between “overfitting” and “underfitting” is one of the most important aspects in data analytics. While there are a number of statistical techniques to help us strike this balance - including the use of validation data - it is largely a combination of good statistical analysis and qualitative criteria (such as the interpretability and simplicity of the estimated models) that leads to classification models that work well in practice.
Running a basic CART model with complexity control cp=0.0025, leads to the following tree (NOTE: for better readability of the tree figures below, we will rename the independent variables as IV1 to IV24 when using CART):
The leaves of the tree indicate the number of estimation data observations that “reach that leaf” that belong to each class. A perfect classification would only have data from one class in each of the tree leaves. However, such a perfect classification of the estimation data would most likely not be able to classify well out-of-sample data due to overfitting of the estimation data.
One can estimate larger trees by changing the tree's complexity control parameter (in this case the rpart.control argument cp). For example, this is how the tree would look if we set cp=0.00068:
One can also use the percentage of data in each leaf of the tree to estimate the probability that an observation (e.g., a customer) belongs to a given class: the purity of a leaf indicates the probability that an observation "reaching that leaf" belongs to a class. In our case, the estimated probability that a validation observation belongs to class 1 (i.e., a customer's likelihood of default), for the first few validation observations and using the first CART above, is:
Actual Class | Predicted Class | Probability of Class 1 | |
---|---|---|---|
Obs 1 | 0 | 0 | 0.17 |
Obs 2 | 1 | 0 | 0.17 |
Obs 3 | 0 | 0 | 0.17 |
Obs 4 | 0 | 0 | 0.17 |
Obs 5 | 0 | 0 | 0.17 |
Obs 6 | 0 | 0 | 0.17 |
Obs 7 | 0 | 0 | 0.17 |
Obs 8 | 1 | 0 | 0.17 |
Obs 9 | 0 | 0 | 0.17 |
Obs 10 | 0 | 0 | 0.17 |
The table above assumes that the probability threshold for classifying an observation as "class 1" is 0.5. In practice we need to select the probability threshold: this is an important choice that we will discuss below.
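The corresponding sketch for reading these probabilities off the estimated tree and applying the 0.5 threshold (the column named "1" holds the probability of class 1):

```r
# Probability of class 1 (default) for the validation sample, from the tree leaves
prob1_cart1 <- predict(cart1, newdata = validation_data, type = "prob")[, "1"]
class_cart1 <- ifelse(prob1_cart1 > 0.5, 1, 0)
```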
Question:
Run CART for complexity parameter cp=0.0001 or smaller. Is it practical to run? Is it practical to interpret? Do you trust the classifier?
Answer:
We have already discussed feature selection and complexity control for classification methods. In our case, we can see the relative importance of the independent variables using the `variable.importance` element of the CART trees (see `help(rpart.object)` in R) or the z-scores from the output of the logistic regression. For easier visualization, we scale all values between -1 and 1 (the scaling is done for each method separately; note that CART does not provide the sign of the "coefficients"). From this table we can see the key drivers of the classification according to each of the methods we used here.
Logistic Regression | CART 1 | CART 2 | |
---|---|---|---|
ID | -0.04 | 0.00 | -0.01 |
LIMIT_BAL | -0.12 | 0.00 | 0.00 |
SEX | -0.11 | 0.00 | -0.01 |
EDUCATION | -0.14 | 0.00 | -0.01 |
MARRIAGE | -0.14 | 0.00 | 0.00 |
AGE | 0.13 | 0.00 | 0.01 |
PAY_0 | 1.00 | 1.00 | 1.00 |
PAY_2 | 0.14 | 0.03 | 0.23 |
PAY_3 | 0.10 | 0.04 | 0.06 |
PAY_4 | 0.04 | 0.04 | 0.05 |
PAY_5 | 0.02 | 0.04 | 0.05 |
PAY_6 | 0.02 | 0.03 | 0.05 |
BILL_AMT1 | -0.16 | 0.00 | -0.03 |
BILL_AMT2 | 0.08 | 0.01 | 0.03 |
BILL_AMT3 | 0.00 | 0.00 | 0.00 |
BILL_AMT4 | 0.00 | 0.00 | 0.02 |
BILL_AMT5 | 0.01 | 0.00 | 0.02 |
BILL_AMT6 | 0.03 | 0.00 | 0.02 |
PAY_AMT1 | -0.21 | 0.00 | -0.01 |
PAY_AMT2 | -0.15 | 0.00 | 0.00 |
PAY_AMT3 | -0.04 | 0.00 | -0.01 |
PAY_AMT4 | -0.09 | 0.00 | -0.01 |
PAY_AMT5 | -0.07 | 0.00 | -0.01 |
PAY_AMT6 | -0.04 | 0.00 | -0.01 |
In general it is not necessary for all methods to agree on the most important drivers: when there is “major” disagreement, particularly among models that have satisfactory performance as discussed next, we may need to reconsider the overall analysis, including the objective of the analysis as well as the data used, as the results may not be robust. As always, interpreting and using the results of data analytics requires a balance between quantitative and qualitative analysis.
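A minimal sketch of how such a table could be assembled, scaling each method's values by its largest absolute value (note that CART importances are unsigned, so this sketch will not reproduce the signs shown for CART 2 above):

```r
# CART variable importance, padded with zeros for variables the tree never uses
cart_importance <- function(tree, var_names) {
  imp <- rep(0, length(var_names))
  names(imp) <- var_names
  imp[names(tree$variable.importance)] <- tree$variable.importance
  imp / max(abs(imp))
}

var_names <- setdiff(names(estimation_data), "default.payment.next.month")

# Logistic regression: use the (signed) z values, scaled to [-1, 1]
z <- summary(logreg)$coefficients[-1, "z value"]  # drop the intercept

importance_table <- round(cbind(
  "Logistic Regression" = z[var_names] / max(abs(z)),
  "CART 1"              = cart_importance(cart1, var_names),
  "CART 2"              = cart_importance(cart2, var_names)), 2)
importance_table
```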
Using the predicted class probabilities of the validation data, as outlined above, we can generate some measures of classification performance. Before discussing them, note that given the probability an observation belongs to a class, a reasonable class prediction choice is to predict the class that has the highest probability. However, this does not need to be the only choice in practice.
Selecting the probability threshold based on which we predict the class of an observation is a decision the user needs to make. While in some cases a reasonable probability threshold is 50%, in other cases it may be 99.9% or 0.1%.
Question:
Answer:
For different choices of the probability threshold, one can measure a number of classification performance metrics, which are outlined next.
The hit ratio is the percentage of observations that have been correctly classified (i.e., for which the predicted class and the actual class are the same). We simply count the number of validation observations correctly classified and divide by the total number of validation observations, for the two CART models and the logistic regression above. For a probability threshold of 50%, the hit ratios are as follows:
Hit Ratio | |
---|---|
Logistic Regression | 82.23333 |
First CART | 82.93333 |
Second CART | 82.46667 |
For the estimation data, the hit rates are:
Hit Ratio | |
---|---|
Logistic Regression | 80.90417 |
First CART | 81.97917 |
Second CART | 82.98750 |
A simple benchmark against which to compare the hit ratio of a classification model is the Maximum Chance Criterion: the proportion of the largest class. For our validation data the largest group is customers who do not default (2363 out of 3000 customers). Clearly, if we classified all individuals into the largest group, we would get a hit ratio of 78.77% without doing any work. One should achieve a hit rate at least as high as the Maximum Chance Criterion rate, although, as we discuss next, there are more performance criteria to consider.
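A sketch of computing the hit ratio and the Maximum Chance Criterion for the validation sample, using the predicted classes defined in the earlier sketches:

```r
actual <- validation_data$default.payment.next.month

# Hit ratio: percentage of validation observations classified correctly
hit_ratio <- function(predicted, actual) 100 * mean(predicted == actual)
hit_ratio(class_logreg, actual)  # logistic regression
hit_ratio(class_cart1, actual)   # first CART

# Maximum Chance Criterion: hit ratio of always predicting the largest class
100 * max(table(actual)) / length(actual)
```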
The confusion matrix shows, for each actual class, the number (or percentage) of observations that are classified into each predicted class - that is, both the correct and the incorrect classifications. For example, for the method above with the highest hit rate in the validation data (among the logistic regression and the two CART models), and for a probability threshold of 50%, the confusion matrix for the validation data is:
Predicted 1 (default) | Predicted 0 (no default) | |
---|---|---|
Actual 1 (default) | 31.40 | 68.60 |
Actual 0 (no default) | 3.17 | 96.83 |
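A sketch of computing such a confusion matrix in row percentages (here for the first CART, the model with the highest validation hit ratio):

```r
# Rows are actual classes, columns are predicted classes; entries are row percentages
conf <- table(Actual = actual, Predicted = class_cart1)
round(100 * prop.table(conf, margin = 1), 2)
```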
Questions:
Answers:
Remember that each observation is classified by our model according to the probabilities Pr(0) and Pr(1) and a chosen probability threshold. Typically we set the probability threshold to 0.5 - so that observations for which Pr(1) > 0.5 are classified as 1’s. However, we can vary this threshold, for example if we are interested in correctly predicting all 1’s but do not mind missing some 0’s (and vice-versa).
When we change the probability threshold we get different values of the hit rate, the false positive rate, the false negative rate, and any other performance metric. We can plot, for example, how the false positive and true positive rates change as we vary the probability threshold, generating the so-called ROC curve.
The ROC curves for the validation data for the logistic regression as well as both the CARTs above are as follows:
What should a good ROC curve look like? A rule of thumb in assessing ROC curves is that the "higher" the curve (i.e., the closer it gets to the point with coordinates (0,1)), and hence the larger the area under the curve, the better. You may also select one point on the ROC curve (the "best" one for your purpose) and use the corresponding false positive/false negative performance (and the corresponding threshold for P(1)) to assess your model.
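A minimal sketch of tracing out such an ROC curve by hand for one of the classifiers, varying the probability threshold (the other classifiers are analogous; dedicated packages such as pROC can also be used):

```r
# True positive and false positive rates over a grid of probability thresholds
thresholds <- seq(0, 1, by = 0.01)
tpr <- sapply(thresholds, function(th) mean(prob1_logreg[actual == 1] > th))
fpr <- sapply(thresholds, function(th) mean(prob1_logreg[actual == 0] > th))

plot(fpr, tpr, type = "l", xlab = "False positive rate",
     ylab = "True positive rate", main = "ROC curve (validation data)")
abline(0, 1, lty = 2)  # the diagonal corresponds to random guessing
```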
Questions:
Answers:
The gains chart is a popular technique in certain applications, such as direct marketing or credit risk.
For a concrete example, consider the case of a direct marketing mailing campaign. Say we have a classifier that attempts to identify the likely responders by assigning each case a probability of response. We may want to select as few cases as possible and still capture the maximum number of responders possible.
We can measure the percentage of all responses the classifier captures if we only select, say, x% of cases: the top x% in terms of the probability of response assigned by our classifier. For each percentage x of cases we select, we can plot a point whose x-coordinate is the percentage of all cases that were selected, and whose y-coordinate is the percentage of all class 1 cases captured within the selected cases (i.e., the ratio of the classifier's true positives to all positives, assuming the classifier predicts class 1 for all the selected cases and class 0 for all the remaining ones). Plotting these points as we change the percentage x of cases we select (i.e., as we change the probability threshold of the classifier) gives the so-called gains chart.
In the credit card default case we are studying, the gains charts for the validation data for our three classifiers are the following:
Notice that if we were to select cases at random, instead of selecting the "best" ones using an informed classifier, the "random selection" gains chart would be a straight 45-degree line.
Question:
Why?
Answer:
So what should a good gains chart look like? The further above this 45-degree reference line our gains curve is, the better the "gains". Moreover, much like for the ROC curve, one can choose the percentage of cases to examine, which corresponds to selecting a particular point on the gains curve.
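A sketch of computing the points of such a gains chart, ranking the validation observations by their predicted probability of class 1 (here for the logistic regression):

```r
# Sort observations by decreasing predicted probability of default, then compute
# the cumulative share of actual defaulters captured as we select more cases
ord <- order(prob1_logreg, decreasing = TRUE)
pct_selected <- 100 * seq_along(ord) / length(ord)
pct_captured <- 100 * cumsum(actual[ord] == 1) / sum(actual == 1)

plot(pct_selected, pct_captured, type = "l", xlab = "% of cases selected",
     ylab = "% of class 1 cases captured", main = "Gains chart (validation data)")
abline(0, 1, lty = 2)  # random selection: the 45-degree line
```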
Question:
Which point on the gains curve should we select in practice?
Answer:
Finally, we can generate the so-called profit curve, which we often use to make our final decisions. The intuition is as follows. Consider a direct marketing campaign, and suppose it costs $1 to send an advertisement and the expected profit from a person who responds positively is $45. Suppose you have a database of 1 million people to whom you could potentially send the promotion. Typical response rates are 0.05%. To what fraction of the 1 million people should you send the promotion?
To answer this type of question, we need to create the profit curve. We compute an estimate of the total profit if we only select the top cases in terms of the probability of response assigned by our classifier. We can plot the profit curve by changing, as we did for the gains chart, the percentage of cases we select, and calculating the corresponding total estimated profit (or loss) we would generate. This is simply equal to:
Total Estimated Profit = (% of 1's correctly predicted) × (value of capturing a 1) + (% of 0's correctly predicted) × (value of capturing a 0) + (% of 1's incorrectly predicted as 0) × (cost of missing a 1) + (% of 0's incorrectly predicted as 1) × (cost of missing a 0)
Calculating the expected profit requires estimates of the four values/costs: the value of capturing a 1 or a 0, and the cost of misclassifying a 1 as a 0 or vice versa.
Given the values and costs of correct classifications and misclassifications, we can plot the total estimated profit (or loss) as we change the percentage of cases we select, i.e., the probability threshold of the classifier, like we did for the ROC and the gains chart.
In our credit card default case, we consider the following business profit and loss to the credit card issuer for the correctly classified and misclassified customers:
Predict 1 (default) | Predict 0 (no default) | |
---|---|---|
Actual 1 (default) | 0 | -100000 |
Actual 0 (no default) | 0 | 20000 |
Based on these profit and cost estimates, the profit curves for the validation data for the three classifiers are:
We can then select the percentage of selected cases that corresponds to the maximum estimated profit (or minimum loss, if necessary).
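A sketch of computing such a profit curve from the profit/loss table above, again ranking the validation observations by their predicted probability of default (here for the logistic regression; the CART models are analogous):

```r
# Profit/loss of each (actual, predicted) combination, from the table above
v_p1_a1 <- 0        # predict default,    customer defaults
v_p1_a0 <- 0        # predict default,    customer does not default
v_p0_a1 <- -100000  # predict no default, customer defaults
v_p0_a0 <- 20000    # predict no default, customer does not default

# Predict "default" for the top k customers by probability of default and
# "no default" for the rest; compute the total estimated profit for each k
ord <- order(prob1_logreg, decreasing = TRUE)
actual_sorted <- actual[ord]
n <- length(ord)
profit <- sapply(0:n, function(k) {
  predicted <- c(rep(1, k), rep(0, n - k))
  sum(ifelse(predicted == 1,
             ifelse(actual_sorted == 1, v_p1_a1, v_p1_a0),
             ifelse(actual_sorted == 1, v_p0_a1, v_p0_a0)))
})

plot(100 * (0:n) / n, profit, type = "l",
     xlab = "% of cases predicted as default", ylab = "Total estimated profit")
```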
Question:
Which point on the profit curve would you select in practice?
Answer:
Notice that to maximize estimated profit, we need to have the cost/profit for each of the four cases. This can be difficult to assess, so typically we want to perform a sensitivity analysis on our cost/profit assumptions. For example, we can generate different profit curves (i.e., worst-case, best-case, and average-case scenarios) and see how much the best attainable profit varies, and, most importantly, how our selection of the classification model and of the probability threshold corresponding to the best profit varies, as the classifier and the percentage of cases are what we ultimately need to decide on.
Having iterated steps 2-5 until we are satisfied with the performance of our selected model on the validation data, in this step the performance analysis outlined in step 5 needs to be done with the test sample. This is the performance that best mimics what one should expect in practice upon deployment of the classification solution, assuming (as always) that the data used for this performance analysis are representative of the situation in which the solution will be deployed.
Let's see how the hit ratio, confusion matrix, ROC curve, gains chart, and profit curve look for our test data. For the hit ratio and the confusion matrix we use 50% as the probability threshold for classification.
Hit Ratio | |
---|---|
Logistic Regression | 80.60000 |
First CART | 81.36667 |
Second CART | 81.70000 |
The confusion matrix for the model with the best test-data hit ratio above is:
Predicted 1 (default) | Predicted 0 (no default) | |
---|---|---|
Actual 1 (default) | 34.13 | 65.87 |
Actual 0 (no default) | 4.67 | 95.33 |
ROC curves for the test data:
Gains chart for the test data:
Finally, the profit curves for the test data, using the same profit/cost estimates as above:
Questions:
Answers: