ROC - Receiver Operating Characteristic

Two days ago, I had a healthy conversation about classification models with a person I barely knew. Interestingly, we were discussing the techniques and approaches that should be followed to select an optimal classification model for better results. At some point I suggested ROC curves to him for exactly that reason.

Receiver operating characteristic, or ROC curve for short, is a graphical chart representing the performance of a classifier at varying threshold cut-offs. In general, ROC curves are used to compare classification models and help select the model that produces better results than the others.

[Figure: ROC curves]

Before attempting to make such ROC curves, let’s first understand how to interpret these graphs and choose the best model. Area under the curve (AUC) indicates the area below the model’s curve. In practice, a model with a higher AUC is considered to be the better model.
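Since AUC is the usual basis for comparing models, here is a minimal sketch of computing it, assuming scikit-learn is available; the labels and model scores below are made-up values for illustration only.

```python
# Comparing two classifiers by AUC; y_true and the score lists are
# hypothetical values, not taken from any real model.
from sklearn.metrics import roc_auc_score

y_true    = [0, 0, 1, 1, 0, 1, 1, 0]                   # actual classes
scores_m1 = [0.2, 0.4, 0.7, 0.8, 0.3, 0.9, 0.6, 0.1]   # model M1 probabilities
scores_m2 = [0.5, 0.6, 0.6, 0.7, 0.4, 0.8, 0.5, 0.3]   # model M2 probabilities

print("AUC M1:", roc_auc_score(y_true, scores_m1))
print("AUC M2:", roc_auc_score(y_true, scores_m2))
# Whichever model shows the higher AUC is the one we would prefer.
```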

A ROC graph is nothing but a plot of TPR against FPR. TPR runs along the y-axis and FPR runs along the x-axis. TPR (true positive rate) is also known as sensitivity or recall. FPR (false positive rate) is equivalent to 1 - specificity.

The concept of TPR and FPR can be easily understood with a simple example. Say you have a classification model that classifies good customers and bad customers based on some business data. With respect to this example, TPR is the fraction of good customers that the model predicts as good customers. Conversely, FPR is the fraction of bad customers that the model predicts as good customers.

In practice, we expect a good model to have a high TPR and a low FPR.

Drawing the ROC graph

Let’s say we have two models, M1 and M2. Out of these, we would like to find the better model, i.e. the one with better TPR and FPR values, which influence the accuracy of the result.

[Figure: confusion matrix]

Drawing a ROC curve starts with computing the confusion matrix. Let’s say for model M1 our initial cut-off threshold is 0.5, meaning a hypothesis result producing a value less than 0.5 will be considered as class 0 (e.g. bad customer), while a hypothesis result producing a value greater than or equal to 0.5 will be considered as class 1 (e.g. good customer). Based on the predicted class and the actual class, the confusion matrix is built.
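As a concrete illustration of that step, here is a short sketch, assuming scikit-learn; the actual classes and hypothesis outputs are hypothetical numbers chosen only to show the mechanics.

```python
# Turning hypothesis scores into classes at a 0.5 cut-off, then building
# the confusion matrix; all values here are made up for illustration.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # actual classes
scores = [0.9, 0.4, 0.6, 0.3, 0.2, 0.7, 0.8, 0.1]    # hypothesis output

threshold = 0.5
y_pred = [1 if s >= threshold else 0 for s in scores]

# scikit-learn lays the matrix out as:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```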

The formulas for TPR and FPR are

TPR = TP / (TP + FN)

FPR = FP / (FP + TN)
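Plugging counts into these formulas makes the two rates concrete; the numbers below are invented for illustration, not taken from the figure.

```python
# Hand-computing TPR and FPR from made-up confusion-matrix counts.
TP, FN = 40, 10   # good customers predicted good vs. predicted bad
FP, TN = 5, 45    # bad customers predicted good vs. predicted bad

tpr = TP / (TP + FN)   # 0.8 -- share of good customers caught
fpr = FP / (FP + TN)   # 0.1 -- share of bad customers let through
print("TPR:", tpr, "FPR:", fpr)
```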

[Figure: ROC curves]

From the confusion matrix above, we have all the information required to calculate TPR and FPR. From there we change the cut-off threshold (say to 0.6) and again compute the confusion matrix and the TPR and FPR values. The computed (FPR, TPR) pairs are then plotted onto a graph for model M1.
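Rather than repeating that loop by hand, scikit-learn’s roc_curve does the threshold sweep for us; the sketch below assumes scikit-learn and matplotlib, and reuses the hypothetical M1 scores from earlier.

```python
# Sweeping the cut-off threshold to trace the ROC curve for model M1;
# y_true and scores_m1 are the same illustrative values as above.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

y_true    = [0, 0, 1, 1, 0, 1, 1, 0]
scores_m1 = [0.2, 0.4, 0.7, 0.8, 0.3, 0.9, 0.6, 0.1]

# One (FPR, TPR) pair is produced for every distinct score threshold.
fpr, tpr, thresholds = roc_curve(y_true, scores_m1)

plt.plot(fpr, tpr, marker="o", label="M1")
plt.plot([0, 1], [0, 1], linestyle="--", label="random guess")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```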

