
Credit_Risk_Analysis

Overview:

Credit risk is an inherently unbalanced classification problem, as good loans easily outnumber risky loans. Using my skills in data preparation, statistical reasoning, and machine learning, I employed different techniques to train and evaluate models with unbalanced classes.

  • Using the credit card credit dataset from LendingClub, a peer-to-peer lending services company (the data can be found in my Resources folder), I oversampled the data with the RandomOverSampler and SMOTE algorithms. Then, I undersampled the data using the ClusterCentroids algorithm.
  • Next, I used a combinatorial approach of over- and undersampling with the SMOTEENN algorithm.
  • Finally, I compared two new machine learning models that reduce bias, the BalancedRandomForestClassifier and EasyEnsembleClassifier, to predict credit risk. (A sketch of the shared setup follows this list.)
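
Every model below starts from the same prepared data. Here is a minimal sketch of that shared setup, not the notebook's exact code: the "loan_status" column name, its "high_risk"/"low_risk" labels, and the split arguments are assumptions based on the LendingClub data used here.

```python
# Minimal sketch of the shared setup (not the author's exact notebook code).
# The "loan_status" column name and the split arguments are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("LoanStats_2019Q1.csv")  # unzipped from the Resources folder

# Separate the target ("high_risk"/"low_risk") from the features,
# one-hot encoding any remaining text columns.
y = df["loan_status"]
X = pd.get_dummies(df.drop(columns="loan_status"))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```

The later sketches reuse the X_train, X_test, y_train, and y_test names defined here.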

Resources:

  • Data (CSV) source: LoanStats_2019Q1.csv.zip
  • Software: Python 3.9.7 and Jupyter Notebook, with the scikit-learn and imbalanced-learn libraries

Results:

First, let's review what exactly we're looking at in each model.

  • Balanced accuracy score: a metric for binary and multi-class classification that averages the recall obtained on each class, which makes it especially useful when the classes are imbalanced.
  • Precision: also known as positive predictive value (PPV), is a measure of how reliable a positive classification is. Precision is obtained by dividing the number of true positives (TP) by the number of all positives (i.e., the sum of true positives and false positives, or TP + FP). Precision = TP/(TP + FP).
  • Recall score: also known as sensitivity, is a measure of how well a machine learning model can detect positive instances. Sensitivity = TP/(TP + FN).
  • F1 score: the harmonic mean of precision and sensitivity, which serves as a single summary statistic of both. F1 score = 2(Precision * Sensitivity)/(Precision + Sensitivity).
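
As a quick illustration, all four metrics can be computed with scikit-learn. This sketch assumes a fitted classifier named model and the X_test/y_test split from the setup sketch; the pos_label value is an assumption about how the target is labeled.

```python
# Sketch: computing the metrics above with scikit-learn, assuming a fitted
# classifier named `model` and the X_test/y_test split from the setup sketch.
from sklearn.metrics import (
    balanced_accuracy_score,
    confusion_matrix,
    precision_score,
    recall_score,
    f1_score,
)

y_pred = model.predict(X_test)

print(balanced_accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# pos_label="high_risk" is an assumption about how the target is labeled.
print(precision_score(y_test, y_pred, pos_label="high_risk"))
print(recall_score(y_test, y_pred, pos_label="high_risk"))
print(f1_score(y_test, y_pred, pos_label="high_risk"))
```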

Naive Random Oversampling machine learning model


  • Balanced accuracy score: 0.6314677834584286, or roughly 0.63
  • Confusion matrix:
    • True positives: 50
    • False negatives: 37
    • False positives: 5337
    • Sensitivity/recall of this model: 50 / (50 + 37) ≈ 0.57
    • Precision of this model: 50 / (50 + 5337) ≈ 0.0093
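
For reference, a minimal sketch of this step with imbalanced-learn, assuming the X_train/y_train split from the setup sketch:

```python
# Sketch of the naive random oversampling step (assumes X_train/y_train).
from imblearn.over_sampling import RandomOverSampler
from sklearn.linear_model import LogisticRegression

# Randomly duplicate minority-class (high-risk) rows until classes balance.
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)

# Fit a logistic regression on the balanced training data.
model = LogisticRegression(solver="lbfgs", random_state=1)
model.fit(X_resampled, y_resampled)
```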

This chart lays out the model's confusion matrix in a simpler format, so readers can see what the counts above refer to moving forward:

                      Predicted high-risk     Predicted low-risk
  Actual high-risk    true positives (TP)     false negatives (FN)
  Actual low-risk     false positives (FP)    true negatives (TN)

SMOTE Oversampling


  • Balanced accuracy score: 0.6268316069795457, or roughly 0.63
  • Confusion matrix:
    • True positives: 53
    • False negatives: 34
    • False positives: 6086
    • Sensitivity/recall of this model: 53 / (53 + 34) ≈ 0.61
    • Precision of this model: 53 / (53 + 6086) ≈ 0.0086
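
A minimal sketch of the SMOTE step, under the same assumptions as the previous sketch:

```python
# Sketch of the SMOTE oversampling step (assumes X_train/y_train).
from imblearn.over_sampling import SMOTE

# SMOTE synthesizes new minority-class points by interpolating between
# nearest neighbors, rather than duplicating existing rows.
smote = SMOTE(random_state=1)
X_resampled, y_resampled = smote.fit_resample(X_train, y_train)
# The same LogisticRegression fit/predict steps follow as before.
```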

Undersampling


  • Balanced accuracy score: 0.5126747001543042, or roughly 0.51
  • Confusion matrix:
    • True positives: 50
    • False negatives: 37
    • False positives: 9404
    • Sensitivity/recall of this model: 50 / (50 + 37) ≈ 0.57
    • Precision of this model: 50 / (50 + 9404) ≈ 0.0053
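
A minimal sketch of the undersampling step, under the same assumptions as the earlier sketches:

```python
# Sketch of the ClusterCentroids undersampling step (assumes X_train/y_train).
from imblearn.under_sampling import ClusterCentroids

# ClusterCentroids shrinks the majority (low-risk) class by replacing it
# with the centroids of KMeans clusters, so both classes end up the same size.
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
```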

Combination (Over & Under) Sampling


  • Balanced accuracy score: 0.6413505042081133, or roughly 0.64
  • Confusion matrix:
    • True positives: 61
    • False negatives: 26
    • False positives: 7163
    • Sensitivity/recall of this model: 61 / (61 + 26) ≈ 0.70
    • Precision of this model: 61 / (61 + 7163) ≈ 0.0084
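
A minimal sketch of the combination step, under the same assumptions as the earlier sketches:

```python
# Sketch of the SMOTEENN combination step (assumes X_train/y_train).
from imblearn.combine import SMOTEENN

# SMOTEENN first oversamples with SMOTE, then cleans the result with
# Edited Nearest Neighbours, dropping points whose neighbors disagree.
smote_enn = SMOTEENN(random_state=1)
X_resampled, y_resampled = smote_enn.fit_resample(X_train, y_train)
```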

Ensemble Learners: Balanced Random Forest Classifier


  • Balanced accuracy score: 0.795829959187949, or roughly 0.80
  • Confusion matrix:
    • True positives: 62
    • False negatives: 25
    • False positives: 2071
    • Sensitivity/recall of this model: 62 / (62 + 25) ≈ 0.71
    • Precision of this model: 62 / (62 + 2071) ≈ 0.029
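
A minimal sketch of this model, under the same assumptions as the earlier sketches; the n_estimators value is a guess, not the notebook's setting:

```python
# Sketch of the balanced random forest model (assumes X_train/y_train).
from imblearn.ensemble import BalancedRandomForestClassifier

# Each tree in the forest is trained on a bootstrap sample that is
# randomly undersampled to balance the classes.
brf = BalancedRandomForestClassifier(n_estimators=100, random_state=1)
brf.fit(X_train, y_train)
y_pred = brf.predict(X_test)
```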

Easy Ensemble AdaBoost Classifier


  • Balanced accuracy score: 0.9263912558266958, or roughly 0.93
  • Confusion matrix:
    • True positives: 79
    • False negatives: 8
    • False positives: 946
    • Sensitivity/recall of this model: 79 / (79 + 8) ≈ 0.91
    • Precision of this model: 79 / (79 + 946) ≈ 0.077
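
A minimal sketch of this model, under the same assumptions as the earlier sketches; again, n_estimators is a guess:

```python
# Sketch of the easy ensemble model (assumes X_train/y_train).
from imblearn.ensemble import EasyEnsembleClassifier

# EasyEnsemble trains a bag of AdaBoost learners, each on a balanced,
# randomly undersampled subset of the training data.
eec = EasyEnsembleClassifier(n_estimators=100, random_state=1)
eec.fit(X_train, y_train)
y_pred = eec.predict(X_test)
```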

Summary:

After using the credit card credit dataset from LendingClub, resampling the data with different algorithms, and comparing two ensemble machine learning models that reduce bias, it's time to address my findings. In this segment, I evaluate the performance of each model and address whether it should be used to predict credit risk. For each model, though the numbers vary slightly, the sensitivity is far higher than the precision, which is very low across the board; this is why a single summary statistic like the F1 score, which ranges from 0 to 1 (0 being the lowest and 1 the highest), is worth keeping in mind. Reviewing each model, the lowest recall (0.57, for the Undersampling model) and the lowest balanced accuracy score (0.51) make it the worst of the six. The highest recall (0.91) and balanced accuracy score (0.93) were both achieved by the Easy Ensemble AdaBoost Classifier, so this model would be the best option for predicting credit risk, with the caveat that even its precision for high-risk loans is low (0.077), meaning it still flags many low-risk loans as high-risk.
