chanioxaris/german-credit-data

Experimental classification algorithms on German credit data, implemented using the scikit-learn library.


Overview

This is an analysis and classification of German credit data (more information in this PDF). Three classifiers were tested, Support Vector Machines (SVM), Random Forests, and Naive Bayes, to select the most efficient one for our data. The code is implemented in Python 3.6 using the scikit-learn library.

Data visualization

For each attribute, I used one of two different plot types, depending on the kind of data, to show how its values are distributed.

(Distribution plots for Attributes 1 through 20, shown in pairs.)
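
A minimal sketch of how such per-attribute plots could be generated with pandas and matplotlib; the file name dataset_train.tsv appears later in this README, but the column layout and the bar-chart/histogram split per attribute kind are assumptions:

```python
# Sketch: bar charts for categorical attributes, histograms for numerical ones.
# Assumes the last column of dataset_train.tsv is the label (layout assumption).
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('dataset_train.tsv', sep='\t')
attributes = data.iloc[:, :-1]  # drop the assumed label column

for column in attributes.columns:
    fig, ax = plt.subplots()
    if attributes[column].dtype == object:        # categorical attribute
        attributes[column].value_counts().plot.bar(ax=ax)
    else:                                         # numerical attribute
        attributes[column].plot.hist(ax=ax, bins=20)
    ax.set_title(column)
    fig.savefig(f'{column}.png')
    plt.close(fig)
```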

Classifier accuracy

I used 10-fold cross-validation with each of the three classifiers to measure their accuracy across multiple partitions of the training dataset. The results are as follows.

Statistic Measure    Naive Bayes    Random Forests    SVM
Accuracy             0.71250        0.73875           0.70125
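
A minimal sketch of this comparison using scikit-learn's cross_val_score, assuming the label is the last column of dataset_train.tsv; the classifier parameters shown are illustrative, not necessarily the repository's exact settings:

```python
# 10-fold cross-validation of the three classifiers on the training data.
# Categorical attributes are assumed to be numerically encoded in the file.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

data = pd.read_csv('dataset_train.tsv', sep='\t')
X, y = data.iloc[:, :-1], data.iloc[:, -1]  # label column assumed last

classifiers = {
    'Naive Bayes': GaussianNB(),
    'Random Forests': RandomForestClassifier(n_estimators=100, random_state=0),
    'SVM': SVC(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)  # accuracy on each of 10 folds
    print(f'{name}: mean accuracy = {scores.mean():.5f}')
```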

The Random Forests classifier performs better than the other two, so I will use it for the final prediction.

Information gain

In general terms, information gain is the change in information entropy H from a prior state to a state that takes some information as given: IG(T, a) = H(T) - H(T|a)
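
As a sketch, IG can be computed for a categorical attribute like this (helper names are mine, not the repository's; numerical attributes would need to be discretized first):

```python
import numpy as np
import pandas as pd

def entropy(labels):
    # Shannon entropy H of a label series, in bits.
    probs = labels.value_counts(normalize=True)
    return float(-np.sum(probs * np.log2(probs)))

def information_gain(data, attribute, target):
    # IG(T, a) = H(T) - H(T|a): prior entropy of the target minus the
    # expected entropy after partitioning the rows by the attribute's values.
    h_prior = entropy(data[target])
    h_conditional = sum(
        (len(group) / len(data)) * entropy(group[target])
        for _, group in data.groupby(attribute)
    )
    return h_prior - h_conditional
```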

The information gain of each attribute, sorted in ascending order, is shown in the following table.

Attribute Number Information Gain
18 0.000129665701928
11 0.000220571349274
19 0.001202862591080
16 0.002395770112590
17 0.002940316631290
10 0.005674399790160
14 0.007041506325140
08 0.007330500076830
20 0.007704386546440
15 0.011618886823700
09 0.012746841156200
13 0.013412980533000
07 0.014547865230200
12 0.014905530877300
05 0.018461146132300
06 0.022198966052400
04 0.026897452033100
02 0.032963429423100
03 0.037889406221500
01 0.093827963023500

The next step is to loop over the attributes, removing one at a time from the dataset in ascending order of information gain (per the table above), and recalculating the accuracy after each removal to find the number of attributes that achieves the best accuracy.
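
A sketch of that loop, where attributes_by_ig is a placeholder for the attribute names sorted by ascending information gain:

```python
# Attribute-elimination loop: repeatedly drop the lowest-IG attribute and
# re-measure 10-fold cross-validation accuracy with Random Forests.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv('dataset_train.tsv', sep='\t')
X, y = data.iloc[:, :-1], data.iloc[:, -1]      # label column assumed last
attributes_by_ig = list(X.columns)              # placeholder: sort by ascending IG

reduced = X.copy()
for removed, attribute in enumerate(attributes_by_ig[:-1], start=1):
    reduced = reduced.drop(columns=[attribute])  # keep at least one attribute
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    accuracy = cross_val_score(clf, reduced, y, cv=10).mean()
    print(f'{removed} attributes removed: accuracy = {accuracy:.5f}')
```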

(Plot: cross-validation accuracy versus number of attributes removed.)

As the plot above shows, we achieve the best accuracy after removing the eight attributes with the lowest information gain. We will use the twelve remaining attributes to predict on the final dataset.

Prediction

Finally, we perform the prediction using the Random Forests classifier, trained on dataset_train.tsv with only those attributes that yielded the highest prediction accuracy, as shown in the plot above. The results are exported to the Predictions.csv file.
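
A sketch of this final step; dataset_train.tsv and Predictions.csv come from this README, while dataset_test.tsv, the label column position, and best_attributes are stand-ins for whatever predict.py actually uses:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv('dataset_train.tsv', sep='\t')
test = pd.read_csv('dataset_test.tsv', sep='\t')  # hypothetical file name

X_train, y_train = train.iloc[:, :-1], train.iloc[:, -1]  # label assumed last
best_attributes = list(X_train.columns[:12])  # placeholder: twelve highest-IG attributes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train[best_attributes], y_train)

predictions = clf.predict(test[best_attributes])
pd.DataFrame({'Prediction': predictions}).to_csv('Predictions.csv', index=False)
```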

Usage

Run both scripts with a Python 3 interpreter:

python information_gain.py
python predict.py

On Windows, the py launcher can be used instead: py information_gain.py and py predict.py.