Comparison of evaluation metrics in classification applications with imbalanced datasets

M Fatourechi, RK Ward, SG Mason, J Huggins, A Schlögl, GE Birch - 2008 Seventh International Conference on Machine Learning and …, 2008 - ieeexplore.ieee.org
A new framework is proposed for comparing evaluation metrics in classification applications with imbalanced datasets (i.e., where the probability of one class vastly exceeds that of the others). For model selection as well as for testing the performance of a classifier, this framework finds the most suitable evaluation metric among a number of candidate metrics. We apply this framework to compare two metrics: overall accuracy and the Kappa coefficient. Simulation results demonstrate that the Kappa coefficient is more suitable.
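The abstract's core point can be seen with a small numeric illustration. The following is a minimal sketch, not taken from the paper: it assumes a hypothetical 95/5 class split and a trivial classifier that always predicts the majority class, and computes Cohen's Kappa from scratch as (p_o - p_e) / (1 - p_e), where p_o is the observed agreement (overall accuracy) and p_e is the agreement expected by chance from the label marginals.

```python
# Minimal sketch (illustrative assumptions, not the paper's method):
# why overall accuracy can mislead on imbalanced data while Kappa does not.
from collections import Counter

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def kappa(y_true, y_pred):
    """Cohen's Kappa: chance-corrected agreement (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)  # observed agreement
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    # expected agreement of a chance predictor with the same label marginals
    p_e = sum(true_counts[c] * pred_counts[c] for c in true_counts) / n ** 2
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 0.0

# Assumed example: 95% negatives, 5% positives; classifier always says 0.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))  # 0.95 -- looks excellent
print(kappa(y_true, y_pred))     # 0.0  -- no skill beyond chance
```

Here overall accuracy rewards the degenerate majority-class predictor with 0.95, while Kappa corrects for the 0.95 chance agreement and drops to 0, matching the paper's conclusion that Kappa is the more suitable metric under class imbalance.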