Performance Metrics for Binary Classification Problems Cheatsheet

This article is merely a quick recap of machine learning knowledge and is not meant to serve as a tutorial.
All rights reserved by Diane (Qingyun Hu).

Prerequisites

TP: True Positive (actual positive, predicted positive)
FP: False Positive (actual negative, predicted positive)
TN: True Negative (actual negative, predicted negative)
FN: False Negative (actual positive, predicted negative)
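
As a quick illustration (not part of the original cheatsheet), the four counts can be tallied directly from a pair of label lists; y_true and y_pred below are hypothetical example arrays.

```python
# Minimal sketch, assuming binary labels encoded as 0/1.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical predicted labels

TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
print(TP, FP, TN, FN)  # 3 1 3 1
```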

Recall

= Sensitivity = TPR (True Positive Rate)
\begin{equation}
Recall = \frac{TP}{TP + FN}
\end{equation}
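
A one-line check of the formula, assuming scikit-learn is available; the arrays are the same hypothetical ones as in the Prerequisites sketch.

```python
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Recall = TP / (TP + FN) = 3 / 4
print(recall_score(y_true, y_pred))  # 0.75
```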

Precision

\begin{equation}
Precision = \frac{TP}{TP + FP}
\end{equation}
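
The same kind of sanity check, again assuming scikit-learn and the hypothetical arrays above.

```python
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Precision = TP / (TP + FP) = 3 / 4
print(precision_score(y_true, y_pred))  # 0.75
```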

Accuracy

\begin{equation}
Accuracy = \frac{TP + TN}{TP + FP + TN + FN}
\end{equation}
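
A minimal check with scikit-learn, reusing the hypothetical arrays.

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy = (TP + TN) / (TP + FP + TN + FN) = 6 / 8
print(accuracy_score(y_true, y_pred))  # 0.75
```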

F1 Score

\begin{equation}
F1\ Score = \frac{2 \cdot Recall \cdot Precision}{Recall + Precision}
\end{equation}
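
With recall and precision both 0.75 in the hypothetical example, the harmonic mean is also 0.75; scikit-learn's f1_score confirms this.

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# F1 = 2 * 0.75 * 0.75 / (0.75 + 0.75) = 0.75
print(f1_score(y_true, y_pred))  # 0.75
```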

Specificity

\begin{equation}
Specificity = \frac{TN}{TN + FP}
\end{equation}
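
scikit-learn has no dedicated specificity function, but it is easy to compute from the confusion matrix; the sketch below assumes the same hypothetical arrays.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For 0/1 labels, confusion_matrix returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn / (tn + fp))  # specificity = 3 / 4 = 0.75
```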

FPR (False Positive Rate)

= 1 - Specificity
\begin{equation}
FPR = \frac{FP}{TN + FP}
\end{equation}
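
The same confusion-matrix counts give the FPR directly (hypothetical arrays as above).

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(fp / (tn + fp))  # FPR = 1 / 4 = 0.25 = 1 - specificity
```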

ROC Curve

x-axis: FPR ( = 1 - Specificity )
y-axis: TPR ( = Recall )
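
The ROC curve is traced by sweeping the decision threshold over predicted scores, so it needs probabilities rather than hard labels. A sketch with scikit-learn's roc_curve, where y_score is a hypothetical array of positive-class probabilities:

```python
from sklearn.metrics import roc_curve

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical ground truth
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]  # hypothetical probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
# Plotting tpr (y-axis) against fpr (x-axis) draws the ROC curve.
```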

AUC (Area under the ROC Curve)

The larger the AUC, the better the model ranks positives above negatives; an AUC of 0.5 is equivalent to random guessing, while 1.0 means a perfect ranking.
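
A quick check with scikit-learn's roc_auc_score, using the same hypothetical scores as in the ROC curve sketch.

```python
from sklearn.metrics import roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]

# Fraction of (positive, negative) pairs ranked correctly: 15 / 16
print(roc_auc_score(y_true, y_score))  # 0.9375
```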

Original post: https://www.cnblogs.com/DianeSoHungry/p/11288143.html