Micro Average vs Macro Average Performance in a Multiclass Classification Setting

Compiled and excerpted from https://datascience.stackexchange.com/questions/15989/micro-average-vs-macro-average-performance-in-a-multiclass-classification-settin/16001

Micro- and macro-averages (for whatever metric) will compute slightly different things, and thus their interpretation differs. A macro-average will compute the metric independently for each class and then take the average (hence treating all classes equally), whereas a micro-average will aggregate the contributions of all classes to compute the average metric. In a multi-class classification setup, the micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of the other classes).
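In practice, both averages can be computed with scikit-learn's precision_score; a minimal sketch, where the toy labels below are invented purely for illustration:

```python
# A minimal sketch, assuming scikit-learn is installed; the labels below are toy data.
from sklearn.metrics import precision_score

y_true = ["A", "A", "B", "B", "B", "C", "D"]   # ground-truth classes (invented)
y_pred = ["A", "B", "B", "B", "C", "C", "D"]   # predicted classes (invented)

# Macro: compute precision per class, then take the unweighted mean of those values.
print(precision_score(y_true, y_pred, average="macro"))   # ~0.79

# Micro: pool TP and FP over all classes, then compute a single precision value.
print(precision_score(y_true, y_pred, average="micro"))   # ~0.71
```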

To illustrate why, take for example precision: Pr = TP / (TP + FP). Let's imagine you have a one-vs-all multi-class classification system (there is only one correct class output per example) with four classes and the following numbers when tested:

  • Class A: 1 TP and 1 FP
  • Class B: 10 TP and 90 FP
  • Class C: 1 TP and 1 FP
  • Class D: 1 TP and 1 FP

You can easily see that Pr_A = Pr_C = Pr_D = 0.5, whereas Pr_B = 0.1.

  • A macro-average will then compute: Pr = (0.5 + 0.1 + 0.5 + 0.5) / 4 = 0.4
  • A micro-average will compute: Pr = (1 + 10 + 1 + 1) / (2 + 100 + 2 + 2) ≈ 0.123
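
The same two numbers can be reproduced directly from the TP/FP counts above; a minimal Python sketch using the counts from this example:

```python
# Reproduce the worked example above directly from per-class TP/FP counts.
tp = {"A": 1, "B": 10, "C": 1, "D": 1}
fp = {"A": 1, "B": 90, "C": 1, "D": 1}

# Per-class precision: Pr = TP / (TP + FP)
per_class = {c: tp[c] / (tp[c] + fp[c]) for c in tp}

# Macro-average: unweighted mean of the per-class precisions.
macro = sum(per_class.values()) / len(per_class)

# Micro-average: pool all TP and FP first, then compute one precision.
micro = sum(tp.values()) / (sum(tp.values()) + sum(fp.values()))

print(per_class)         # {'A': 0.5, 'B': 0.1, 'C': 0.5, 'D': 0.5}
print(round(macro, 3))   # 0.4
print(round(micro, 3))   # 0.123
```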

Macro precision asks whether as many classes as possible each achieve a high precision -- the emphasis is on whether every individual class is predicted accurately.

Micro precision is the proportion of correct predictions among all predictions pooled across these classes -- the emphasis is on the overall fraction of correctly predicted examples.

These are quite different values for precision. Intuitively, in the macro-average the "good" precision (0.5) of classes A, C and D contributes to maintaining a "decent" overall precision (0.4). While this is technically true (across classes, the average precision is 0.4), it is a bit misleading, since a large number of examples are not properly classified. These examples predominantly correspond to class B, which contributes only 1/4 towards the average in spite of constituting 94.3% of your test data. The micro-average adequately captures this class imbalance and brings the overall precision average down to 0.123 (more in line with the precision of the dominating class B (0.1)).

When class imbalance is known to be an issue but a macro-average is still desired, two measures can be taken:

1. Report the macro-average together with its standard deviation (for multiclass tasks with 3 or more classes).

2. Use a weighted macro-average (which accounts for the number of examples in each class).

For computational reasons, it may sometimes be more convenient to compute class averages and then macro-average them. If class imbalance is known to be an issue, there are several ways around it. One is to report not only the macro-average, but also its standard deviation (for 3 or more classes). Another is to compute a weighted macro-average, in which each class contribution to the average is weighted by the relative number of examples available for it. In the above scenario, we obtain:

1. Pr_macro-mean = 0.25·0.5 + 0.25·0.1 + 0.25·0.5 + 0.25·0.5 = 0.4

    Pr_macro-stdev = 0.173

2. Pr_macro-weighted = (2/106)·0.5 + (100/106)·0.1 + (2/106)·0.5 + (2/106)·0.5
                     = 0.0189·0.5 + 0.943·0.1 + 0.0189·0.5 + 0.0189·0.5 = 0.009 + 0.094 + 0.009 + 0.009 ≈ 0.123

The large standard deviation (0.173) already tells us that the 0.4 average does not stem from a uniform precision among classes, but it might just be easier to compute the weighted macro-average, which in essence is another way of computing the micro-average.
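
Both remedies can be computed from the same counts; a minimal sketch, where each class is weighted by TP + FP, i.e. the number of test examples assigned to that class in this one-vs-all example:

```python
# Macro-average with its (population) standard deviation, and a weighted macro-average,
# using the same TP/FP counts as above.
tp = {"A": 1, "B": 10, "C": 1, "D": 1}
fp = {"A": 1, "B": 90, "C": 1, "D": 1}

per_class = {c: tp[c] / (tp[c] + fp[c]) for c in tp}
support   = {c: tp[c] + fp[c] for c in tp}   # examples assigned to each class (2, 100, 2, 2)
total     = sum(support.values())            # 106

macro_mean  = sum(per_class.values()) / len(per_class)
macro_stdev = (sum((p - macro_mean) ** 2 for p in per_class.values()) / len(per_class)) ** 0.5

# Weight each class by its relative number of examples; with these counts this
# reproduces the micro-average.
macro_weighted = sum(support[c] / total * per_class[c] for c in per_class)

print(round(macro_mean, 3), round(macro_stdev, 3), round(macro_weighted, 3))  # 0.4 0.173 0.123
```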

Original post: https://www.cnblogs.com/shiyublog/p/9798870.html