Micro-precision, micro-recall, and micro-F1 are equal in multiclass classification

I came across a TensorFlow implementation of precision, recall, and F1, and on closer inspection noticed that its micro-precision, micro-recall, and micro-F1 are all equal. I had never thought carefully about this before, but once I did, it checked out, so I searched on Google and found a blog post that explains the same fact and verifies it with examples.

The source code for the three metrics is as follows:

"""Multiclass"""

__author__ = "Guillaume Genthial"

import numpy as np
import tensorflow as tf
from tensorflow.python.ops.metrics_impl import _streaming_confusion_matrix


def precision(labels, predictions, num_classes, pos_indices=None,
              weights=None, average='micro'):
    """Multi-class precision metric for Tensorflow
    Parameters
    ----------
    labels : Tensor of tf.int32 or tf.int64
        The true labels
    predictions : Tensor of tf.int32 or tf.int64
        The predictions, same shape as labels
    num_classes : int
        The number of classes
    pos_indices : list of int, optional
        The indices of the positive classes, default is all
    weights : Tensor of tf.int32, optional
        Mask, must be of compatible shape with labels
    average : str, optional
        'micro': counts the total number of true positives, false
            positives, and false negatives for the classes in
            `pos_indices` and infer the metric from it.
        'macro': will compute the metric separately for each class in
            `pos_indices` and average. Will not account for class
            imbalance.
        'weighted': will compute the metric separately for each class in
            `pos_indices` and perform a weighted average by the total
            number of true labels for each class.
    Returns
    -------
    tuple of (scalar float Tensor, update_op)
    """
    cm, op = _streaming_confusion_matrix(
        labels, predictions, num_classes, weights)
    pr, _, _ = metrics_from_confusion_matrix(
        cm, pos_indices, average=average)
    op, _, _ = metrics_from_confusion_matrix(
        op, pos_indices, average=average)
    return (pr, op)


def recall(labels, predictions, num_classes, pos_indices=None, weights=None,
           average='micro'):
    """Multi-class recall metric for Tensorflow
    Parameters
    ----------
    labels : Tensor of tf.int32 or tf.int64
        The true labels
    predictions : Tensor of tf.int32 or tf.int64
        The predictions, same shape as labels
    num_classes : int
        The number of classes
    pos_indices : list of int, optional
        The indices of the positive classes, default is all
    weights : Tensor of tf.int32, optional
        Mask, must be of compatible shape with labels
    average : str, optional
        'micro': counts the total number of true positives, false
            positives, and false negatives for the classes in
            `pos_indices` and infer the metric from it.
        'macro': will compute the metric separately for each class in
            `pos_indices` and average. Will not account for class
            imbalance.
        'weighted': will compute the metric separately for each class in
            `pos_indices` and perform a weighted average by the total
            number of true labels for each class.
    Returns
    -------
    tuple of (scalar float Tensor, update_op)
    """
    cm, op = _streaming_confusion_matrix(
        labels, predictions, num_classes, weights)
    _, re, _ = metrics_from_confusion_matrix(
        cm, pos_indices, average=average)
    _, op, _ = metrics_from_confusion_matrix(
        op, pos_indices, average=average)
    return (re, op)


def f1(labels, predictions, num_classes, pos_indices=None, weights=None,
       average='micro'):
    return fbeta(labels, predictions, num_classes, pos_indices, weights,
                 average)


def fbeta(labels, predictions, num_classes, pos_indices=None, weights=None,
          average='micro', beta=1):
    """Multi-class fbeta metric for Tensorflow
    Parameters
    ----------
    labels : Tensor of tf.int32 or tf.int64
        The true labels
    predictions : Tensor of tf.int32 or tf.int64
        The predictions, same shape as labels
    num_classes : int
        The number of classes
    pos_indices : list of int, optional
        The indices of the positive classes, default is all
    weights : Tensor of tf.int32, optional
        Mask, must be of compatible shape with labels
    average : str, optional
        'micro': counts the total number of true positives, false
            positives, and false negatives for the classes in
            `pos_indices` and infer the metric from it.
        'macro': will compute the metric separately for each class in
            `pos_indices` and average. Will not account for class
            imbalance.
        'weighted': will compute the metric separately for each class in
            `pos_indices` and perform a weighted average by the total
            number of true labels for each class.
    beta : int, optional
        Weight of precision in harmonic mean
    Returns
    -------
    tuple of (scalar float Tensor, update_op)
    """
    cm, op = _streaming_confusion_matrix(
        labels, predictions, num_classes, weights)
    _, _, fbeta = metrics_from_confusion_matrix(
        cm, pos_indices, average=average, beta=beta)
    _, _, op = metrics_from_confusion_matrix(
        op, pos_indices, average=average, beta=beta)
    return (fbeta, op)


def safe_div(numerator, denominator):
    """Safe division, return 0 if denominator is 0"""
    numerator, denominator = tf.cast(numerator, tf.float32), tf.cast(denominator, tf.float32)
    zeros = tf.zeros_like(numerator, dtype=numerator.dtype)
    denominator_is_zero = tf.equal(denominator, zeros)
    return tf.where(denominator_is_zero, zeros, numerator / denominator)


def pr_re_fbeta(cm, pos_indices, beta=1):
    """Uses a confusion matrix to compute precision, recall and fbeta"""
    num_classes = cm.shape[0]
    neg_indices = [i for i in range(num_classes) if i not in pos_indices]
    cm_mask = np.ones([num_classes, num_classes])
    cm_mask[neg_indices, neg_indices] = 0
    diag_sum = tf.reduce_sum(tf.linalg.diag_part(cm * cm_mask))

    cm_mask = np.ones([num_classes, num_classes])
    cm_mask[:, neg_indices] = 0
    tot_pred = tf.reduce_sum(cm * cm_mask)

    cm_mask = np.ones([num_classes, num_classes])
    cm_mask[neg_indices, :] = 0
    tot_gold = tf.reduce_sum(cm * cm_mask)

    pr = safe_div(diag_sum, tot_pred)
    re = safe_div(diag_sum, tot_gold)
    fbeta = safe_div((1. + beta**2) * pr * re, beta**2 * pr + re)

    return pr, re, fbeta


def metrics_from_confusion_matrix(cm, pos_indices=None, average='micro',
                                  beta=1):
    """Precision, Recall and F1 from the confusion matrix
    Parameters
    ----------
    cm : tf.Tensor of type tf.int32, of shape (num_classes, num_classes)
        The streaming confusion matrix.
    pos_indices : list of int, optional
        The indices of the positive classes
    beta : int, optional
        Weight of precision in harmonic mean
    average : str, optional
        'micro', 'macro' or 'weighted'
    """
    num_classes = cm.shape[0]
    if pos_indices is None:
        pos_indices = [i for i in range(num_classes)]

    if average == 'micro':
        return pr_re_fbeta(cm, pos_indices, beta)
    elif average in {'macro', 'weighted'}:
        precisions, recalls, fbetas, n_golds = [], [], [], []
        for idx in pos_indices:
            pr, re, fbeta = pr_re_fbeta(cm, [idx], beta)
            precisions.append(pr)
            recalls.append(re)
            fbetas.append(fbeta)
            cm_mask = np.zeros([num_classes, num_classes])
            cm_mask[idx, :] = 1
            n_golds.append(tf.cast(tf.reduce_sum(cm * cm_mask), tf.float32))

        if average == 'macro':
            pr = tf.reduce_mean(precisions)
            re = tf.reduce_mean(recalls)
            fbeta = tf.reduce_mean(fbetas)
            return pr, re, fbeta
        if average == 'weighted':
            n_gold = tf.reduce_sum(n_golds)
            pr_sum = sum(p * n for p, n in zip(precisions, n_golds))
            pr = safe_div(pr_sum, n_gold)
            re_sum = sum(r * n for r, n in zip(recalls, n_golds))
            re = safe_div(re_sum, n_gold)
            fbeta_sum = sum(f * n for f, n in zip(fbetas, n_golds))
            fbeta = safe_div(fbeta_sum, n_gold)
            return pr, re, fbeta

    else:
        raise NotImplementedError()

The above implements the three metric functions. They rely on a helper not defined in this file, _streaming_confusion_matrix, whose source is:

def _streaming_confusion_matrix(labels, predictions, num_classes, weights=None):
  """Calculate a streaming confusion matrix.

  Calculates a confusion matrix. For estimation over a stream of data,
  the function creates an  `update_op` operation.

  Args:
    labels: A `Tensor` of ground truth labels with shape [batch size] and of
      type `int32` or `int64`. The tensor will be flattened if its rank > 1.
    predictions: A `Tensor` of prediction results for semantic labels, whose
      shape is [batch size] and type `int32` or `int64`. The tensor will be
      flattened if its rank > 1.
    num_classes: The possible number of labels the prediction task can
      have. This value must be provided, since a confusion matrix of
      dimension = [num_classes, num_classes] will be allocated.
    weights: Optional `Tensor` whose rank is either 0, or the same rank as
      `labels`, and must be broadcastable to `labels` (i.e., all dimensions must
      be either `1`, or the same as the corresponding `labels` dimension).

  Returns:
    total_cm: A `Tensor` representing the confusion matrix.
    update_op: An operation that increments the confusion matrix.
  """
  # Local variable to accumulate the predictions in the confusion matrix.
  total_cm = metric_variable(
      [num_classes, num_classes], dtypes.float64, name='total_confusion_matrix')

  # Cast the type to int64 required by confusion_matrix_ops.
  predictions = math_ops.cast(predictions, dtypes.int64)
  labels = math_ops.cast(labels, dtypes.int64)
  num_classes = math_ops.cast(num_classes, dtypes.int64)

  # Flatten the input if its rank > 1.
  if predictions.get_shape().ndims > 1:
    predictions = array_ops.reshape(predictions, [-1])

  if labels.get_shape().ndims > 1:
    labels = array_ops.reshape(labels, [-1])

  if (weights is not None) and (weights.get_shape().ndims > 1):
    weights = array_ops.reshape(weights, [-1])

  # Accumulate the prediction to current confusion matrix.
  current_cm = confusion_matrix.confusion_matrix(
      labels, predictions, num_classes, weights=weights, dtype=dtypes.float64)
  update_op = state_ops.assign_add(total_cm, current_cm)
  return total_cm, update_op
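The accumulation logic above can be sketched in plain NumPy (the function and variable names below are my own, not part of TensorFlow): a running matrix is kept, and each batch of (label, prediction) pairs is folded into it.

```python
import numpy as np

def streaming_confusion_matrix(num_classes):
    """Accumulate batches of (label, prediction) pairs into one matrix,
    mirroring the idea of _streaming_confusion_matrix."""
    total_cm = np.zeros((num_classes, num_classes), dtype=np.int64)

    def update(labels, predictions):
        # cm[i, j] counts samples with true class i predicted as class j;
        # np.add.at correctly handles repeated (i, j) pairs within a batch.
        np.add.at(total_cm, (labels, predictions), 1)
        return total_cm

    return total_cm, update

total_cm, update = streaming_confusion_matrix(3)
update(np.array([0, 1, 2]), np.array([0, 1, 1]))  # first batch
update(np.array([2, 2]), np.array([2, 0]))        # second batch
```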

The main question is how its two outputs differ; both are passed into metrics_from_confusion_matrix, which is why each of the three metric functions ultimately returns a pair. Looking at the source of state_ops.assign_add:

@tf_export(v1=["assign_add"])
def assign_add(ref, value, use_locking=None, name=None):
  """Update `ref` by adding `value` to it.

  This operation outputs "ref" after the update is done.
  This makes it easier to chain operations that need to use the reset value.
  Unlike `tf.math.add`, this op does not broadcast. `ref` and `value` must have
  the same shape.

  Args:
    ref: A mutable `Tensor`. Must be one of the following types: `float32`,
      `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
      `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`. Should be
      from a `Variable` node.
    value: A `Tensor`. Must have the same shape and dtype as `ref`. The value to
      be added to the variable.
    use_locking: An optional `bool`. Defaults to `False`. If True, the addition
      will be protected by a lock; otherwise the behavior is undefined, but may
      exhibit less contention.
    name: A name for the operation (optional).

  Returns:
    Same as "ref".  Returned as a convenience for operations that want
    to use the new value after the variable has been updated.
  """
  if ref.dtype._is_ref_dtype:
    return gen_state_ops.assign_add(
        ref, value, use_locking=use_locking, name=name)
  return ref.assign_add(value)

These two look like the same thing, and it is not immediately obvious what the point of returning a pair of equal values is. (This is the standard TF1 streaming-metric pattern: the first element only reads the currently accumulated value, while `update_op` additionally folds the current batch into the variable before returning the new total, so you run the update op once per batch and read the value tensor at evaluation time.) Setting that aside: we now have a confusion matrix of shape num_classes × num_classes.

The function to analyze next is metrics_from_confusion_matrix, which computes the three metrics according to the average mode, delegating the real work to pr_re_fbeta. Assume for now that pr_re_fbeta gives us the three metrics. In micro mode the result is simply pr_re_fbeta's output, called with a pos_indices argument that differs from the other two modes; we will look at the details shortly. In macro and weighted modes, a loop takes each class in turn as the positive class and computes the three per-class metrics. Macro then averages the per-class values directly over the num_classes classes. Note that this differs from the computation in Zhou Zhihua's book Machine Learning (《机器学习》), which first computes the num_classes precisions and recalls, averages each of them, and then computes the final F1 from the averaged precision and recall. Weighted mode additionally weights each class by its sample count, i.e. it takes a weighted average.
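The difference between the two averaging orders is easy to see numerically. Below is a minimal sketch with made-up per-class precision and recall values: `macro_f1` averages the per-class F1 scores (as this code does), while `book_f1` computes F1 from the averaged precision and recall (the book's approach); the two generally disagree.

```python
# Hypothetical per-class precision/recall for a 3-class problem.
precisions = [1.0, 0.5, 0.25]
recalls    = [0.5, 0.5, 1.0]

def f1(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# This code's macro-F1: average the per-class F1 scores.
macro_f1 = sum(f1(p, r) for p, r in zip(precisions, recalls)) / 3

# The book's variant: F1 of the averaged precision and recall.
p_avg = sum(precisions) / 3
r_avg = sum(recalls) / 3
book_f1 = f1(p_avg, r_avg)
```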

Now look at pr_re_fbeta, starting with how macro and weighted modes call it: cm is the confusion matrix obtained above, beta keeps its usual meaning, and pos_indices here is a single-element list containing the current positive class. Suppose cm is:

\[ \begin{matrix} 4 & 0 & 2 \\ 1 & 5 & 0 \\ 3 & 2 & 9 \end{matrix} \]

\(cm_{ij}\) is the number of samples whose true class is \(i\) and predicted class is \(j\). neg_indices holds the indices of the current negative classes; if the current positive class is 1, then classes 0 and 2 are negative, giving the following cm_mask:

\[ \begin{matrix} 0 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \end{matrix} \]

Multiplying cm and cm_mask element-wise, taking the diagonal, and summing yields exactly the true positives. The next cm_mask generated is:

\[ \begin{matrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{matrix} \]

Multiplying cm by this cm_mask element-wise and summing gives the number of samples predicted as class 1, i.e. true positives + false positives. The last cm_mask is:

\[ \begin{matrix} 0 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 0 \end{matrix} \]

Multiplying cm by this cm_mask element-wise and summing gives the number of samples whose true class is 1, i.e. true positives + false negatives.
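The three masked sums above can be checked directly in NumPy for the example matrix, with class 1 as the positive class (a standalone sketch, not the original TF code):

```python
import numpy as np

cm = np.array([[4, 0, 2],
               [1, 5, 0],
               [3, 2, 9]])
neg = [0, 2]  # negative classes when class 1 is positive

# True positives: zero the diagonal entries of the negative classes,
# then sum the diagonal of the masked matrix.
mask = np.ones_like(cm)
mask[neg, neg] = 0
tp = np.diag(cm * mask).sum()

# tp + fp: keep only the column of the positive class, i.e. everything
# predicted as class 1.
mask = np.ones_like(cm)
mask[:, neg] = 0
tp_fp = (cm * mask).sum()

# tp + fn: keep only the row of the positive class, i.e. everything
# whose true class is 1.
mask = np.ones_like(cm)
mask[neg, :] = 0
tp_fn = (cm * mask).sum()
```

This gives tp = 5, tp + fp = 7, and tp + fn = 6, so class 1's precision is 5/7 and its recall is 5/6.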

From these, the three metrics with class \(i\) as the positive class follow directly.

In micro mode, pos_indices contains every class index, so neg_indices is empty and none of the masks remove anything. The true-positive computation is easy to understand: it is the sum of the diagonal, i.e. the total number of correctly classified samples across all classes. What looks odd at first is the computation of true positives + false positives and of true positives + false negatives: both equal the sum of all entries of cm. Here is a qualitative way to see it: suppose \(cm_{21} = 3\). These samples have true class 2 and predicted class 1; for class 2 they are false negatives, while for class 1 they are false positives. In other words, every misclassified sample contributes equally to the false-positive total and the false-negative total, so the two totals must be equal. This forces precision and recall to be equal, and by the F1 formula, F1 then equals both.
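The equality can be verified on the example matrix (a plain-NumPy sketch):

```python
import numpy as np

cm = np.array([[4, 0, 2],
               [1, 5, 0],
               [3, 2, 9]])

# With every class positive, nothing is masked out: tp is the diagonal
# sum, and both tp + fp and tp + fn are the sum of all entries.
tp = np.diag(cm).sum()   # 4 + 5 + 9 = 18
total = cm.sum()         # 26

micro_precision = tp / total
micro_recall = tp / total
micro_f1 = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)
```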

Which of the three modes is better for imbalanced data? It depends on the problem. If we care more about the minority classes, macro is the better choice. In micro mode, even if a minority class has zero true positives, summing the true positives over all classes can still yield a large number and hence a decent-looking score. In weighted mode, minority classes carry small weights because of their small sample counts, so getting them wrong barely moves the metric. In macro mode, the averaging happens after each class's precision, recall, and F1 have been computed, so poor performance on a minority class directly drags down the final numbers.
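A small made-up example illustrates the point: with 98 majority samples all classified correctly and 2 minority samples all misclassified, micro-F1 still looks excellent while macro-F1 collapses (the numbers are hypothetical):

```python
import numpy as np

# Two classes: 98 majority samples all correct, 2 minority samples all wrong.
cm = np.array([[98, 0],
               [ 2, 0]])

def per_class_prf(cm, idx):
    """Precision, recall, F1 treating class idx as positive (0 if undefined)."""
    tp = cm[idx, idx]
    p = tp / cm[:, idx].sum() if cm[:, idx].sum() else 0.0
    r = tp / cm[idx, :].sum() if cm[idx, :].sum() else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Micro-F1 equals accuracy here: diagonal sum over total.
micro_f1 = np.diag(cm).sum() / cm.sum()
# Macro-F1 averages the per-class F1 scores, so class 1's zero hurts.
macro_f1 = sum(per_class_prf(cm, i)[2] for i in range(2)) / 2
```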

The blog post mentioned at the beginning provides a quantitative verification as well.

References:

  1. https://github.com/guillaumegenthial/tf_metrics
Original post: https://www.cnblogs.com/xxBryce/p/13093636.html