The generalized Rayleigh quotient in Fisher's linear discriminant

  • tex: J(\mathbf{w})=\frac{\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{B}}\mathbf{w}}{\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{W}}\mathbf{w}} ..... Eq(1)

Here

  • tex: \mathbf{S}_{\mathrm{B}}=(\mathbf{m}_{2}-\mathbf{m}_{1})(\mathbf{m}_{2}-\mathbf{m}_{1})^{\mathrm{T}} ..... Eq(2)

is the between-class covariance matrix.

And

  • tex: \mathbf{S}_{\mathrm{W}}=\sum_{n\in\mathcal{C}_{1}}(\mathbf{x}_{n}-\mathbf{m}_{1})(\mathbf{x}_{n}-\mathbf{m}_{1})^{\mathrm{T}}+\sum_{n\in\mathcal{C}_{2}}(\mathbf{x}_{n}-\mathbf{m}_{2})(\mathbf{x}_{n}-\mathbf{m}_{2})^{\mathrm{T}} ..... Eq(3)

is the total within-class covariance matrix.
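Eqs (2) and (3) can be computed directly from data. Below is a minimal numpy sketch on synthetic two-class data (the data and variable names are my own, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic 2-D classes (illustrative; not data from the text)
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))
X2 = rng.normal(loc=[3.0, 1.0], scale=1.0, size=(60, 2))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Eq(2): between-class covariance, a rank-one outer product
d = (m2 - m1).reshape(-1, 1)
S_B = d @ d.T

# Eq(3): total within-class covariance, summed over both classes
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
```

Note that S_B has rank one by construction, which is what makes the closed-form solution in Eq(5) possible.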

Differentiating Eq(1) with respect to tex: \mathbf{w}, we find that tex: J(\mathbf{w}) is maximized when

  • tex: \left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{B}}\mathbf{w}\right)\mathbf{S}_{\mathrm{W}}\mathbf{w}=\left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{W}}\mathbf{w}\right)\mathbf{S}_{\mathrm{B}}\mathbf{w} ..... Eq(4)
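
The differentiation step can be made explicit: applying the quotient rule to Eq(1) and setting the gradient to zero gives

  • tex: \frac{\partial J}{\partial\mathbf{w}}=\frac{2\left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{W}}\mathbf{w}\right)\mathbf{S}_{\mathrm{B}}\mathbf{w}-2\left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{B}}\mathbf{w}\right)\mathbf{S}_{\mathrm{W}}\mathbf{w}}{\left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{W}}\mathbf{w}\right)^{2}}=0

and clearing the denominator recovers Eq(4).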

From Eq(2), we see that tex: \mathbf{S}_{\mathrm{B}}\mathbf{w} is always in the direction of tex: (\mathbf{m}_{2}-\mathbf{m}_{1}). Furthermore, we do not care about the magnitude of tex: \mathbf{w}, only its direction, and so we can drop the scalar factors tex: \left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{B}}\mathbf{w}\right) and tex: \left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_{\mathrm{W}}\mathbf{w}\right). Multiplying both sides of Eq(4) by tex: \mathbf{S}_{\mathrm{W}}^{-1}, we obtain

  • tex: \mathbf{w}\propto\mathbf{S}_{\mathrm{W}}^{-1}(\mathbf{m}_{2}-\mathbf{m}_{1}) ..... Eq(5)

Notice that if the within-class covariance matrix is isotropic, so that tex: S_{W} is proportional to the identity matrix, then tex: \mathbf{w} is proportional to the difference between the class means.
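Eq(5) and the isotropic remark can be checked numerically. A minimal sketch on synthetic data (names and data are my own assumptions; in practice one solves the linear system rather than inverting S_W):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D classes (illustrative; not data from the text)
X1 = rng.normal([0.0, 0.0], 1.0, size=(40, 2))
X2 = rng.normal([2.0, 2.0], 1.0, size=(40, 2))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Eq(5): Fisher direction; solve S_W w = (m2 - m1) instead of forming S_W^{-1}
w = np.linalg.solve(S_W, m2 - m1)
w /= np.linalg.norm(w)

# Eq(4) at the optimum: S_W w is parallel to S_B w,
# and S_B w points along (m2 - m1) by Eq(2)
lhs = S_W @ w
rhs = (m2 - m1) * ((m2 - m1) @ w)

# Isotropic case: if S_W were proportional to the identity,
# w would reduce to the (normalized) difference of the class means
w_iso = (m2 - m1) / np.linalg.norm(m2 - m1)
```

With strongly anisotropic within-class covariance, w and w_iso differ noticeably; the whitening by S_W^{-1} is what distinguishes Fisher's criterion from simply projecting onto the difference of means.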

Original source: https://www.cnblogs.com/macula7/p/1960755.html