"Gradient Derivation and Code Verification for Neural Networks", Mathematical Foundations: Matrix Differentials and Derivatives

This is the first chapter of the series "Gradient Derivation and Code Verification for Neural Networks". For more related material, see the series introduction.



1.1 Mathematical notation

Below is the mathematical notation used consistently throughout this series:

  • Scalars are written as lowercase letters, e.g. $x, y$.
  • Vectors are written as bold lowercase letters, e.g. $\boldsymbol{x}, \boldsymbol{y}$. Each element of a vector is a scalar and is written with a subscript, e.g. $x_{1}, x_{2}$. Note that, by the usual mathematical convention, vectors are column vectors by default; a row vector is indicated with a transpose, e.g. $\boldsymbol{x}^{T}$.
  • Matrices are written as bold uppercase letters, e.g. $\boldsymbol{A}, \boldsymbol{B}$; their elements are written $a_{ij}, b_{ij}$.

 


1.2 Definition and layout of matrix derivatives

Depending on whether the variable being differentiated and the variable we differentiate with respect to are scalars, vectors, or matrices, there are 9 possible kinds of matrix derivatives, laid out as follows:

| w.r.t. \ of | scalar $y$ | vector $\boldsymbol{y}$ | matrix $\boldsymbol{Y}$ |
| --- | --- | --- | --- |
| scalar $x$ | $\frac{\partial y}{\partial x}$ | $\frac{\partial \boldsymbol{y}}{\partial x}$ | $\frac{\partial \boldsymbol{Y}}{\partial x}$ |
| vector $\boldsymbol{x}$ | $\frac{\partial y}{\partial \boldsymbol{x}}$ | $\frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}}$ | $\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{x}}$ (never needed in this series) |
| matrix $\boldsymbol{X}$ | $\frac{\partial y}{\partial \boldsymbol{X}}$ | $\frac{\partial \boldsymbol{y}}{\partial \boldsymbol{X}}$ (used, but not studied in depth) | $\frac{\partial \boldsymbol{Y}}{\partial \boldsymbol{X}}$ (used, but not studied in depth) |

(Rows list the variable we differentiate with respect to; columns list the variable being differentiated.)

These 9 kinds of derivatives are introduced below in order of conceptual difficulty. Note that not all of them are covered, since some are never needed in this series' derivations and I have not studied them closely either.

------------- Easy --------------

  • Scalar-by-scalar derivatives: nothing to say here, skip...
  • Vector/matrix-by-scalar derivatives: defined by differentiating every element of the vector/matrix with respect to that scalar
    • Example:

$\boldsymbol{Y}=\begin{bmatrix} x & 2x \\ x^{2} & 2 \end{bmatrix}$ differentiated with respect to $x$ gives:

$\frac{\partial \boldsymbol{Y}}{\partial x} = \begin{bmatrix} \frac{\partial y_{11}}{\partial x} & \frac{\partial y_{12}}{\partial x} \\ \frac{\partial y_{21}}{\partial x} & \frac{\partial y_{22}}{\partial x} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2x & 0 \end{bmatrix}$

The derivative of a vector with respect to a scalar works the same way:

$\boldsymbol{y} = \begin{bmatrix} x \\ 2x \\ x^{2} \end{bmatrix}$ differentiated with respect to $x$ gives:

$\frac{\partial\boldsymbol{y}}{\partial x} = \begin{bmatrix} \frac{\partial y_{1}}{\partial x} \\ \frac{\partial y_{2}}{\partial x} \\ \frac{\partial y_{3}}{\partial x} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 2x \end{bmatrix}$

 

$\boldsymbol{y}^{T} = \lbrack x,\ 2x,\ x^{2} \rbrack$ differentiated with respect to $x$ gives:

$\frac{\partial\boldsymbol{y}^{T}}{\partial x} = \lbrack \frac{\partial y_{1}}{\partial x},\ \frac{\partial y_{2}}{\partial x},\ \frac{\partial y_{3}}{\partial x} \rbrack = \lbrack 1,\ 2,\ 2x \rbrack$

To sum up: the result has the same shape as the dependent variable (the numerator), which is the so-called numerator layout.

  • Scalar-by-vector/matrix derivatives: defined by differentiating the scalar with respect to every element of the vector/matrix
    • Example:

$y = x_{1} + 2x_{2} + x_{3}^{2} + 1$ differentiated with respect to $\boldsymbol{x} = \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{bmatrix}$ gives:

$\frac{\partial y}{\partial\boldsymbol{x}} = \begin{bmatrix} \frac{\partial y}{\partial x_{1}} \\ \frac{\partial y}{\partial x_{2}} \\ \frac{\partial y}{\partial x_{3}} \\ \frac{\partial y}{\partial x_{4}} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 2x_{3} \\ 0 \end{bmatrix}$

 

$y = x_{1} + 2x_{2} + x_{3}^{2} + 1$ differentiated with respect to $\boldsymbol{x}^{T} = \lbrack x_{1},x_{2},x_{3},x_{4} \rbrack$ gives:

$\frac{\partial y}{\partial\boldsymbol{x}^{T}} = \lbrack \frac{\partial y}{\partial x_{1}},\frac{\partial y}{\partial x_{2}},\frac{\partial y}{\partial x_{3}},\frac{\partial y}{\partial x_{4}} \rbrack = \lbrack 1,2,2x_{3},0 \rbrack$

 

$y = x_{1} + 2x_{2} + x_{3}^{2} + 1$ differentiated with respect to $\boldsymbol{X} = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}$ (identifying $x_{11},x_{12},x_{21},x_{22}$ with $x_{1},x_{2},x_{3},x_{4}$ respectively) gives:

$\frac{\partial y}{\partial\boldsymbol{X}} = \begin{bmatrix} \frac{\partial y}{\partial x_{11}} & \frac{\partial y}{\partial x_{12}} \\ \frac{\partial y}{\partial x_{21}} & \frac{\partial y}{\partial x_{22}} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2x_{3} & 0 \end{bmatrix}$

To sum up: the result has the same shape as the independent variable (the denominator), which is the so-called denominator layout. Matrix differentiation, at its core, is nothing more than element-by-element scalar differentiation with the results arranged back into a vector or matrix.

-------------- Slightly harder --------------

  • Next comes the slightly more involved vector-by-vector derivative. In all the cases above, either the independent variable or the dependent variable was a scalar, so both the computation and the layout of the result were fairly obvious. The vector-by-vector derivative is generally defined as follows:

Let $\boldsymbol{y} = \begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \end{bmatrix}$ and $\boldsymbol{x} = \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}$; then $\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} = \begin{bmatrix} \frac{\partial y_{1}}{\partial x_{1}} & \frac{\partial y_{1}}{\partial x_{2}} \\ \frac{\partial y_{2}}{\partial x_{1}} & \frac{\partial y_{2}}{\partial x_{2}} \\ \frac{\partial y_{3}}{\partial x_{1}} & \frac{\partial y_{3}}{\partial x_{2}} \end{bmatrix}$

The matrix obtained this way is called the Jacobian matrix (important). Its first dimension (rows) follows the numerator and its second dimension (columns) follows the denominator. Intuitively, each component of the numerator is expanded horizontally into a row of partial derivatives with respect to the components of the denominator.

    • Example:

$\boldsymbol{y} = \begin{bmatrix} x_{1} + x_{2} \\ x_{1} \\ x_{1} + x_{2}^{2} \end{bmatrix}$ differentiated with respect to $\boldsymbol{x} = \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}$ gives:

$\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} = \begin{bmatrix} \frac{\partial y_{1}}{\partial x_{1}} & \frac{\partial y_{1}}{\partial x_{2}} \\ \frac{\partial y_{2}}{\partial x_{1}} & \frac{\partial y_{2}}{\partial x_{2}} \\ \frac{\partial y_{3}}{\partial x_{1}} & \frac{\partial y_{3}}{\partial x_{2}} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 1 & 2x_{2} \end{bmatrix}$
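As a quick numerical sanity check (my own addition, not part of the original derivation), the following NumPy sketch compares this hand-computed Jacobian with a central finite-difference approximation at a random point; the names `f`, `jacobian_analytic`, and `jacobian_numeric` are just illustrative:

```python
import numpy as np

def f(x):
    # y = [x1 + x2, x1, x1 + x2^2], written for a length-2 input vector
    x1, x2 = x
    return np.array([x1 + x2, x1, x1 + x2**2])

def jacobian_analytic(x):
    # hand-derived Jacobian from the text: rows follow y, columns follow x
    x1, x2 = x
    return np.array([[1.0, 1.0],
                     [1.0, 0.0],
                     [1.0, 2.0 * x2]])

def jacobian_numeric(f, x, eps=1e-6):
    # central finite differences, one column per input coordinate
    cols = []
    for j in range(len(x)):
        e = np.zeros_like(x); e[j] = eps
        cols.append((f(x + e) - f(x - e)) / (2 * eps))
    return np.stack(cols, axis=1)

x = np.random.randn(2)
print(np.allclose(jacobian_analytic(x), jacobian_numeric(f, x), atol=1e-5))  # True
```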


1.3 The advantage of matrix differentiation

The reason we bother with matrix differentials at all is not idle curiosity: it makes it much harder to make mistakes when analyzing the large number of parameters in a neural network.

An example:

$\boldsymbol{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, $\boldsymbol{x} = \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}$; find the derivative of $\boldsymbol{y} = \boldsymbol{A}\boldsymbol{x}$ with respect to $\boldsymbol{x}$.

 

If we solve this from the definition, we first compute the vector $\boldsymbol{y} = \begin{bmatrix} x_{1} + 2x_{2} \\ 3x_{1} + 4x_{2} \end{bmatrix}$ and then, using the vector-by-vector definition from Section 1.2, compute $\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} = \begin{bmatrix} \frac{\partial y_{1}}{\partial x_{1}} & \frac{\partial y_{1}}{\partial x_{2}} \\ \frac{\partial y_{2}}{\partial x_{1}} & \frac{\partial y_{2}}{\partial x_{2}} \end{bmatrix}$, which gives $\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \boldsymbol{A}$.

Another example:

$\boldsymbol{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, $\boldsymbol{x} = \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}$; find the derivative of $y = \boldsymbol{x}^{T}\boldsymbol{A}\boldsymbol{x}$ with respect to $\boldsymbol{x}$.

If we solve this from the definition, we first compute the scalar $y = x_{1}^{2} + 5x_{1}x_{2} + 4x_{2}^{2}$ and then, using the scalar-by-vector definition from Section 1.2, compute $\frac{\partial y}{\partial\boldsymbol{x}} = \begin{bmatrix} \frac{\partial y}{\partial x_{1}} \\ \frac{\partial y}{\partial x_{2}} \end{bmatrix} = \begin{bmatrix} 2x_{1} + 5x_{2} \\ 5x_{1} + 8x_{2} \end{bmatrix}$

 

In fact, $\begin{bmatrix} 2x_{1} + 5x_{2} \\ 5x_{1} + 8x_{2} \end{bmatrix} = \begin{bmatrix} x_{1} + 2x_{2} \\ 3x_{1} + 4x_{2} \end{bmatrix} + \begin{bmatrix} x_{1} + 3x_{2} \\ 2x_{1} + 4x_{2} \end{bmatrix} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{A}^{T}\boldsymbol{x}$. The second equality is no coincidence; it is exactly the conclusion one obtains directly via matrix differentiation (we will see how later).

 

From this we can see: for the first example we can perhaps still write down the result quickly and correctly from the definition, but for the second example, working from the definition, computing $y$ term by term and then differentiating with respect to $\boldsymbol{x}$, becomes tedious and error-prone. If instead we use matrix differentiation, i.e. treat the differentiation at the level of whole vectors and matrices, the result can be written directly as a combination of vectors and matrices, which is both efficient and compact.
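As an aside (my own sketch, assuming only NumPy is available), the quadratic-form result $\frac{\partial(\boldsymbol{x}^{T}\boldsymbol{A}\boldsymbol{x})}{\partial\boldsymbol{x}} = (\boldsymbol{A} + \boldsymbol{A}^{T})\boldsymbol{x}$ can be checked numerically like this:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def y(x):
    # scalar quadratic form y = x^T A x
    return x @ A @ x

def grad_analytic(x):
    # result claimed by matrix differentiation: (A + A^T) x
    return (A + A.T) @ x

def grad_numeric(x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (y(x + e) - y(x - e)) / (2 * eps)
    return g

x = np.random.randn(2)
print(np.allclose(grad_analytic(x), grad_numeric(x), atol=1e-5))  # True
```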


1.4 Matrix differentials and matrix derivatives

In high school we learned that the differential of a single-variable function $f(x)$ relates to its derivative as:

$df = f^{'}(x)dx$

In college calculus we further learned that for a multivariate function $f(x_{1},x_{2},x_{3})$ the relation is:

$df = \frac{\partial f}{\partial x_{1}}dx_{1} + \frac{\partial f}{\partial x_{2}}dx_{2} + \frac{\partial f}{\partial x_{3}}dx_{3}$ (1.1)

This is the total differential formula (remember it?).

Looking at the formula above, we can see that:

$df = \sum\limits_{i = 1}^{n}\frac{\partial f}{\partial x_{i}}dx_{i} = \left( \frac{\partial f}{\partial\boldsymbol{x}} \right)^{T}d\boldsymbol{x}$ (1.2)

The first equality is the total differential formula, while the second expresses the connection between the gradient (the vector of partial derivatives) and the differential: formally, it is the inner product of $\frac{\partial f}{\partial\boldsymbol{x}}$ and $d\boldsymbol{x}$.

Inspired by this, we can extend the idea to matrices:

$df = \sum\limits_{i = 1}^{m}\sum\limits_{j = 1}^{n}\frac{\partial f}{\partial x_{ij}}dx_{ij} = tr\left( \left( \frac{\partial f}{\partial\boldsymbol{X}} \right)^{T}d\boldsymbol{X} \right)$ (1.3)

 

The second equality uses a property of the matrix trace (the trace equals the sum of the main-diagonal elements), namely:

$tr\left( \boldsymbol{A}^{T}\boldsymbol{B} \right) = \sum\limits_{i,j}a_{ij}b_{ij}$

The left-hand side looks a bit unwieldy, but the meaning of the right-hand side is perfectly clear: multiply the two matrices element by element and add everything up, analogous to the inner product of vectors. This is called the inner product of matrices.

 

An example:

Let $f(x_{11},x_{12},x_{21},x_{22})$ be a multivariate function. By the total differential formula, $df = \sum\limits_{i = 1}^{2}\sum\limits_{j = 1}^{2}\frac{\partial f}{\partial x_{ij}}dx_{ij}$. Now arrange these 4 variables into a matrix $\boldsymbol{X} = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}$; by the scalar-by-matrix definition given earlier, we have:

$\frac{\partial f}{\partial\boldsymbol{X}} = \begin{bmatrix} \frac{\partial f}{\partial x_{11}} & \frac{\partial f}{\partial x_{12}} \\ \frac{\partial f}{\partial x_{21}} & \frac{\partial f}{\partial x_{22}} \end{bmatrix}$

$\left( \frac{\partial f}{\partial\boldsymbol{X}} \right)^{T}d\boldsymbol{X} = \begin{bmatrix} \frac{\partial f}{\partial x_{11}} & \frac{\partial f}{\partial x_{21}} \\ \frac{\partial f}{\partial x_{12}} & \frac{\partial f}{\partial x_{22}} \end{bmatrix}\begin{bmatrix} dx_{11} & dx_{12} \\ dx_{21} & dx_{22} \end{bmatrix} = \begin{bmatrix} \frac{\partial f}{\partial x_{11}}dx_{11} + \frac{\partial f}{\partial x_{21}}dx_{21} & \frac{\partial f}{\partial x_{11}}dx_{12} + \frac{\partial f}{\partial x_{21}}dx_{22} \\ \frac{\partial f}{\partial x_{12}}dx_{11} + \frac{\partial f}{\partial x_{22}}dx_{21} & \frac{\partial f}{\partial x_{12}}dx_{12} + \frac{\partial f}{\partial x_{22}}dx_{22} \end{bmatrix}$

Since $tr(\cdot)$ sums the diagonal elements,

$df = \frac{\partial f}{\partial x_{11}}dx_{11} + \frac{\partial f}{\partial x_{21}}dx_{21} + \frac{\partial f}{\partial x_{12}}dx_{12} + \frac{\partial f}{\partial x_{22}}dx_{22} = tr\left( \left( \frac{\partial f}{\partial\boldsymbol{X}} \right)^{T}d\boldsymbol{X} \right)$ holds.
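Here is a small numerical check of formula (1.3) on a concrete $f$ of my own choosing (not from the original text): take $f(\boldsymbol{X}) = \sum_{i,j} x_{ij}^{2}$, whose element-wise gradient is simply $2\boldsymbol{X}$, and compare the actual change of $f$ against the trace expression for a tiny perturbation:

```python
import numpy as np

def f(X):
    # f(X) = sum of squares of all entries; its element-wise gradient is 2X
    return np.sum(X ** 2)

X = np.random.randn(2, 2)
dX = 1e-6 * np.random.randn(2, 2)     # a small perturbation standing in for dX

df_actual = f(X + dX) - f(X)          # actual change in f
df_trace = np.trace((2 * X).T @ dX)   # tr((df/dX)^T dX) from formula (1.3)

print(np.isclose(df_actual, df_trace, atol=1e-9))  # True (to first order)
```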


1.5 Properties of the matrix differential

Before discussing how to use matrix differentials to compute derivatives, let us list the properties of the matrix differential; they are all fairly intuitive:

  • Sum and difference: $d(\boldsymbol{X} \pm \boldsymbol{Y}) = d\boldsymbol{X} \pm d\boldsymbol{Y}$
  • Product: $d(\boldsymbol{X}\boldsymbol{Y}) = (d\boldsymbol{X})\boldsymbol{Y} + \boldsymbol{X}(d\boldsymbol{Y})$
  • Transpose: $d(\boldsymbol{X}^{T}) = (d\boldsymbol{X})^{T}$
  • Trace: $d\,tr(\boldsymbol{X}) = tr(d\boldsymbol{X})$
  • Hadamard (element-wise) product: $d(\boldsymbol{X} \odot \boldsymbol{Y}) = d\boldsymbol{X} \odot \boldsymbol{Y} + \boldsymbol{X} \odot d\boldsymbol{Y}$; note that $\odot$ has lower precedence than ordinary matrix multiplication
  • Element-wise function: $d\sigma(\boldsymbol{X}) = \sigma^{'}(\boldsymbol{X}) \odot d\boldsymbol{X} = diag(\sigma^{'}(\boldsymbol{X}))d\boldsymbol{X}$ (the $diag(\cdot)$ form applies when $\boldsymbol{X}$ is a vector)
  • Inverse: $d\boldsymbol{X}^{-1} = -\boldsymbol{X}^{-1}(d\boldsymbol{X})\boldsymbol{X}^{-1}$
  • Determinant (not used here): $d\left| \boldsymbol{X} \right| = \left| \boldsymbol{X} \right|tr(\boldsymbol{X}^{-1}d\boldsymbol{X})$

Here $\sigma(\boldsymbol{X})$ means applying the function $\sigma$ to every element of $\boldsymbol{X}$, i.e. $\sigma(\boldsymbol{X}) = \begin{bmatrix} \sigma(x_{11}) & \cdots & \sigma(x_{1n}) \\ \vdots & \ddots & \vdots \\ \sigma(x_{m1}) & \cdots & \sigma(x_{mn}) \end{bmatrix}$, which is exactly what happens when data passes through an activation function in a neural network.

 

For example, with $\boldsymbol{X} = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}$, $d\sin(\boldsymbol{X}) = \begin{bmatrix} \cos x_{11}\,dx_{11} & \cos x_{12}\,dx_{12} \\ \cos x_{21}\,dx_{21} & \cos x_{22}\,dx_{22} \end{bmatrix} = \cos(\boldsymbol{X}) \odot d\boldsymbol{X}$
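A quick numerical check of this element-wise rule (my own sketch; NumPy's element-wise `*` plays the role of $\odot$):

```python
import numpy as np

X = np.random.randn(2, 2)
dX = 1e-6 * np.random.randn(2, 2)   # a small perturbation standing in for dX

lhs = np.sin(X + dX) - np.sin(X)    # actual element-wise change of sin(X)
rhs = np.cos(X) * dX                # cos(X) element-wise-times dX, per the rule above

print(np.allclose(lhs, rhs, atol=1e-10))  # True (to first order)
```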

If you are unsure about any of the other properties, it is easy to verify them with a small example of your own.


1.6 A recipe for scalar-by-matrix/vector derivatives: the trace trick

We want to use the connection between the differential of a scalar (the loss) and its derivative with respect to a vector (the output of some layer) or a matrix (the parameters of some layer), i.e. formulas (1.2) and (1.3), to compute scalar-by-vector/matrix derivatives. If the differential of a scalar can be written in that form, then the derivative is exactly the factor sitting under the transpose on the right-hand side, that is, the $\frac{\partial f}{\partial\boldsymbol{x}}$ (resp. $\frac{\partial f}{\partial\boldsymbol{X}}$) appearing in $df = \sum\limits_{i = 1}^{n}\frac{\partial f}{\partial x_{i}}dx_{i} = \left( \frac{\partial f}{\partial\boldsymbol{x}} \right)^{T}d\boldsymbol{x}$ and $df = \sum\limits_{i = 1}^{m}\sum\limits_{j = 1}^{n}\frac{\partial f}{\partial x_{ij}}dx_{ij} = tr\left( \left( \frac{\partial f}{\partial\boldsymbol{X}} \right)^{T}d\boldsymbol{X} \right)$.

 

Before working through an example, there is one more indispensable tool to introduce, which is extremely useful when computing scalar-by-matrix/vector derivatives: the trace trick. The small demo below will make clear why it is needed. First, some properties of the trace (all of them useful):

  • The trace of a scalar is the scalar itself: $tr(x) = x$
  • Invariance under transpose: $tr(\boldsymbol{A}^{T}) = tr(\boldsymbol{A})$
  • Cyclic invariance: $tr(\boldsymbol{A}\boldsymbol{B}) = tr(\boldsymbol{B}\boldsymbol{A})$, where $\boldsymbol{A}$ and $\boldsymbol{B}^{T}$ have the same shape (otherwise the dimensions are incompatible). Both sides equal $\sum\limits_{i,j}A_{ij}B_{ji}$
  • Sum and difference: $tr(\boldsymbol{A} \pm \boldsymbol{B}) = tr(\boldsymbol{A}) \pm tr(\boldsymbol{B})$
  • Exchanging matrix multiplication and the Hadamard product inside a trace: $tr\left( (\boldsymbol{A} \odot \boldsymbol{B})^{T}\boldsymbol{C} \right) = tr\left( \boldsymbol{A}^{T}(\boldsymbol{B} \odot \boldsymbol{C}) \right)$, which requires $\boldsymbol{A},\boldsymbol{B},\boldsymbol{C}$ to have the same shape. Both sides equal $\sum\limits_{i,j}A_{ij}B_{ij}C_{ij}$

To summarize the recipe for scalar-by-matrix/vector derivatives: if the scalar function $f$ is built from the matrix $\boldsymbol{X}$ through addition, subtraction, multiplication, inverses, determinants, element-wise functions, and so on, apply the corresponding differential rules to compute $df$, then use the trace trick to wrap $df$ in a trace and move every other factor to the left of $d\boldsymbol{X}$; comparing against the relation $df = tr\left( \left( \frac{\partial f}{\partial\boldsymbol{X}} \right)^{T}d\boldsymbol{X} \right)$ then yields the derivative.

 

In particular, if the matrix degenerates to a vector, comparing against the relation $df = \left( \frac{\partial f}{\partial\boldsymbol{x}} \right)^{T}d\boldsymbol{x}$ yields the derivative.

 

An example:

$y = \boldsymbol{a}^{T}\exp(\boldsymbol{X}\boldsymbol{b})$; find $\frac{\partial y}{\partial\boldsymbol{X}}$.

By trace property 1: $dy = tr(dy) = tr\left( d\left( \boldsymbol{a}^{T}\exp(\boldsymbol{X}\boldsymbol{b}) \right) \right)$

By differential property 2: $tr\left( d\left( \boldsymbol{a}^{T}\exp(\boldsymbol{X}\boldsymbol{b}) \right) \right) = tr\left( (d\boldsymbol{a})^{T}\exp(\boldsymbol{X}\boldsymbol{b}) + \boldsymbol{a}^{T}d\exp(\boldsymbol{X}\boldsymbol{b}) \right)$. Since we are differentiating with respect to $\boldsymbol{X}$ and $\boldsymbol{a}$ is constant, $(d\boldsymbol{a})^{T} = 0$, so $tr\left( d\left( \boldsymbol{a}^{T}\exp(\boldsymbol{X}\boldsymbol{b}) \right) \right) = tr\left( \boldsymbol{a}^{T}d\exp(\boldsymbol{X}\boldsymbol{b}) \right)$

By differential property 6 (the element-wise rule): $tr\left( \boldsymbol{a}^{T}d\exp(\boldsymbol{X}\boldsymbol{b}) \right) = tr\left( \boldsymbol{a}^{T}\left( \exp(\boldsymbol{X}\boldsymbol{b}) \odot d(\boldsymbol{X}\boldsymbol{b}) \right) \right)$

By trace property 5 (and using $d(\boldsymbol{X}\boldsymbol{b}) = (d\boldsymbol{X})\boldsymbol{b}$, since $\boldsymbol{b}$ is constant): $tr\left( \boldsymbol{a}^{T}\left( \exp(\boldsymbol{X}\boldsymbol{b}) \odot d(\boldsymbol{X}\boldsymbol{b}) \right) \right) = tr\left( \left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)^{T}(d\boldsymbol{X})\boldsymbol{b} \right)$

By trace property 3 (cyclic invariance): $tr\left( \left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)^{T}(d\boldsymbol{X})\boldsymbol{b} \right) = tr\left( \boldsymbol{b}\left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)^{T}d\boldsymbol{X} \right)$

Therefore $dy = tr\left( \boldsymbol{b}\left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)^{T}d\boldsymbol{X} \right) = tr\left( \left( \left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)\boldsymbol{b}^{T} \right)^{T}d\boldsymbol{X} \right)$. Comparing with $df = tr\left( \left( \frac{\partial f}{\partial\boldsymbol{X}} \right)^{T}d\boldsymbol{X} \right)$, we obtain $\frac{\partial y}{\partial\boldsymbol{X}} = \left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)\boldsymbol{b}^{T}$

 

Let us verify this result with a simple example: $\boldsymbol{X} = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}$, $\boldsymbol{b} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $\boldsymbol{a} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$. Then $y = \lbrack 2,3 \rbrack\begin{bmatrix} \exp(x_{11} + 2x_{12}) \\ \exp(x_{21} + 2x_{22}) \end{bmatrix} = 2\exp(x_{11} + 2x_{12}) + 3\exp(x_{21} + 2x_{22})$. Computing directly from the definition, we get $\frac{\partial y}{\partial\boldsymbol{X}} = \begin{bmatrix} 2\exp(x_{11} + 2x_{12}) & 4\exp(x_{11} + 2x_{12}) \\ 3\exp(x_{21} + 2x_{22}) & 6\exp(x_{21} + 2x_{22}) \end{bmatrix}$

Meanwhile, $\left( \boldsymbol{a} \odot \exp(\boldsymbol{X}\boldsymbol{b}) \right)\boldsymbol{b}^{T} = \begin{bmatrix} 2\exp(x_{11} + 2x_{12}) \\ 3\exp(x_{21} + 2x_{22}) \end{bmatrix}\lbrack 1,\ 2 \rbrack = \begin{bmatrix} 2\exp(x_{11} + 2x_{12}) & 4\exp(x_{11} + 2x_{12}) \\ 3\exp(x_{21} + 2x_{22}) & 6\exp(x_{21} + 2x_{22}) \end{bmatrix}$

The two agree, so the result checks out.
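For a machine check of the same result (my own sketch; the helper names are just for this demo), we can compare the trace-trick formula against finite differences for a random $\boldsymbol{X}$:

```python
import numpy as np

def y(X, a, b):
    # scalar y = a^T exp(X b), with exp applied element-wise
    return a @ np.exp(X @ b)

def grad_analytic(X, a, b):
    # result of the trace trick: (a elementwise-times exp(Xb)) b^T
    return (a * np.exp(X @ b))[:, None] @ b[None, :]

def grad_numeric(X, a, b, eps=1e-6):
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X); E[i, j] = eps
            G[i, j] = (y(X + E, a, b) - y(X - E, a, b)) / (2 * eps)
    return G

X = np.random.randn(2, 2)
a = np.array([2.0, 3.0])
b = np.array([1.0, 2.0])
print(np.allclose(grad_analytic(X, a, b), grad_numeric(X, a, b), atol=1e-4))  # True
```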

 

Summary of the differential-based recipe:

With matrix differentials we can obtain derivatives without differentiating each element of a vector or matrix separately and then reassembling the results, which is convenient; of course, using the method fluently requires familiarity with the differential properties and trace properties above. There are also situations where the dependent and independent variables are connected through a complicated, multi-layer chain of functions; in those cases the differential method alone becomes cumbersome. If we can reuse a few common basic results together with a chain rule, things become much more convenient. The next sections therefore discuss the chain rules for vector and matrix derivatives.


1.7 Vector differentials and vector-by-vector derivatives

So far we have related the differential of a scalar to scalar-by-matrix/vector derivatives. We now extend this to the relation between the differential of a vector and the derivative of one vector (the output of some layer) with respect to another vector (the output of another layer):


$d\boldsymbol{f} = \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}d\boldsymbol{x}$ (1.4)

Compared with formula (1.2), the derivative on the right-hand side no longer carries a transpose.

Let us first verify formula (1.4) with an example:

$\boldsymbol{f} = \boldsymbol{A}\boldsymbol{x}$, $\boldsymbol{A} = \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix}$; find $\frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}$.

Solution:

Working from the definition, we first compute $\boldsymbol{f} = \begin{bmatrix} x_{1} + 2x_{2} \\ -x_{2} \end{bmatrix}$, and by the vector-by-vector definition in Section 1.2 we get $\frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} = \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix}$

Using formula (1.4) instead: $d\boldsymbol{f} = d(\boldsymbol{A}\boldsymbol{x}) = \boldsymbol{A}d\boldsymbol{x}$, and comparing with $d\boldsymbol{f} = \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}d\boldsymbol{x}$ gives $\frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} = \boldsymbol{A}$, so the formula holds up.
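As one more numerical check of formula (1.4) (my own sketch, on a small nonlinear function chosen only for illustration), the change $d\boldsymbol{f}$ is indeed the Jacobian times $d\boldsymbol{x}$ to first order:

```python
import numpy as np

def f(x):
    # a small nonlinear vector-valued function, chosen only for illustration
    x1, x2 = x
    return np.array([np.sin(x1) + x2, x1 * x2])

def jacobian(x):
    # hand-computed Jacobian: rows follow f, columns follow x
    x1, x2 = x
    return np.array([[np.cos(x1), 1.0],
                     [x2,         x1]])

x = np.random.randn(2)
dx = 1e-6 * np.random.randn(2)            # a small perturbation standing in for dx

lhs = f(x + dx) - f(x)                    # actual change df
rhs = jacobian(x) @ dx                    # (df/dx) dx from formula (1.4)
print(np.allclose(lhs, rhs, atol=1e-10))  # True (to first order)
```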

 

To understand at a more fundamental level why formulas (1.2) and (1.4) differ by a transpose, it helps to compare the following two special cases:

1) $f = \boldsymbol{a}^{T}\boldsymbol{x}$, $\boldsymbol{a} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$; find $\frac{\partial f}{\partial\boldsymbol{x}}$

2) $\boldsymbol{f} = \boldsymbol{a}^{T}\boldsymbol{x}$, $\boldsymbol{a} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$; find $\frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}$

In the first case the left-hand side is a scalar $f = x_{1} + 2x_{2}$, while in the second it is a length-1 "vector" $\boldsymbol{f} = \lbrack x_{1} + 2x_{2} \rbrack$.

Look at the first case:

From the definition we can immediately write $\frac{\partial f}{\partial\boldsymbol{x}} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. To recover the scalar $df$ on the left of formula (1.2), either the column vector $\frac{\partial f}{\partial\boldsymbol{x}}$ or the column vector $d\boldsymbol{x}$ has to be transposed, otherwise the dimensions are incompatible; transposing $\frac{\partial f}{\partial\boldsymbol{x}}$ gives $df = d(x_{1} + 2x_{2}) = \begin{bmatrix} 1 \\ 2 \end{bmatrix}^{T}\begin{bmatrix} dx_{1} \\ dx_{2} \end{bmatrix} = \left( \frac{\partial f}{\partial\boldsymbol{x}} \right)^{T}d\boldsymbol{x}$

 

Now look at the second case:

From the definition we immediately get $\frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} = \lbrack 1,\ 2 \rbrack$, a row vector. Compared with the first case, this is exactly where the difference lies, and it is what leads to the different forms of the two formulas: here $d\boldsymbol{f} = \lbrack dx_{1} + 2dx_{2} \rbrack = \lbrack 1,\ 2 \rbrack d\boldsymbol{x} = \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}d\boldsymbol{x}$ already has compatible dimensions, so no transpose is needed.

 

What can we do with formulas (1.2) and (1.3)? Something very useful: by writing down a total differential and applying a few simple properties of matrix differentials (listed in Section 1.5), we can obtain the derivative of a scalar (the network's loss) with respect to a matrix (a parameter matrix).


1.8 Chain rules for matrix and vector derivatives

At last we arrive at this step.

Chain rules for matrix and vector derivatives often let us obtain results quickly, but they are not entirely the same as the scalar-by-scalar chain rule, so they need to be discussed separately.

1.8.1 The chain rule for vector-by-vector derivatives

First, the chain rule for vector-by-vector derivatives. Suppose several vectors depend on one another, say $\boldsymbol{x}\rightarrow\boldsymbol{y}\rightarrow\boldsymbol{z}$; then the following chain rule holds:

$\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{x}} = \frac{\partial\boldsymbol{z}}{\partial\boldsymbol{y}}\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}}$

Note that the chain rule in this form is valid only for vector-by-vector derivatives.

An example to get a feel for it:

$\boldsymbol{z} = \exp(\boldsymbol{y}),\ \boldsymbol{y} = \boldsymbol{A}\boldsymbol{x},\ \boldsymbol{A} = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}$; find $\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{x}}$

Solution:

From the definition of $\boldsymbol{z}$ we get $\boldsymbol{z} = \begin{bmatrix} \exp(x_{1} + 2x_{2}) \\ \exp(3x_{1}) \end{bmatrix}$, and recalling the vector-by-vector definition from Section 1.2, $\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{x}} = \begin{bmatrix} \exp(x_{1} + 2x_{2}) & 2\exp(x_{1} + 2x_{2}) \\ 3\exp(3x_{1}) & 0 \end{bmatrix}$

 

To use the chain rule instead, we first compute $\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{y}}$: $d\boldsymbol{z} = d\exp(\boldsymbol{y}) = \exp(\boldsymbol{y}) \odot d\boldsymbol{y}$. This is not yet quite in the form $d\boldsymbol{f} = \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}d\boldsymbol{x}$, but note that $\exp(\boldsymbol{y}) \odot d\boldsymbol{y} = diag(\exp(\boldsymbol{y}))d\boldsymbol{y}$ (easy to verify yourself).

Here $diag(\exp(\boldsymbol{y})) = \begin{bmatrix} \exp(y_{1}) & 0 \\ 0 & \exp(y_{2}) \end{bmatrix}$ means placing the elements of the vector $\exp(\boldsymbol{y})$ on the diagonal of a matrix, with zeros elsewhere. Thus $d\boldsymbol{z} = d\exp(\boldsymbol{y}) = \exp(\boldsymbol{y}) \odot d\boldsymbol{y} = diag(\exp(\boldsymbol{y}))d\boldsymbol{y}$, so $\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{y}} = diag(\exp(\boldsymbol{y}))$.

Next we compute $\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}}$: $d\boldsymbol{y} = d(\boldsymbol{A}\boldsymbol{x}) = (d\boldsymbol{A})\boldsymbol{x} + \boldsymbol{A}d\boldsymbol{x} = \boldsymbol{A}d\boldsymbol{x}$ (since $\boldsymbol{A}$ is constant, $d\boldsymbol{A} = 0$), so $\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} = \boldsymbol{A}$.

 

$\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{x}} = \frac{\partial\boldsymbol{z}}{\partial\boldsymbol{y}}\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} = diag(\exp(\boldsymbol{y}))\boldsymbol{A} = \begin{bmatrix} \exp(x_{1} + 2x_{2}) & 0 \\ 0 & \exp(3x_{1}) \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} = \begin{bmatrix} \exp(x_{1} + 2x_{2}) & 2\exp(x_{1} + 2x_{2}) \\ 3\exp(3x_{1}) & 0 \end{bmatrix}$

This matches the result obtained from the definition.
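A numerical check of this chain-rule result (my own sketch): the Jacobian of $\boldsymbol{z} = \exp(\boldsymbol{A}\boldsymbol{x})$ should equal $diag(\exp(\boldsymbol{A}\boldsymbol{x}))\boldsymbol{A}$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

def z(x):
    # z = exp(A x), with exp applied element-wise
    return np.exp(A @ x)

def jac_chain(x):
    # chain-rule result: diag(exp(Ax)) A
    return np.diag(np.exp(A @ x)) @ A

def jac_numeric(x, eps=1e-6):
    cols = []
    for j in range(len(x)):
        e = np.zeros_like(x); e[j] = eps
        cols.append((z(x + e) - z(x - e)) / (2 * eps))
    return np.stack(cols, axis=1)

x = np.random.randn(2)
print(np.allclose(jac_chain(x), jac_numeric(x), atol=1e-4))  # True
```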

 

1.8.2 The chain rule from a scalar through several vectors

The chain rule for a scalar depending on several vectors can be derived from the two useful conclusions obtained above:

  • Conclusion 1: when $f$ is a scalar, if we regard $\boldsymbol{f} = \lbrack f \rbrack$ as a special 1x1 vector, then $\frac{\partial f}{\partial\boldsymbol{x}} = \left( \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} \right)^{T}$
  • Conclusion 2: for vectors $\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}$, $\frac{\partial\boldsymbol{z}}{\partial\boldsymbol{x}} = \frac{\partial\boldsymbol{z}}{\partial\boldsymbol{y}}\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}}$

If $\boldsymbol{x}\rightarrow\boldsymbol{y}\rightarrow f$ (a scalar), then by Conclusion 1, $\frac{\partial f}{\partial\boldsymbol{x}} = \left( \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} \right)^{T}$.

The left-hand side is a scalar-by-vector derivative and the right-hand side is a vector-by-vector derivative, so we can now apply Conclusion 2 (the vector-by-vector chain rule) to the right-hand side: $\frac{\partial f}{\partial\boldsymbol{x}} = \left( \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} \right)^{T} = \left( \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{y}}\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} \right)^{T}$.

 

We are not done yet: the most convenient object to work with is a scalar-by-vector derivative, since the trace trick applies only in that case, so we convert the special vector $\boldsymbol{f}$ back into the scalar $f$, finally obtaining the chain rule for a scalar through several vectors:

$\frac{\partial f}{\partial\boldsymbol{x}} = \left( \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}} \right)^{T} = \left( \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{y}}\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} \right)^{T} = \left( \left( \frac{\partial f}{\partial\boldsymbol{y}} \right)^{T}\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} \right)^{T} = \left( \frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}} \right)^{T}\frac{\partial f}{\partial\boldsymbol{y}}$

Generalizing, if $\boldsymbol{y}_{1}\rightarrow\boldsymbol{y}_{2}\rightarrow\boldsymbol{y}_{3}\rightarrow\ldots\rightarrow\boldsymbol{y}_{n}\rightarrow z$ (a scalar), then:

$\frac{\partial z}{\partial\boldsymbol{y}_{1}} = \left( \frac{\partial\boldsymbol{y}_{n}}{\partial\boldsymbol{y}_{n - 1}}\frac{\partial\boldsymbol{y}_{n - 1}}{\partial\boldsymbol{y}_{n - 2}}\ldots\frac{\partial\boldsymbol{y}_{2}}{\partial\boldsymbol{y}_{1}} \right)^{T}\frac{\partial z}{\partial\boldsymbol{y}_{n}}$
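Here is a small check of this scalar-through-vectors chain rule (my own sketch, on a toy composite function of my own choosing: $\boldsymbol{x}\rightarrow\boldsymbol{y}=\boldsymbol{A}\boldsymbol{x}\rightarrow z=\sum_i \exp(y_i)$, so the rule gives $\frac{\partial z}{\partial\boldsymbol{x}} = \boldsymbol{A}^{T}\exp(\boldsymbol{A}\boldsymbol{x})$):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

def z(x):
    # x -> y = A x -> z = sum(exp(y)), a scalar
    return np.sum(np.exp(A @ x))

def grad_chain(x):
    # (dy/dx)^T dz/dy = A^T exp(Ax), following the chain rule above
    return A.T @ np.exp(A @ x)

def grad_numeric(x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (z(x + e) - z(x - e)) / (2 * eps)
    return g

x = np.random.randn(2)
print(np.allclose(grad_chain(x), grad_numeric(x), atol=1e-4))  # True
```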

 

1.8.3 The chain rule from a scalar through several matrices (proof omitted)

Now consider the chain rule for a scalar depending on several matrices. Suppose the dependency is $\boldsymbol{X}\rightarrow\boldsymbol{Y}\rightarrow z$; then:

$\frac{\partial z}{\partial X_{ij}} = \sum\limits_{k,l}\frac{\partial z}{\partial Y_{kl}}\frac{\partial Y_{kl}}{\partial X_{ij}} = tr\left( \left( \frac{\partial z}{\partial\boldsymbol{Y}} \right)^{T}\frac{\partial\boldsymbol{Y}}{\partial X_{ij}} \right)$

You will notice we have not given a chain rule for whole matrices. The main reason is that matrix-by-matrix derivatives have a rather involved definition, which we have not needed, so we can only state the chain rule for a single scalar entry of the matrix. This form is not very practical, because we do not want to differentiate entry by entry from the definition and then assemble the results every time.

 

For practical purposes there is no need to dig deeper; it suffices to remember a few useful special cases, which are very easy to memorize.

We may call the following conclusions "derivatives of a scalar through a linear map".

To summarize (a numerical check of the matrix case follows this list):

  • If $z = f(\boldsymbol{Y}),\ \boldsymbol{Y} = \boldsymbol{A}\boldsymbol{X} + \boldsymbol{B}$, then $\frac{\partial z}{\partial\boldsymbol{X}} = \boldsymbol{A}^{T}\frac{\partial z}{\partial\boldsymbol{Y}}$
  • The conclusion still holds when $\boldsymbol{X}$ is replaced by a vector $\boldsymbol{x}$: if $z = f(\boldsymbol{y}),\ \boldsymbol{y} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{b}$, then $\frac{\partial z}{\partial\boldsymbol{x}} = \boldsymbol{A}^{T}\frac{\partial z}{\partial\boldsymbol{y}}$
  • In the same setting $z = f(\boldsymbol{y}),\ \boldsymbol{y} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{b}$, the derivative with respect to the matrix is $\frac{\partial z}{\partial\boldsymbol{A}} = \frac{\partial z}{\partial\boldsymbol{y}}\boldsymbol{x}^{T}$
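The following sketch (mine, with a toy $f$ chosen only for illustration, namely $z = \sum_i y_i^{2}$ so that $\frac{\partial z}{\partial\boldsymbol{y}} = 2\boldsymbol{y}$) checks the matrix case $\frac{\partial z}{\partial\boldsymbol{A}} = \frac{\partial z}{\partial\boldsymbol{y}}\boldsymbol{x}^{T}$ against finite differences:

```python
import numpy as np

def z(A, x, b):
    # z = f(y) with y = A x + b and f = sum of squares (chosen only for illustration)
    y = A @ x + b
    return np.sum(y ** 2)

def grad_A_analytic(A, x, b):
    # dz/dy = 2y for this f; the rule above then gives dz/dA = (dz/dy) x^T
    y = A @ x + b
    return (2 * y)[:, None] @ x[None, :]

def grad_A_numeric(A, x, b, eps=1e-6):
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A); E[i, j] = eps
            G[i, j] = (z(A + E, x, b) - z(A - E, x, b)) / (2 * eps)
    return G

A = np.random.randn(3, 2); x = np.random.randn(2); b = np.random.randn(3)
print(np.allclose(grad_A_analytic(A, x, b), grad_A_numeric(A, x, b), atol=1e-5))  # True
```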

1.9 Using matrix differentiation to compute parameter gradients in machine learning

The calculus of neural-network gradients is an important result in the history of the field and has its own name, the backpropagation (BP) algorithm. I believe many people still struggle when deriving BP for the first time; in fact, with matrix differentiation the derivation is not complicated. For simplicity we derive BP for a two-layer neural network; later articles will systematically cover the parameter gradients of FNNs, CNNs, RNNs, and LSTMs.

 

We now apply everything covered above to work out the gradient of a two-layer network's loss with respect to each layer's parameters. Take the classic MNIST handwritten-digit classification task as an example: the network takes the image flattened into a vector $\boldsymbol{x}$ and outputs a probability vector; with the one-hot label $\boldsymbol{y}$ and the cross-entropy loss we obtain:

$l = -\boldsymbol{y}^{T}\log softmax\left( \boldsymbol{W}_{2}\sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) + \boldsymbol{b}_{2} \right)$

Here $\boldsymbol{x}$ is an $n \times 1$ column vector, $\boldsymbol{W}_{1}$ is a $p \times n$ matrix, $\boldsymbol{W}_{2}$ is an $m \times p$ matrix, $\boldsymbol{y}$ is an $m \times 1$ column vector, $l$ is a scalar, and $\sigma$ is the sigmoid function.

 

We compute the gradients layer by layer, from the output back toward the input:

Note that $softmax(\boldsymbol{x}) = \frac{\exp(\boldsymbol{x})}{\boldsymbol{1}^{T}\exp(\boldsymbol{x})}$, where $\exp(\boldsymbol{x})$ is a column vector, $\boldsymbol{1}^{T}$ is a row vector of all ones, and $\boldsymbol{1}^{T}\exp(\boldsymbol{x})$ is a scalar. As a small example, if $\boldsymbol{x} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ then $softmax(\boldsymbol{x}) = \begin{bmatrix} \frac{\exp(1)}{\exp(1) + \exp(2) + \exp(3)} \\ \frac{\exp(2)}{\exp(1) + \exp(2) + \exp(3)} \\ \frac{\exp(3)}{\exp(1) + \exp(2) + \exp(3)} \end{bmatrix}$

Let $\boldsymbol{a} = \boldsymbol{W}_{2}\sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) + \boldsymbol{b}_{2}$; we first compute $\frac{\partial l}{\partial\boldsymbol{a}}$.

$dl = -\boldsymbol{y}^{T}d\left( \log softmax(\boldsymbol{a}) \right) = -\boldsymbol{y}^{T}d\left( \log\left( \frac{\exp(\boldsymbol{a})}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} \right) \right)$

 

Note that the element-wise log satisfies $\log(\boldsymbol{u}/c) = \log(\boldsymbol{u}) - \boldsymbol{1}\log(c)$, where $\boldsymbol{u}$ and $\boldsymbol{1}$ are column vectors of the same shape and $c$ is a scalar.

 

Since $\boldsymbol{1}^{T}\exp(\boldsymbol{a})$ is a scalar, applying that rule gives:

$\log\left( \frac{\exp(\boldsymbol{a})}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} \right) = \log(\exp(\boldsymbol{a})) - \boldsymbol{1}\log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right)$

Hence:

$dl = -\boldsymbol{y}^{T}d\left( \log(\exp(\boldsymbol{a})) - \boldsymbol{1}\log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right) = -\boldsymbol{y}^{T}d\left( \boldsymbol{a} - \boldsymbol{1}\log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right) = -\boldsymbol{y}^{T}d\boldsymbol{a} + d\left( \boldsymbol{y}^{T}\boldsymbol{1}\log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right)$

 

Since the elements of $\boldsymbol{y}$ sum to 1 (it is a one-hot label), $\boldsymbol{y}^{T}\boldsymbol{1} = 1$. We therefore get:

$dl = -\boldsymbol{y}^{T}d\boldsymbol{a} + d\left( \log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right)$

By differential property 6 (the element-wise rule):

$d\left( \log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right) = \log^{'}\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \odot d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right)$

Since $\log^{'}\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right)$ and $d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right)$ are both scalars, we have:

$\log^{'}\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \odot d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) = \frac{d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right)}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})}$

Applying differential property 6 once more: $\frac{d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right)}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} = \frac{\boldsymbol{1}^{T}d(\exp(\boldsymbol{a}))}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} = \frac{\boldsymbol{1}^{T}\left( \exp(\boldsymbol{a}) \odot d\boldsymbol{a} \right)}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})}$

 

Next we use trace property 1:

$d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) = tr\left( d\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right) = tr\left( \boldsymbol{1}^{T}\left( \exp(\boldsymbol{a}) \odot d\boldsymbol{a} \right) \right)$

By trace property 5:

$tr\left( \boldsymbol{1}^{T}\left( \exp(\boldsymbol{a}) \odot d\boldsymbol{a} \right) \right) = tr\left( \left( \boldsymbol{1} \odot \exp(\boldsymbol{a}) \right)^{T}d\boldsymbol{a} \right) = tr\left( \left( \exp(\boldsymbol{a}) \right)^{T}d\boldsymbol{a} \right)$

 

Then, using trace property 1 in reverse:

$tr\left( \left( \exp(\boldsymbol{a}) \right)^{T}d\boldsymbol{a} \right) = \left( \exp(\boldsymbol{a}) \right)^{T}d\boldsymbol{a}$

So:

$dl = -\boldsymbol{y}^{T}d\boldsymbol{a} + d\left( \log\left( \boldsymbol{1}^{T}\exp(\boldsymbol{a}) \right) \right) = -\boldsymbol{y}^{T}d\boldsymbol{a} + \frac{\left( \exp(\boldsymbol{a}) \right)^{T}d\boldsymbol{a}}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} = \left( -\boldsymbol{y}^{T} + \frac{\left( \exp(\boldsymbol{a}) \right)^{T}}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} \right)d\boldsymbol{a}$

 

Comparing with formula (1.2), we get $\frac{\partial l}{\partial\boldsymbol{a}} = \frac{\exp(\boldsymbol{a})}{\boldsymbol{1}^{T}\exp(\boldsymbol{a})} - \boldsymbol{y} = softmax(\boldsymbol{a}) - \boldsymbol{y}$
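A quick numerical confirmation of this softmax-cross-entropy gradient (my own sketch, with illustrative helper names):

```python
import numpy as np

def loss(a, y):
    # cross-entropy with a one-hot label y: l = -y^T log softmax(a)
    p = np.exp(a) / np.sum(np.exp(a))
    return -y @ np.log(p)

def grad_analytic(a, y):
    # the result derived above: softmax(a) - y
    return np.exp(a) / np.sum(np.exp(a)) - y

def grad_numeric(a, y, eps=1e-6):
    g = np.zeros_like(a)
    for i in range(len(a)):
        e = np.zeros_like(a); e[i] = eps
        g[i] = (loss(a + e, y) - loss(a - e, y)) / (2 * eps)
    return g

a = np.random.randn(3)
y = np.array([0.0, 1.0, 0.0])            # a one-hot label
print(np.allclose(grad_analytic(a, y), grad_numeric(a, y), atol=1e-6))  # True
```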

 

Next, we compute $\frac{\partial l}{\partial\boldsymbol{W}_{2}}$:

 

We know that $l = -\boldsymbol{y}^{T}\log softmax(\boldsymbol{a})$ with $\boldsymbol{a} = \boldsymbol{W}_{2}\sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) + \boldsymbol{b}_{2}$.

 

Recalling the third conclusion of Section 1.8.3, this has exactly the form $l = f(\boldsymbol{a}),\ \boldsymbol{a} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{b}$, with $\frac{\partial l}{\partial\boldsymbol{A}}$ to be found.

 

Applying that conclusion directly, we get $\frac{\partial l}{\partial\boldsymbol{W}_{2}} = \frac{\partial l}{\partial\boldsymbol{a}}\left( \sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)^{T}$

 

Since $d\boldsymbol{a} = d\boldsymbol{b}_{2} = \boldsymbol{I}d\boldsymbol{b}_{2}$ (holding everything other than $\boldsymbol{b}_{2}$ fixed),

 

it is clear that $\frac{\partial l}{\partial\boldsymbol{b}_{2}} = \boldsymbol{I}^{T}\frac{\partial l}{\partial\boldsymbol{a}} = \frac{\partial l}{\partial\boldsymbol{a}}$. We have now obtained the gradients of $\boldsymbol{W}_{2}$ and $\boldsymbol{b}_{2}$ in the second layer.

 

Next we want $\frac{\partial l}{\partial\boldsymbol{W}_{1}}$:

 

Let $\boldsymbol{z} = \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1}$; we first compute $\frac{\partial l}{\partial\boldsymbol{z}}$.

 

By differential property 6:

$d\boldsymbol{a} = d\left( \boldsymbol{W}_{2}\sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) + \boldsymbol{b}_{2} \right) = \boldsymbol{W}_{2}d\sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) = \boldsymbol{W}_{2}\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \odot d\boldsymbol{z} \right)$

$\boldsymbol{W}_{2}\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \odot d\boldsymbol{z} \right) = \boldsymbol{W}_{2}\,diag\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)d\boldsymbol{z}$

We therefore get $\frac{\partial\boldsymbol{a}}{\partial\boldsymbol{z}} = \boldsymbol{W}_{2}\,diag\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)$, and hence, by the chain rule of Section 1.8.2, $\frac{\partial l}{\partial\boldsymbol{z}} = \left( \frac{\partial\boldsymbol{a}}{\partial\boldsymbol{z}} \right)^{T}\frac{\partial l}{\partial\boldsymbol{a}} = diag\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)\boldsymbol{W}_{2}^{T}\frac{\partial l}{\partial\boldsymbol{a}}$

 

Now, knowing $\frac{\partial l}{\partial\boldsymbol{z}}$ and $\boldsymbol{z} = \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1}$, we want $\frac{\partial l}{\partial\boldsymbol{W}_{1}}$.

Applying the third conclusion of Section 1.8.3 directly, we get $\frac{\partial l}{\partial\boldsymbol{W}_{1}} = \frac{\partial l}{\partial\boldsymbol{z}}\boldsymbol{x}^{T}$,

and following the same recipe, $\frac{\partial l}{\partial\boldsymbol{b}_{1}} = \left( \frac{\partial\boldsymbol{z}}{\partial\boldsymbol{b}_{1}} \right)^{T}\frac{\partial l}{\partial\boldsymbol{z}} = \frac{\partial l}{\partial\boldsymbol{z}}$

At this point we have all the parameter gradients of the two-layer network, $\frac{\partial l}{\partial\boldsymbol{W}_{2}},\frac{\partial l}{\partial\boldsymbol{b}_{2}},\frac{\partial l}{\partial\boldsymbol{W}_{1}},\frac{\partial l}{\partial\boldsymbol{b}_{1}}$:

$\frac{\partial l}{\partial\boldsymbol{W}_{2}} = \frac{\partial l}{\partial\boldsymbol{a}}\left( \sigma\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)^{T}$

$\frac{\partial l}{\partial\boldsymbol{b}_{2}} = \frac{\partial l}{\partial\boldsymbol{a}}$

$\frac{\partial l}{\partial\boldsymbol{W}_{1}} = diag\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)\boldsymbol{W}_{2}^{T}\frac{\partial l}{\partial\boldsymbol{a}}\boldsymbol{x}^{T}$

$\frac{\partial l}{\partial\boldsymbol{b}_{1}} = diag\left( \sigma^{'}\left( \boldsymbol{W}_{1}\boldsymbol{x} + \boldsymbol{b}_{1} \right) \right)\boldsymbol{W}_{2}^{T}\frac{\partial l}{\partial\boldsymbol{a}}$

where $\frac{\partial l}{\partial\boldsymbol{a}} = softmax(\boldsymbol{a}) - \boldsymbol{y}$.

 

As we can see, to obtain the parameter gradients of the network we really only need the gradient at the output, $\frac{\partial l}{\partial\boldsymbol{a}}$; the gradients of the earlier, hidden-layer parameters are just matrix operations applied to that output-layer gradient.
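To close the loop on "code verification", here is a self-contained NumPy sketch (my own, with made-up layer sizes, random parameters, and hypothetical helper names such as `loss` and `numeric_grad`) that checks the four formulas above against finite differences of the loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 4, 3, 5                       # sizes: x has length n, hidden layer p, output m
x  = rng.standard_normal(n)
y  = np.eye(m)[2]                       # a one-hot label
W1 = rng.standard_normal((p, n)); b1 = rng.standard_normal(p)
W2 = rng.standard_normal((m, p)); b2 = rng.standard_normal(m)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
softmax = lambda t: np.exp(t) / np.sum(np.exp(t))

def loss(W1, b1, W2, b2):
    a = W2 @ sigmoid(W1 @ x + b1) + b2
    return -y @ np.log(softmax(a))

# analytic gradients, written exactly as in the formulas above
z  = W1 @ x + b1
h  = sigmoid(z)
a  = W2 @ h + b2
dl_da = softmax(a) - y
dl_dz = np.diag(h * (1 - h)) @ W2.T @ dl_da   # sigmoid'(z) = sigmoid(z)(1 - sigmoid(z))
grads = {
    "W2": np.outer(dl_da, h),
    "b2": dl_da,
    "W1": np.outer(dl_dz, x),
    "b1": dl_dz,
}

def numeric_grad(name, eps=1e-6):
    # central finite differences of the loss with respect to one parameter array
    params = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
    P = params[name]
    G = np.zeros_like(P)
    for idx in np.ndindex(P.shape):
        old = P[idx]
        P[idx] = old + eps; lp = loss(W1, b1, W2, b2)
        P[idx] = old - eps; lm = loss(W1, b1, W2, b2)
        P[idx] = old
        G[idx] = (lp - lm) / (2 * eps)
    return G

for name, g in grads.items():
    print(name, np.allclose(g, numeric_grad(name), atol=1e-5))  # all True
```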

 

--------- Generalization ---------

The derivation above is for a single sample. In practice we have n samples $(\boldsymbol{x}_{1},\boldsymbol{y}_{1}),(\boldsymbol{x}_{2},\boldsymbol{y}_{2}),\ldots,(\boldsymbol{x}_{n},\boldsymbol{y}_{n})$, so the loss should be

$l = \sum\limits_{i = 1}^{n} -\boldsymbol{y}_{i}^{T}\log softmax\left( \boldsymbol{W}_{2}\sigma\left( \boldsymbol{W}_{1}\boldsymbol{x}_{i} + \boldsymbol{b}_{1} \right) + \boldsymbol{b}_{2} \right)$

Such a loss is still a scalar, however, so the line of reasoning is unchanged: the summation can be moved to the outside, and the gradient of each parameter is simply the sum of the gradients computed for each sample separately.

 

If this article helped you, consider clicking "recommend" so that it can reach more people. Thank you.


 


(Reposting is welcome; please credit the source. Feel free to leave a comment or get in touch: lxwalyw@gmail.com)

Original article: https://www.cnblogs.com/sumwailiu/p/13398121.html