Introduction to Neural Networks, Chapter 5: Implementing Backpropagation for a Multi-Layer Neural Network

    Preface

    Neural networks are a rather special way of solving problems. In this book we start from the very simplest ideas and, step by step, work through the basic algorithms behind neural networks in the most approachable way we can. We avoid intimidating jargon and mathematical concepts as much as possible, and instead practice the algorithms by building Java programs that actually run.

    Follow the WeChat account "逻辑编程" for more information about this book.

    In the previous chapter we discussed the mathematics behind what a neural network can express. In this chapter we implement a neural network together with its training algorithm.

    

    In this chapter we discuss fully connected, multi-layer, feed-forward neural networks.

    

    We treat the input as a layer as well, so our example network has three layers in total. The sizes of the input and output layers are fixed by the problem's inputs and outputs; the size of the middle layer is much more flexible.

    Let us start by writing down the basic structure of our neural network. Modeling each neuron as its own object would not make the code any easier to write, and it would be less efficient. So the whole network is a single class, and the fields that used to hold single values now hold arrays.

public class NeuralNetwork {
    int[] shape;
    int layers;
    double[][][] weights;
    double[][] bias;
    double[][] zs;
    double[][] xs;

    Here:

    - shape is an array holding the number of neurons in each layer;

    - layers is simply the length of the shape array, i.e., the number of layers in the network;

    - the three dimensions of weights[ ][ ][ ] are [layer][neuron][input];

    - the two dimensions of bias[ ][ ] are [layer][neuron];

    - the zs[ ][ ] array stores each neuron's result z = w*x + b;

    - the xs[ ][ ] array stores the input x (for the first layer) and each later layer's output sigmoid(z). A concrete sizing example follows this list.
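    For example (this particular shape is made up purely for illustration and is not used later in the chapter), constructing the class described below as new NeuralNetwork(7, 16, 10) gives the following array sizes:

// Hypothetical shape {7, 16, 10}: 7 inputs, a 16-neuron middle layer, 10 outputs.
NeuralNetwork nn = new NeuralNetwork(7, 16, 10);
// nn.shape      -> {7, 16, 10}
// nn.layers     -> 3
// nn.weights[1] -> 16 x 7 array  (one weight per middle neuron per input)
// nn.weights[2] -> 10 x 16 array (one weight per output neuron per middle neuron)
// nn.bias[1]    -> length 16, nn.bias[2] -> length 10
// nn.weights[0] and nn.bias[0] stay empty: the input layer has no weights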

    Next we need to initialize these fields:

public NeuralNetwork(int... shape) {
    this.shape = shape;
    layers = shape.length;
    weights = new double[layers][][];
    bias = new double[layers][];
    //First layer is input layer, no weight
    weights[0] = new double[0][0];
    bias[0] = new double[0];
    zs = new double[layers][];
    xs = new double[layers][];
    for (int i = 1; i < layers; i++) {
        weights[i] = new double[this.shape[i]][this.shape[i - 1]];
        bias[i] = new double[this.shape[i]];
    }
    fillRandom(weights);
    fillRandom(bias);
}

    Because the first layer is the input layer and does no computation, we set its weight and bias to empty arrays. Finally we initialize w and b with random values. If everything started out uniform, all neurons would be identical and training could hardly make them differ, yet we need each neuron to end up approximating a different function.
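    To see why uniform starting values are a problem, here is a tiny standalone illustration (not part of the book's class; the numbers are arbitrary). Two neurons that start with exactly the same w and b always compute exactly the same output, so on every sample they also receive exactly the same correction dw = x*c and can never grow apart. Random starting values break this symmetry.

// SymmetryDemo.java: two identically initialized neurons stay identical.
public class SymmetryDemo {
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String... args) {
        double[] x = {1.0, 0.0};                    // some input
        double[] w1 = {0.5, 0.5}, w2 = {0.5, 0.5};  // same starting weights
        double b1 = 0.1, b2 = 0.1;                  // same starting bias
        double y1 = sigmoid(w1[0] * x[0] + w1[1] * x[1] + b1);
        double y2 = sigmoid(w2[0] * x[0] + w2[1] * x[1] + b2);
        System.out.println(y1 == y2);               // true: their corrections will be identical too
    }
}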

    Next comes the function that makes the network actually compute something:

double[] f(double[] in) {
    zs[0] = xs[0] = in;
    for (int i = 1; i < layers; i++) {
        zs[i] = add(wx(xs[i - 1], weights[i]), bias[i]);
        xs[i] = sigmoid(zs[i]);
    }
    return xs[layers - 1];
}

double sigmoid(double d) {
    return 1.0 / (1.0 + exp(-d));
}

double[] sigmoid(double[] d) {
    int length = d.length;
    double[] v = new double[length];
    for (int i = 0; i < length; i++) {
        v[i] = sigmoid(d[i]);
    }
    return v;
}

double[] wx(double[] x, double[][] weight) {
    int numberOfNeron = weight.length;
    double[] wx = new double[numberOfNeron];
    for (int i = 0; i < numberOfNeron; i++) {
        wx[i] = dot(weight[i], x); //SUM(w*x)
    }
    return wx;
}

    Just like the single neuron we built earlier, the f function computes sigmoid(w*x + b). The only difference is that every variable is now an array, so we loop over the corresponding entries, and we do this layer by layer, feeding each layer's result into the next. The first layer needs no computation, so the loop starts at index 1. The result of the last layer is the output of the whole network.

    The wx method above computes, for each neuron, the sum of the products of its weight array with its inputs.
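    As a concrete illustration with made-up numbers, take a layer with two neurons and three inputs:

// Made-up numbers: a layer with 2 neurons, each receiving 3 inputs.
double[][] weight = {
    {0.1, 0.2, 0.3},   // weights of neuron 0
    {-0.5, 0.0, 1.0}   // weights of neuron 1
};
double[] x = {1.0, 2.0, 3.0};
// wx(x, weight) then returns one value per neuron:
// wx[0] = 0.1*1.0 + 0.2*2.0 + 0.3*3.0 = 1.4
// wx[1] = -0.5*1.0 + 0.0*2.0 + 1.0*3.0 = 2.5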

    Now let us discuss how to train this network. The approach is similar to what we did for a single neuron, with two differences:

    1. Each layer has several neurons. We simply do the computation for each neuron separately. Every output of the output layer can be compared with the correct answer to obtain its error.

    2. There are several layers. For the last layer the cost is easy: subtract the network's output from the expected result in the training data. But how do we get a cost for the earlier layers? The answer is surprisingly simple. Just as we take the derivative with respect to w, we can take the derivative of the last layer with respect to its input x. We saw earlier how to differentiate y = w*x + b with respect to w and b; in the same way, its derivative with respect to x is w. Multiplying this derivative by the cost (and summing over the neurons of the layer, as the dx method below does) gives the cost of the previous layer. This is the famous backpropagation (BP) algorithm. Before we can propagate backwards we have to call f so that the whole network computes once and we obtain the cost of the last layer; that pass is called feed forward.


void train(double[] in, double[] expect, double rate) {
    double[] y = f(in);
    double[] cost = sub(expect, y);
    double[][][] dw = new double[layers][][];
    double[][] db = new double[layers][];
    dw[0] = new double[0][0];
    db[0] = new double[0];
    for (int i = layers - 1; i > 0; i--) {
        double[] sp = sigmoidPrime(zs[i]);
        cost = mul(cost, sp);
        dw[i] = dw(xs[i - 1], cost);
        db[i] = cost;
        cost = dx(weights[i], cost);
    }

    weights = add(weights, mul(dw, rate));
    bias = add(bias, mul(db, rate));
}
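    Before looking at the two helper methods in detail, here is a quick sanity check of the whole loop. It is a made-up toy example, not from the original post: we train a tiny network on a single input/output pair and watch the output move towards the expected value.

// TrainSketch.java: toy usage of f() and train(). The shape (2, 3, 1), the sample
// and the learning rate 1.0 are arbitrary choices for illustration only.
// It assumes it sits in the same package as the NeuralNetwork class below.
public class TrainSketch {
    public static void main(String... args) {
        NeuralNetwork nn = new NeuralNetwork(2, 3, 1);
        double[] in = {1.0, 0.0};
        double[] expect = {1.0};
        System.out.println("before training: " + nn.f(in)[0]);
        for (int i = 0; i < 1000; i++) {
            nn.train(in, expect, 1.0);
        }
        System.out.println("after training:  " + nn.f(in)[0]); // should now be close to 1.0
    }
}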

    The overall flow of this training function is the same as for a single neuron; please refer to the earlier chapters. Two of the methods it uses deserve a short explanation.

    

double[] dx(double[][] w, double[] c) {
    int numberOfX = w[0].length;
    double[] v = new double[numberOfX];
    for (int i = 0; i < numberOfX; i++) {
        for (int j = 0; j < c.length; j++) {
            v[i] += w[j][i] * c[j];
        }
    }
    return v;
}

    The dx method computes the cost of the neurons in a middle layer: for each input, it sums w times cost over the neurons of the next layer.
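    Again with made-up numbers, suppose the layer we are propagating back from has two neurons with three inputs each:

// Made-up numbers: 2 neurons in the current layer, 3 inputs from the previous layer.
double[][] w = {
    {0.1, 0.2, 0.3},   // weights of neuron 0
    {0.4, 0.5, 0.6}    // weights of neuron 1
};
double[] c = {1.0, -2.0}; // cost of neuron 0 and of neuron 1
// dx(w, c) returns one value per input, summed over the neurons:
// dx[0] = 0.1*1.0 + 0.4*(-2.0) = -0.7
// dx[1] = 0.2*1.0 + 0.5*(-2.0) = -0.8
// dx[2] = 0.3*1.0 + 0.6*(-2.0) = -0.9
// These three values become the cost of the three neurons one layer back.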

double[][] dw(double[] x, double[] c) {
    int numberOfNeuron = c.length;
    int numberOfIn = x.length;
    double[][] dw = new double[numberOfNeuron][numberOfIn];
    for (int neuron = 0; neuron < numberOfNeuron; neuron++) {
        for (int input = 0; input < numberOfIn; input++) {
            dw[neuron][input] = c[neuron] * x[input];
        }
    }
    return dw;
}

    The dw method computes the product x*c for each neuron and each of its inputs:
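    With made-up numbers, for two neurons fed by three inputs:

// Made-up numbers: 2 neurons in this layer, 3 outputs x from the previous layer.
double[] x = {1.0, 0.0, 0.5};  // outputs of the previous layer
double[] c = {0.2, -0.4};      // cost of each neuron in this layer
// dw(x, c) returns one adjustment per (neuron, input) pair:
// dw[0] = { 0.2*1.0,  0.2*0.0,  0.2*0.5} = { 0.2, 0.0,  0.1}
// dw[1] = {-0.4*1.0, -0.4*0.0, -0.4*0.5} = {-0.4, 0.0, -0.2}
// Inputs that were 0 contribute nothing, so their weights are not adjusted.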

    Below is the complete code. The whole class is roughly 200 lines, much of it array arithmetic; with a third-party math library these matrix operations could be dropped. Throughout the text I have tried to avoid terms such as matrix, determinant and transpose, to spare readers the discomfort these concepts can bring. Mathematics ultimately comes from practical applications, and before you have seen the application behind a concept it is often easier to understand the idea without the mathematical vocabulary.

package com.luoxq.ann;

import java.util.Random;

import static java.lang.Math.exp;

public class NeuralNetwork {
int[] shape;
int layers;
double[][][] weights;
double[][] bias;
double[][] zs;
double[][] xs;

public NeuralNetwork(int... shape) {
this.shape = shape;
layers = shape.length;
weights = new double[layers][][];
bias = new double[layers][];
//First layer is input layer, no weight
       weights[0] = new double[0][0];
bias[0] = new double[0];
zs = new double[layers][];
xs = new double[layers][];
for (int i = 1; i < layers; i++) {
weights[i] = new double[this.shape[i]][this.shape[i - 1]];
bias[i] = new double[this.shape[i]];
}
        fillRandom(weights);
fillRandom(bias);
}

Random rand = new Random();

void fillRandom(double[] d) {
for (int i = 0; i < d.length; i++) {
d[i] = rand.nextGaussian();
}
}

void fillRandom(double[][] d) {
for (int i = 0; i < d.length; i++) {
fillRandom(d[i]);
}
}

void fillRandom(double[][][] d) {
for (int i = 0; i < d.length; i++) {
fillRandom(d[i]);
}
}

double[] f(double[] in) {
zs[0] = xs[0] = in;
for (int i = 1; i < layers; i++) {
zs[i] = add(wx(xs[i - 1], weights[i]), bias[i]);
xs[i] = sigmoid(zs[i]);
}
return xs[layers - 1];
}


double sigmoid(double d) {
return 1.0 / (1.0 + exp(-d));
}

double[] sigmoid(double[] d) {
int length = d.length;
double[] v = new double[length];
for (int i = 0; i < length; i++) {
v[i] = sigmoid(d[i]);
}
return v;
}


double[] wx(double[] x, double[][] weight) {
int numberOfNeron = weight.length;
double[] wx = new double[numberOfNeron];
for (int i = 0; i < numberOfNeron; i++) {
wx[i] = dot(weight[i], x);//SUM(w*x)
       }
return wx;
}

void train(double[] in, double[] expect, double rate) {
double[] y = f(in);
double[] cost = sub(expect, y);
double[][][] dw = new double[layers][][];
double[][] db = new double[layers][];
dw[0] = new double[0][0];
db[0] = new double[0];
for (int i = layers - 1; i > 0; i--) {
double[] sp = sigmoidPrime(zs[i]);
cost = mul(cost, sp);
dw[i] = dw(xs[i - 1], cost);
db[i] = cost;
cost = dx(weights[i], cost);
}

weights = add(weights, mul(dw, rate));
bias = add(bias, mul(db, rate));
}


double[] sigmoidPrime(double[] d) {
int length = d.length;
double[] v = new double[length];
for (int i = 0; i < length; i++) {
v[i] = sigmoidPrime(d[i]);
}
return v;
}

double sigmoidPrime(double d) {
return sigmoid(d) * (1 - sigmoid(d));
}

double[] sub(double[] a, double[] b) {
int len = a.length;
double[] v = new double[len];
for (int i = 0; i < len; i++) {
v[i] = a[i] - b[i];
}
return v;
}

//derivative of x is w*c and sum for each x
   double[] dx(double[][] w, double[] c) {
int numberOfX = w[0].length;
double[] v = new double[numberOfX];
for (int i = 0; i < numberOfX; i++) {
for (int j = 0; j < c.length; j++) {
v[i] += w[j][i] * c[j];
}
}
return v;
}

//derivative of w is x*c for each c and each x
   double[][] dw(double[] x, double[] c) {
int numberOfNeuron = c.length;
int numberOfIn = x.length;
double[][] dw = new double[numberOfNeuron][numberOfIn];
for (int neuron = 0; neuron < numberOfNeuron; neuron++) {
for (int input = 0; input < numberOfIn; input++) {
dw[neuron][input] = c[neuron] * x[input];
}
}
return dw;
}

//V[i]*X[i]
   double[] mul(double[] v, double[] x) {
double[] d = new double[v.length];
for (int i = 0; i < v.length; i++) {
d[i] = v[i] * x[i];
}
return d;
}

double[][][] mul(double[][][] a, double b) {
double[][][] v = new double[a.length][][];
for (int i = 0; i < a.length; i++) {
v[i] = mul(a[i], b);
}
return v;
}


double[][] mul(double[][] a, double b) {
double[][] v = new double[a.length][];
for (int i = 0; i < a.length; i++) {
v[i] = mul(a[i], b);
}
return v;
}

double[] mul(double[] a, double b) {
double[] d = new double[a.length];
for (int i = 0; i < a.length; i++) {
d[i] = a[i] * b;
}
return d;
}

double[][][] add(double[][][] a, double[][][] b) {
double[][][] v = new double[a.length][][];
for (int i = 0; i < a.length; i++) {
v[i] = add(a[i], b[i]);
}
return v;
}

double[][] add(double[][] a, double[][] b) {
int length = a.length;
double[][] v = new double[length][];
for (int i = 0; i < length; i++) {
v[i] = add(a[i], b[i]);
}
return v;
}

double[] add(double[] a, double[] b) {
int length = a.length;
double[] v = new double[length];
for (int i = 0; i < length; i++) {
v[i] = a[i] + b[i];
}
return v;
}

double dot(double[] w, double[] x) {
double v = 0;
for (int i = 0; i < w.length; i++) {
v += w[i] * x[i];
}
return v;
}
}

    Let us look at a seven-segment display example to see how gradient descent adjusts the network's input weights during training.

    Suppose we want to convert the seven binary bits of a seven-segment display into ten binary outputs. We read the seven segments from top to bottom and left to right to form an input array of length 7. The output has 10 ports, and port y[i] should output 1 when the display shows the value i, and 0 otherwise.
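    Reading the rows of the x array in the Test7 class below, the "top to bottom, left to right" ordering works out to the following segment-to-index mapping (shown here as a comment block for reference):

// Segment-to-index mapping implied by the x data in Test7:
//
//        _0_
//      1|   |2
//       |_3_|
//      4|   |5
//       |_6_|
//
// The digit 1 lights only the two right-hand segments, so its input row is:
double[] one = {0, 0, 1, 0, 0, 1, 0};   // x[2] and x[5] are 1, everything else 0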

    When the display shows a 1, two segments are lit, i.e., two of the inputs are 1 and the rest are 0. During training, the reference output for port y1 is 1 and every other port's reference is 0. Since w and b start as random values, the ten outputs of the untrained network are essentially arbitrary; say y1 happens to output 0.5, so its error is (1 - 0.5).

    Now we start training, i.e., adjusting w and b. Let us see how gradient descent changes the w values. The conclusion from the earlier chapters is dw = x*c. For y1 there is only one c value, 0.5. Two of the inputs are 1, namely x2 and x5, and the rest are 0, so only dw[2] and dw[5] are non-zero; in other words, w2 and w5 grow in this training step, while every input that is 0 gets dw = 0. After many rounds of training, y1's w2 and w5 should keep increasing. When other digits are shown, y1's reference output is 0, so its cost can be negative, and the weights of the inputs lit for those digits get pushed down, possibly below zero. x2 and x5 are also lit for some other digits, for example when training 4/7/8/9, so they may be pushed down then as well, but every time a 1 is trained they are pushed back up. Meanwhile b is adjusted too, always in the direction that reduces the error. After repeated training, w2 and w5 should at least end up larger than y1's other w values, i.e., the inputs x2 and x5 become more strongly associated with y1. Likewise, every other output neuron strengthens its association with the inputs that matter for it, much like a biological neuron gradually building stronger connections to particular inputs. This is why a neural network can be trained at all.
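    Putting numbers on this, and using the assumed untrained output of 0.5 from above (note that in the actual train method the cost is additionally multiplied by sigmoidPrime(z), which scales the step but does not change which weights move):

// Worked example for output neuron y1 when the digit 1 is shown.
double[] x = {0, 0, 1, 0, 0, 1, 0};   // digit 1: only x[2] and x[5] are lit
double c = 1.0 - 0.5;                 // expected 1, the network produced 0.5
// dw = x * c for each of y1's inputs:
// dw = {0, 0, 0.5, 0, 0, 0.5, 0}
// Only w2 and w5 of y1 are pushed up; the weights of the dark segments stay where they are.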

    Let us now run our neural network code and watch this happen. We write a seven-segment display program that uses our NeuralNetwork class, and we also add a dump method to the class so it can print its w and b values.
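    The dump method is not part of the class listing above, so here is a minimal sketch of what it might look like. The exact formatting is an assumption modeled on the output of the run shown further down, and it needs an extra import java.text.NumberFormat in NeuralNetwork.java.

// Sketch of a dump() method for NeuralNetwork: prints every layer's w and b.
String dump() {
    NumberFormat nf = NumberFormat.getInstance();
    nf.setMaximumFractionDigits(2);
    nf.setMinimumFractionDigits(2);
    StringBuilder sb = new StringBuilder("\n");
    for (int layer = 1; layer < layers; layer++) {
        sb.append("layer_").append(layer).append("{\n");
        for (int n = 0; n < shape[layer]; n++) {
            sb.append("Neuron_").append(n).append("{weights: ");
            for (double w : weights[layer][n]) {
                sb.append(nf.format(w)).append(",");
            }
            sb.append(",bias:").append(bias[layer][n]).append("}\n");
        }
        sb.append("}\n");
    }
    return sb.toString();
}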

    Here is our Test7 class:

package com.luoxq.ann;

import java.text.NumberFormat;

public class Test7 {

static double[][] x = {
{1, 1, 1, 0, 1, 1, 1},
{0, 0, 1, 0, 0, 1, 0},
{1, 0, 1, 1, 1, 0, 1},
{1, 0, 1, 1, 0, 1, 1},
{0, 1, 1, 1, 0, 1, 0},
{1, 1, 0, 1, 0, 1, 1},
{1, 1, 0, 1, 1, 1, 1},
{1, 0, 1, 0, 0, 1, 0},
{1, 1, 1, 1, 1, 1, 1},
{1, 1, 1, 1, 0, 1, 1}
};

static double[][] expect = {
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 1, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 1}
};

public static void main(String... args) {
NeuralNetwork nn = new NeuralNetwork(7, 10);


System.out.println("Testing: ");
double cost = getCost(nn);
System.out.println("Cost before train: " + cost);
System.out.println("Dump: " + nn.dump());
System.out.println("Training...");
int epochs = 1000;
double rate = 10;

for (int epoch = 0; epoch < epochs; epoch++) {
for (int sample = 0; sample < x.length; sample++)
nn.train(x[sample], expect[sample], rate);
}
System.out.println("Testing: ");
cost = getCost(nn);
System.out.println("Cost after train: " + cost);
System.out.println("Dump: " + nn.dump());
}


static double getCost(NeuralNetwork nn) {
double cost = 0;
for (int i = 0; i < x.length; i++) {
double[] y = nn.f(x[i]);
System.out.println("output of " + i + ": " + toString(y));
double[] exp = expect[i];
cost += getCost(y, exp);
}
return cost;
}

static double getCost(double[] y, double[] exp) {
double cost = 0;
for (int j = 0; j < y.length; j++) {
double diff = Math.abs(y[j] - exp[j]);
cost += diff;
}
return cost;
}

static String toString(double[] d) {
NumberFormat nf = NumberFormat.getInstance();
nf.setMaximumFractionDigits(2);
nf.setMinimumFractionDigits(2);
StringBuilder sb = new StringBuilder();
for (double dd : d) {
sb.append(nf.format(dd)).append(",");
}
return sb.toString();
}

}

    The output of a run looks like this:

Testing: 

output of 0: 0.03,0.15,0.84,0.27,0.97,0.84,0.06,0.82,0.33,0.88,

output of 1: 0.42,0.50,0.34,0.63,0.94,0.19,0.13,0.97,0.48,0.39,

output of 2: 0.51,0.69,0.88,0.09,0.97,0.68,0.39,0.82,0.38,0.92,

output of 3: 0.20,0.48,0.68,0.77,0.85,0.27,0.66,0.93,0.35,0.70,

output of 4: 0.16,0.73,0.53,0.83,0.80,0.23,0.08,0.92,0.11,0.84,

output of 5: 0.02,0.15,0.86,0.85,0.60,0.61,0.32,0.69,0.13,0.97,

output of 6: 0.05,0.29,0.94,0.57,0.94,0.87,0.23,0.59,0.10,0.99,

output of 7: 0.14,0.34,0.40,0.44,0.92,0.09,0.41,0.93,0.34,0.35,

output of 8: 0.08,0.51,0.88,0.48,0.96,0.77,0.18,0.78,0.14,0.95,

output of 9: 0.03,0.31,0.76,0.80,0.68,0.43,0.25,0.85,0.18,0.88,

Cost before train: 51.194078898042235

Dump: 

layer_1{

Neuron_0{weights: -1.47,-2.26,0.53,0.94,1.20,-0.22,-0.51,,bias:-0.6408124820167941}

Neuron_1{weights: -0.67,-0.75,0.93,1.74,0.85,-0.03,-1.15,,bias:-0.8842719020941838}

Neuron_2{weights: 0.28,0.42,-0.64,0.37,0.88,-0.34,0.76,,bias:0.31442792363574007}

Neuron_3{weights: -0.76,0.17,-0.37,0.91,-1.43,2.03,0.51,,bias:-1.1351774514157895}

Neuron_4{weights: -0.36,-0.98,0.34,-0.43,2.33,0.76,-0.27,,bias:1.711100711025143}

Neuron_5{weights: -0.82,0.73,-0.73,-0.44,1.47,-0.31,1.73,,bias:-0.4357646167318964}

Neuron_6{weights: 1.58,-1.74,-0.32,1.28,-0.45,0.64,-0.26,,bias:-2.265955204876941}

Neuron_7{weights: -0.86,-0.83,0.92,-0.21,-0.43,0.59,0.15,,bias:1.9655563173752257}

Neuron_8{weights: -0.56,-0.89,0.37,-1.10,-0.27,-0.42,1.12,,bias:-0.03699667305814675}

Neuron_9{weights: -0.16,1.19,-1.49,0.89,0.89,-0.66,0.55,,bias:1.6994223062815141}

}

Training...

Testing: 

output of 0: 0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.02,0.00,

output of 1: 0.00,0.99,0.00,0.01,0.01,0.00,0.00,0.01,0.00,0.00,

output of 2: 0.00,0.00,0.99,0.01,0.00,0.00,0.00,0.00,0.02,0.00,

output of 3: 0.00,0.00,0.00,0.99,0.00,0.00,0.00,0.01,0.00,0.01,

output of 4: 0.00,0.01,0.00,0.00,0.99,0.00,0.00,0.00,0.02,0.01,

output of 5: 0.00,0.00,0.00,0.01,0.00,0.99,0.01,0.00,0.00,0.01,

output of 6: 0.00,0.00,0.00,0.00,0.00,0.01,0.99,0.00,0.02,0.00,

output of 7: 0.00,0.01,0.00,0.01,0.00,0.00,0.00,0.99,0.00,0.00,

output of 8: 0.00,0.00,0.01,0.00,0.00,0.00,0.00,0.00,0.97,0.00,

output of 9: 0.00,0.00,0.00,0.01,0.00,0.01,0.00,0.00,0.00,0.98,

Cost after train: 0.4031423588581169

Dump: 

layer_1{

Neuron_0{weights: -2.03,-2.29,-1.44,0.38,0.66,-1.65,-1.06,,bias:-2.604111172261378}

Neuron_1{weights: -8.93,-4.98,2.19,-4.30,0.12,1.76,-3.36,,bias:0.3819736542834827}

Neuron_2{weights: -0.24,-3.06,0.33,1.09,3.72,-7.14,0.25,,bias:-0.2331994457380536}

Neuron_3{weights: -0.22,-9.42,-0.22,3.63,-7.17,1.86,5.71,,bias:-6.500570025353018}

Neuron_4{weights: -5.42,4.92,-0.58,4.43,-0.16,-1.89,-5.23,,bias:-2.255547223858806}

Neuron_5{weights: -0.52,1.54,-8.94,1.27,-8.94,-0.15,2.45,,bias:-0.30233681526958145}

Neuron_6{weights: 0.40,-1.18,-11.00,0.61,9.12,0.17,-0.97,,bias:-3.605913007762964}

Neuron_7{weights: 8.87,-3.42,-1.88,-4.26,-2.53,-1.71,-5.16,,bias:-0.8965156667897005}

Neuron_8{weights: -3.51,4.48,7.53,7.61,12.85,3.15,-1.70,,bias:-26.873685962813397}

Neuron_9{weights: 3.77,8.24,8.17,-1.33,-10.65,-9.58,4.56,,bias:-9.942385633642232}

}

    In the run above we can indeed see that, after training, w2 and w5 of Neuron_1 carry the largest values among its weights.

Original article: https://www.cnblogs.com/javadaddy/p/6746544.html