Study Progress Notes 11

TensorFlow Logistic Regression

Logistic regression can be viewed as a feed-forward neural network with only a single layer, in which the connection weights for one output unit form a single vector rather than a full matrix. The formula is y_predict = logistic(X*W + b), where X is the input, W is the weight connecting the input to the output layer, b is the bias of the output neuron, and logistic is the activation function, usually sigmoid or tanh; y_predict is the final prediction.
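To make the formula concrete, here is a minimal NumPy sketch of that forward pass (the shapes and random values are illustrative only, not from the original post):

import numpy as np

def sigmoid(z):
    # the logistic activation named in the formula above
    return 1.0 / (1.0 + np.exp(-z))

X = np.random.rand(3, 5)   # toy input: 3 samples, 5 features
W = np.random.rand(5, 1)   # weights: a single column vector, not a full matrix
b = np.zeros(1)            # bias of the output neuron

y_predict = sigmoid(X @ W + b)   # y_predict = logistic(X*W + b)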

Logistic regression is a classifier model whose parameters must be optimized iteratively against an objective function. Here the objective is the cross-entropy between y_predict and the true one-hot label Y, i.e. the batch mean of -sum_k Y_k*log(y_predict_k), and the stochastic gradient descent algorithm is used to update the weights and the bias.
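For intuition, here is a minimal NumPy sketch of one such SGD step (the batch size, seed, and values are invented for illustration; only the 784-feature / 10-class shapes mirror the MNIST script below):

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract the row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((4, 784)).astype(np.float32)               # toy batch of 4 samples
Y = np.eye(10, dtype=np.float32)[rng.integers(0, 10, 4)]  # one-hot labels

W = np.zeros((784, 10), dtype=np.float32)
b = np.zeros(10, dtype=np.float32)
lr = 0.01

pred = softmax(X @ W + b)                            # forward pass
loss = -np.mean(np.sum(Y * np.log(pred), axis=1))    # cross-entropy objective

grad_logits = (pred - Y) / len(X)   # gradient of softmax + cross-entropy w.r.t. the logits
W -= lr * (X.T @ grad_logits)       # SGD update for the weights
b -= lr * grad_logits.sum(axis=0)   # SGD update for the bias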

Source code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"

mnist=input_data.read_data_sets("/home/yxcx/tf_data",one_hot=True)

#Parameters
learning_rate=0.01
training_epochs=25
batch_size=100
display_step=1

#tf Graph Input
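#None leaves the batch size flexible; 784=28*28 flattened MNIST pixels, 10=one-hot digit classes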
x=tf.placeholder(tf.float32,[None,784])
y=tf.placeholder(tf.float32,[None,10])

#Set model weights
W=tf.Variable(tf.zeros([784,10]))
b=tf.Variable(tf.zeros([10]))

#Construct model
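#softmax generalizes the logistic/sigmoid activation above to 10 mutually exclusive classes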
pred=tf.nn.softmax(tf.matmul(x,W)+b)

#Minimize error using cross entropy
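#cost = batch mean of -sum_k y_k*log(pred_k), matching the loss formula in the text above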
cost=tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred),axis=1))

#Gradient Descent
optimizer=tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

#Initialize the variables
init=tf.global_variables_initializer()

#Start training
with tf.Session() as sess:
    sess.run(init)

    #Training cycle
    for epoch in range(training_epochs):
        avg_cost=0
        total_batch=int(mnist.train.num_examples/batch_size)
        # loop over all batches
        for i in range(total_batch):
            batch_xs,batch_ys=mnist.train.next_batch(batch_size)
            #Fit the model using this batch of data
            _,c=sess.run([optimizer,cost],feed_dict={x:batch_xs,y:batch_ys})

            #Compute average loss
            avg_cost += c / total_batch
        if (epoch+1) % display_step==0:
            print("Epoch:",'%04d' % (epoch+1),"Cost:" ,"{:.09f}".format(avg_cost))

    print("Optimization Finished!")

    #Test model
    correct_prediction=tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
    # Calculate accuracy (evaluated below on the first 3000 test examples)
    accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    print("Accuracy:",accuracy.eval({x:mnist.test.images[:3000],y:mnist.test.labels[:3000]}))

Result screenshot: (omitted)

Original post: https://www.cnblogs.com/songxinai/p/14254387.html