## How TensorFlow Implements Backpropagation


TensorFlow back-propagates errors through the computational graph by updating variables so as to minimize a loss function. This is done by declaring an optimization function (optimizer): once the optimizer has been declared, TensorFlow uses it to work out the backpropagation terms for every operation in the graph. As we feed in data and minimize the loss, TensorFlow adjusts the variables in the graph accordingly.
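
As a minimal, self-contained sketch of this pattern (using the same TensorFlow 1.x API as the listing below; the toy variable a and the target value 5.0 are purely illustrative assumptions):

import tensorflow as tf

# Toy graph: one trainable parameter and a squared-error loss
a = tf.Variable(0.)
target = tf.constant(5.)
loss = tf.square(a - target)
# Declaring the optimizer is the only "backpropagation" code we write;
# minimize() adds the gradient and update operations to the graph for us
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_step = my_opt.minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_step)  # each run performs one gradient-descent update
    print(sess.run(a))  # a has moved close to the target value 5.0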

# Backpropagation
#----------------------------------
#
# The Python code below demonstrates backpropagation for a regression model and a classification model
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()
# Create a graph session
sess = tf.Session()
# Regression example:
# We will create sample data as follows:
# x-data: 100 random samples from a normal ~ N(1, 0.1)
# target: 100 values of the value 10.
# We will fit the model:
# x-data * A = target
# Theoretically, A = 10.
# Generate the data, then create the placeholders and the variable A
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)
# Create variable (one model parameter = A)
A = tf.Variable(tf.random_normal(shape=[1]))
# Add the multiplication operation to the graph
my_output = tf.multiply(x_data, A)
# Add the L2 loss function
loss = tf.square(my_output - y_target)
# Variables must be initialized before running the optimizer
init = tf.global_variables_initializer()
sess.run(init)
# Declare the optimizer that will update the variable
# (my_opt was missing from the original listing; a learning rate of 0.02 is an assumed, typical value)
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_step = my_opt.minimize(loss)
# Train the model
for i in range(100):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1) % 25 == 0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))
# Classification example
# We will create sample data as follows:
# x-data: sample 50 random values from a normal = N(-1, 1)
#         + sample 50 random values from a normal = N(3, 1)
# target: 50 values of 0 + 50 values of 1.
# These are essentially 100 values of the corresponding output index
# We will fit the binary classification model:
# If sigmoid(x + A) < 0.5 -> 0 else 1
# Theoretically, A should be -(mean1 + mean2)/2 = -(-1 + 3)/2 = -1
# Reset the computational graph
ops.reset_default_graph()
# Create a new graph session
sess = tf.Session()
# Generate the data
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(3, 1, 50)))
y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)
# Bias variable A (one model parameter = A)
A = tf.Variable(tf.random_normal(mean=10, shape=[1]))
# Add the model operation to the graph
# We want to create the operation sigmoid(x + A)
# Note, the sigmoid() part is in the loss function
# (my_output was missing from the original listing; tf.add(x_data, A) matches the
#  sigmoid(x + A) model described above)
my_output = tf.add(x_data, A)
# The loss function below expects batched data, so add a batch dimension
# with expand_dims()
my_output_expanded = tf.expand_dims(my_output, 0)
y_target_expanded = tf.expand_dims(y_target, 0)
# Initialize the variable A
init = tf.global_variables_initializer()
sess.run(init)
# Declare the loss function: sigmoid cross entropy
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output_expanded, labels=y_target_expanded)
# Add an optimizer so TensorFlow knows how to update the bias variable
# (my_opt was also missing; a learning rate of 0.05 is an assumed, typical value)
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.05)
train_step = my_opt.minimize(xentropy)
# Run the training loop
for i in range(1400):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1) % 200 == 0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))
# Evaluate the predictions
predictions = []
for i in range(len(x_vals)):
    x_val = [x_vals[i]]
    prediction = sess.run(tf.round(tf.sigmoid(my_output)), feed_dict={x_data: x_val})
    predictions.append(prediction[0])
accuracy = sum(x == y for x, y in zip(predictions, y_vals)) / 100.
print('Final accuracy = ' + str(np.round(accuracy, 2)))

Output from a sample run (exact values vary because the data is random):

Step #25 A = [ 6.12853956]
Loss = [ 16.45088196]
Step #50 A = [ 8.55680943]
Loss = [ 2.18415046]
Step #75 A = [ 9.50547695]
Loss = [ 5.29813051]
Step #100 A = [ 9.89214897]
Loss = [ 0.34628963]
Step #200 A = [ 3.84576249]
Loss = [[ 0.00083012]]
Step #400 A = [ 0.42345378]
Loss = [[ 0.01165466]]
Step #600 A = [-0.35141727]
Loss = [[ 0.05375391]]
Step #800 A = [-0.74206048]
Loss = [[ 0.05468176]]
Step #1000 A = [-0.89036471]
Loss = [[ 0.19636908]]
Step #1200 A = [-0.90850282]
Loss = [[ 0.00608062]]
Step #1400 A = [-1.09374011]
Loss = [[ 0.11037558]]

