cs20_1-1

1. Basic characteristics

1.1 Save computation (lazy evaluation)

import tensorflow as tf

x = 2
y = 3
add_op = tf.add(x, y)
mul_op = tf.multiply(x, y)
useless = tf.multiply(x, add_op)
pow_op = tf.pow(add_op, mul_op)

with tf.Session() as sess:
    # Only the subgraph that pow_op depends on is executed.
    z = sess.run(pow_op)

As shown above, sess.run(pow_op) does not depend on useless, so the useless op is never executed (this is the "save computation" benefit).
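
By contrast, a minimal sketch reusing the ops defined above: when sess.run is given a list of fetches, every op in the list is executed, so useless does run in this case.

with tf.Session() as sess:
    # Fetching a list executes every listed op, including useless.
    z, not_useless = sess.run([pow_op, useless])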

1.2 Distributed Computation

  1. A large graph can be broken into subgraphs that run in parallel across multiple GPUs, e.g.

    Framework example (sketch below):
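
    A minimal TF 1.x sketch of splitting one graph across devices with tf.device. The device strings '/gpu:0' and '/gpu:1' are assumptions for a two-GPU machine; allow_soft_placement lets TensorFlow fall back to whatever device is actually available.

    import tensorflow as tf

    # Place the input constants on one GPU and the multiply on another.
    with tf.device('/gpu:0'):
        a = tf.constant([1.0, 2.0, 3.0], name='a')
        b = tf.constant([4.0, 5.0, 6.0], name='b')
    with tf.device('/gpu:1'):
        c = tf.multiply(a, b)

    # log_device_placement prints which device executes each op.
    config = tf.ConfigProto(log_device_placement=True,
                            allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        print(sess.run(c))  # [ 4. 10. 18.]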

  2. Using multiple graphs causes many problems (each graph needs its own session, and data cannot move between graphs without going through Python). If you really need that effect, it's better to have disconnected subgraphs within one graph, as sketched below.
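
    A minimal sketch of the preferred pattern: two subgraphs that share no edges live in the same default graph, and a single Session can evaluate either one.

    import tensorflow as tf

    # Subgraph 1
    a = tf.constant(2)
    b = tf.constant(3)
    sum_op = tf.add(a, b)

    # Subgraph 2 (shares no nodes with subgraph 1)
    x = tf.constant(4)
    y = tf.constant(5)
    prod_op = tf.multiply(x, y)

    # One session serves both disconnected subgraphs.
    with tf.Session() as sess:
        print(sess.run(sum_op))   # 5
        print(sess.run(prod_op))  # 20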

  3. Creating a Graph

    Code example (sketch below):
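
    A minimal TF 1.x sketch: create a tf.Graph explicitly, register ops in it via as_default(), and hand that graph to the Session.

    import tensorflow as tf

    g = tf.Graph()
    with g.as_default():
        # Ops created inside this block belong to g, not the default graph.
        x = tf.add(3, 5)

    with tf.Session(graph=g) as sess:
        print(sess.run(x))  # 8

    # The graph TensorFlow uses when none is specified:
    default_g = tf.get_default_graph()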

  4. Why Graph:

    1. Save computation. Only run subgraphs that lead to the values you want to fetch.
    2. Break computation into small, differentiable pieces to facilitate auto-differentiation
    3. Facilitate distributed computation, spread the work across multiple CPUs, GPUs, TPUs, or other devices
    4. Many common machine learning models are taught and visualized as directed graphs
Original post: https://www.cnblogs.com/LS1314/p/10366146.html