114. TensorFlow Device Placement

import tensorflow as tf

# Creates a variable named "v" and places it on the second GPU device.
# (Left commented out; it only works on a machine with at least two GPUs.)
# with tf.device("/device:GPU:1"):
#     v = tf.get_variable("v", [1])
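
# To check where ops actually end up, the session can log device placement.
# A minimal sketch of that (my addition, assuming a TF 1.x runtime; the name
# v_gpu is illustrative, and allow_soft_placement lets the runtime fall back
# to another device if GPU:1 does not exist):
with tf.device("/device:GPU:1"):
    v_gpu = tf.get_variable("v_gpu", [1])

config = tf.ConfigProto(log_device_placement=True,  # print chosen devices
                        allow_soft_placement=True)  # fall back if no GPU:1
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(v_gpu.device)  # /device:GPU:1 when a second GPU is available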

# It is particularly important for variables to be placed on the correct
# device in distributed settings. Accidentally putting variables on workers
# instead of parameter servers can severely slow down training or, in the
# worst case, let each worker blithely forge ahead with its own independent
# copy of each variable. For this reason TensorFlow provides
# tf.train.replica_device_setter, which can automatically place variables
# on parameter servers.
cluster_spec = {"ps": ["ps0:2222", "ps1:2222"],
                "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
    # This variable is placed on a parameter server by the replica_device_setter.
    v = tf.get_variable("v", shape=[20, 20])
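
# By default the replica_device_setter assigns successive variables to the
# ps tasks in round-robin order, while non-variable ops stay on the worker
# device. A minimal sketch of that behavior (my addition, assuming TF 1.x;
# the names v1..v3 and s are illustrative, not from the original post):
with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
    v1 = tf.get_variable("v1", shape=[20, 20])  # -> /job:ps/task:0
    v2 = tf.get_variable("v2", shape=[20, 20])  # -> /job:ps/task:1
    v3 = tf.get_variable("v3", shape=[20, 20])  # -> /job:ps/task:0 (wraps around)
    s = v1 + v2  # non-variable ops default to the worker device

print(v1.device)  # expected: /job:ps/task:0
print(v2.device)  # expected: /job:ps/task:1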
Original post: https://www.cnblogs.com/weizhen/p/8451471.html