Asynchronous computation in TensorFlow



Recently I have been playing around with TensorFlow, and I noticed that the framework cannot use all of my available computational resources. In the Convolutional Neural Networks tutorial they mention:

Naively employing asynchronous updates of model parameters leads to sub-optimal training performance because an individual model replica might be trained on a stale copy of the model parameters. Conversely, employing fully synchronous updates will be as slow as the slowest model replica.

Although they mention this in both the tutorial and the whitepaper, I have not really found a way to do asynchronous parallel computation on a local machine. Is it even possible? Or is it only part of a to-be-released version of TensorFlow? If so, how?


Tags: tensorflow
1 Answer
User
#1 · Posted 2024-06-16 09:39:37

The open-source release of TensorFlow supports asynchronous gradient descent without even modifying your graph. The easiest way to do this is to run multiple concurrent steps in parallel:

import threading

import tensorflow as tf

NUM_CONCURRENT_STEPS = 4  # e.g. run 4 parallel training threads; tune for your machine

loss = ...

# Any of the optimizer classes can be used here.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

def train_function():
  # TODO: Better termination condition, e.g. using a `max_steps` counter.
  while True:
    sess.run(train_op)

# Create multiple threads to run `train_function()` in parallel
train_threads = []
for _ in range(NUM_CONCURRENT_STEPS):
  train_threads.append(threading.Thread(target=train_function))

# Start the threads, and block on their completion.
for t in train_threads:
  t.start()
for t in train_threads:
  t.join()

This example sets up NUM_CONCURRENT_STEPS concurrent calls to sess.run(train_op). Since there is no coordination between these threads, they proceed asynchronously.
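As a sketch of the termination TODO in the code above (the MAX_STEPS constant here is hypothetical and not part of the original answer), each thread could run a bounded number of steps instead of looping forever:

MAX_STEPS = 10000  # hypothetical per-thread step budget

def train_function():
  # Run a fixed number of steps per thread instead of `while True`.
  for _ in range(MAX_STEPS):
    sess.run(train_op)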

Synchronous parallel training is actually (currently) more challenging to implement, because it requires additional coordination to ensure that all replicas read the same version of the parameters, and that all of their updates become visible at the same time. The multi-GPU example for CIFAR-10 training performs synchronous updates by creating multiple copies of the "tower" in the training graph with shared parameters, and explicitly averaging the gradients across the towers before applying the update.
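For illustration, here is a minimal sketch of that synchronous pattern (this is not the actual CIFAR-10 code; the per-tower loss is left as a placeholder and NUM_GPUS is an assumption):

import tensorflow as tf

NUM_GPUS = 2  # assumed number of towers/GPUs

opt = tf.train.GradientDescentOptimizer(0.01)
tower_grads = []

for i in range(NUM_GPUS):
  with tf.device("/gpu:%d" % i):
    tower_loss = ...  # placeholder: build tower `i` with shared parameters
    # `compute_gradients()` returns a list of (gradient, variable) pairs.
    tower_grads.append(opt.compute_gradients(tower_loss))

# Average each variable's gradient across the towers. This assumes every tower
# lists its (gradient, variable) pairs in the same order, which holds when the
# towers share the same parameters.
average_grads = []
for grad_and_vars in zip(*tower_grads):
  grads = [g for g, _ in grad_and_vars]
  var = grad_and_vars[0][1]
  average_grads.append((tf.add_n(grads) / len(grads), var))

# Apply the averaged gradients in a single op, so all of the towers' updates
# become visible at the same time.
train_op = opt.apply_gradients(average_grads)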


N.B. The code in this answer places all computation on the same device, which will not be optimal if you have multiple GPUs in your machine. If you want to use all of your GPUs, follow the example of the multi-GPU CIFAR-10 model and create multiple "towers" with their operations pinned to each GPU. The code would look roughly as follows:

NUM_GPUS = 2  # e.g. the number of GPUs available in your machine

train_ops = []

for i in range(NUM_GPUS):
  with tf.device("/gpu:%d" % i):
    # Define a tower on GPU `i`.
    loss = ...

    train_ops.append(tf.train.GradientDescentOptimizer(0.01).minimize(loss))

def train_function(train_op):
  # TODO: Better termination condition, e.g. using a `max_steps` counter.
  while True:
    sess.run(train_op)


# Create multiple threads to run `train_function()` in parallel
train_threads = []
for train_op in train_ops:
  train_threads.append(threading.Thread(target=train_function, args=(train_op,)))


# Start the threads, and block on their completion.
for t in train_threads:
  t.start()
for t in train_threads:
  t.join()

Note that you may find it convenient to use a "variable scope" to facilitate variable sharing between the towers, as in the sketch below.
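For example (a rough sketch, not taken from the CIFAR-10 code; build_tower() is a hypothetical helper that constructs one tower and returns its loss):

train_ops = []

with tf.variable_scope("model"):
  for i in range(NUM_GPUS):
    with tf.device("/gpu:%d" % i):
      loss = build_tower()  # hypothetical helper: builds one tower, returns its loss
      train_ops.append(tf.train.GradientDescentOptimizer(0.01).minimize(loss))
      # Towers after the first reuse the variables created by the first tower.
      tf.get_variable_scope().reuse_variables()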
