TensorFlow hangs at compute_gradients

Posted on 2024-04-24 06:33:17


I have run into a problem where my TensorFlow run gets stuck at compute_gradients. I initialize my model and then set up the loss function as follows. Note that at this point I have not started training yet, so the problem is not my data:

# The model for training
given_model = GivenModel(images_input=images_t)

print("Done setting up the model")

with tf.device('/gpu:0'):
    with tf.variable_scope('prediction_loss'):
        logits = given_model.prediction

        softmax_loss_per_sample = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels))


        total_training_loss = softmax_loss_per_sample

        optimizer = tf.train.AdamOptimizer()
        gradients, variables = zip(*optimizer.compute_gradients(total_training_loss))
        gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
        optimize = optimizer.apply_gradients(zip(gradients, variables))


    with tf.control_dependencies([optimize]):
        train_op = tf.constant(0)

This code just hangs and does nothing. When I exit with ctrl+c (no matter how long it has been running), it is always sitting in compute_gradients.

Does anyone know why this might happen? I am not doing this inside a loop, and my model is not that big. It also seems to be doing the work on the CPU (no memory has been allocated on the GPU yet), and despite the with tf.device('/gpu:0'): block I cannot force it to use the GPU.
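One way to check where the ops are actually placed (a minimal TF 1.x sketch, not part of the code above) is to create the session with device placement logging enabled, so every op reports which device it was assigned to, and with soft placement allowed so ops without a GPU kernel fall back to the CPU instead of failing:

# Session options used only to diagnose device placement (TF 1.x).
config = tf.ConfigProto(log_device_placement=True,   # print the device chosen for each op
                        allow_soft_placement=True)    # allow CPU fallback for ops with no GPU kernel
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())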

Thanks

Here is what gets printed when I press ctrl+c: the traceback shows it stuck in compute_gradients.

3 Answers

My problem was that the model was too large. Making it smaller solved it.

If you have not started training at this point, maybe it is related to the graph construction. Are you sure GivenModel is correct? I combined this autoencoder example with your optimizer definition as shown below, and I found no problem running this code:

from __future__ import division, print_function, absolute_import

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Training Parameters
learning_rate = 0.01
num_steps = 10
batch_size = 8

# Network Parameters
num_hidden_1 = 256 # 1st layer num features

num_hidden_2 = 128 # 2nd layer num features (the latent dim)
num_input = 784 # MNIST data input (img shape: 28*28)

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, num_input])

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([num_input, num_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([num_hidden_2, num_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([num_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([num_input])),
}

# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2


# Building the decoder
def decoder(x):
    # Decoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2

# Construct model
encoder_op = encoder(X)
decoder_op = decoder(encoder_op)

# Prediction
y_pred = decoder_op
# Targets (Labels) are the input data.
y_true = X

# Define loss and optimizer, minimize the squared error
### your code with a reconstruction loss
with tf.device('/gpu:0'):
    with tf.variable_scope('prediction_loss'):

        loss = tf.reduce_mean(tf.pow(y_true - y_pred, 2))

        optimizer = tf.train.AdamOptimizer()
        gradients, variables = zip(*optimizer.compute_gradients(loss))
        gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
        optimize = optimizer.apply_gradients(zip(gradients, variables))

    with tf.control_dependencies([optimize]):
        train_op = tf.constant(0)
### end of your code

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start Training
# Start a new TF session
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # Training
    for i in range(1, num_steps+1):
        # Prepare Data
        # Get the next batch of MNIST data (only images are needed, not labels)
        batch_x, _ = mnist.train.next_batch(batch_size)

        # Run optimization op (backprop) and cost op (to get loss value)
        _, l = sess.run([train_op, loss], feed_dict={X: batch_x})
        # Display logs per step
        print('Step %i: Minibatch Loss: %f' % (i, l))

So I think the problem may be in some other part of the model, but to be sure we would need more details about it.

Now, about whether the model sits on the CPU or the GPU: if you have not pinned anything to the CPU, a GPU device is chosen for you automatically, so in theory the model should end up on the GPU on its own. But, again, there may be a problem in the graph construction, so it never reaches the point where the model is actually allocated in GPU memory.
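As a quick sanity check (a sketch to go with the above, not code from the original answer), you can also list the devices TensorFlow actually sees; if no /device:GPU:0 entry shows up, the graph stays on the CPU no matter what the tf.device('/gpu:0') block says:

from tensorflow.python.client import device_lib

# Print every device visible to TensorFlow, e.g. /device:CPU:0 and /device:GPU:0.
for device in device_lib.list_local_devices():
    print(device.name)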

I have run into this problem for the following reasons:

  1. The model is too large, so reduce the batch size.

  2. There is a variable with no gradient:

     # Log every (gradient, variable) group to spot entries whose gradient is None.
     clone_grads = optimizer.compute_gradients(total_clone_loss)
     for grad_and_vars in zip(*clone_grads):
         tf.logging.info("clone_grads " + str(grad_and_vars))
    

    It prints:

    INFO:tensorflow:clone_grads ((..., ...),)
    INFO:tensorflow:clone_grads ((None, ...),)
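If it is the second case, one common way out (just a sketch, reusing total_training_loss and gradient_clip_threshold from the question rather than code from this answer) is to drop the (gradient, variable) pairs whose gradient is None before clipping and applying:

grads_and_vars = optimizer.compute_gradients(total_training_loss)
# A None gradient means the variable is not reachable from the loss in the graph;
# keep only the pairs that actually received a gradient.
grads_and_vars = [(g, v) for g, v in grads_and_vars if g is not None]
gradients, variables = zip(*grads_and_vars)
gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
optimize = optimizer.apply_gradients(zip(gradients, variables))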
