TensorFlow error: No gradients provided for any variable, check your graph for ops that do not support gradients


I am trying to use a class derived from TensorFlow's FIFOQueue, in which I have overridden the enqueue function: it takes images, runs them through a small CNN, and enqueues the output of the last dense layer. I then dequeue that output tensor, compute a cost function on it, and try to minimize the cost with the Adam optimizer.

My code runs fine when the cost is computed and minimized inside the enqueue function. However, when I move the loss_op (i.e. my cost) outside the derived class, I get the error: "No gradients provided for any variable, check your graph for ops that do not support gradients".
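The root cause is that queue operations do not support gradients: a dequeue op's only graph input is the queue handle, not the tensor that was enqueued, so there is no edge from the dequeued tensor back to any trainable variable. A minimal standalone sketch (a toy graph, separate from the code below) that reproduces the situation:

import tensorflow as tf

w = tf.Variable(1.0)
q = tf.FIFOQueue(capacity=1, dtypes=tf.float32)
enq = q.enqueue(w * 2.0)   # the enqueued value depends on the variable
deq = q.dequeue()          # but dequeue only takes the queue handle as input

# There is no path from deq back to w, so the gradient is None; this is
# exactly what makes optimizer.minimize() complain that no gradients
# were provided for any variable.
print(tf.gradients(deq, w))  # -> [None]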

Imports

from tensorflow.python.ops.data_flow_ops import FIFOQueue
import tensorflow as tf
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_data_flow_ops

Reading the data

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
Y = mnist.train.labels
X = mnist.train.images

Derived queue

class MyQueue(FIFOQueue):
    def enqueue(self, x, Y, name=None):
        # Y (the labels) is accepted here but never used; only the
        # network output computed from x is enqueued.

        # Reshape flat MNIST vectors into 28x28x1 images
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        # 1st conv_2d layer
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu, name='Q1_c1')
        # 1st max pool layer
        conv1_mp = tf.layers.max_pooling2d(conv1, 2, 2, name='Q1_mp1')
        # 2nd conv_2d layer
        conv2 = tf.layers.conv2d(conv1_mp, 64, 3, activation=tf.nn.relu, name='Q1_c2')
        # 2nd max pool layer
        conv2_mp = tf.layers.max_pooling2d(conv2, 2, 2, name='Q1_mp2')
        # Flatten
        flat = tf.contrib.layers.flatten(conv2_mp)
        # Dense 1 (the reshape to [-1, 1600] is redundant after flatten,
        # but kept from the original code)
        dense_1 = tf.layers.dense(tf.reshape(flat, [-1, 1600]), 1024, name='Q2_D1')
        # Dropout (note: in tf.layers.dropout, rate is the fraction of
        # units dropped, so rate=0.8 keeps only 20% of activations)
        drop = tf.layers.dropout(dense_1, rate=0.8, training=True, name='Q2_Dp')
        # Output layer, one logit per class
        out = tf.layers.dense(drop, n_classes, name='Q2_Op')

        # Enqueue the network output instead of the raw input
        vals = out

        # Rest of the enqueue operation, unchanged from FIFOQueue.enqueue
        with ops.name_scope(name, "%s_enqueue" % self._name,
                            self._scope_vals(vals)) as scope:
            vals = self._check_enqueue_dtypes(vals)
            # NOTE(mrry): Not using a shape function because
            # we need access to the `QueueBase` object.
            for val, shape in zip(vals, self._shapes):
                val.get_shape().assert_is_compatible_with(shape)

            if self._queue_ref.dtype == _dtypes.resource:
                return gen_data_flow_ops.queue_enqueue_v2(
                    self._queue_ref, vals, name=scope)
            else:
                return gen_data_flow_ops.queue_enqueue(
                    self._queue_ref, vals, name=scope)

Main

q_pred = MyQueue(capacity=1, dtypes=tf.float32)
enqueue_op = q_pred.enqueue(X, Y)
data_pred = q_pred.dequeue()

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    sess.run(enqueue_op)

    # The dequeued tensor holding the network output pushed by enqueue()
    out = data_pred

    # Calculating the cost
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        logits=out, labels=Y), name='Q2_loss')

    # Adam optimizer
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)

    # Write the graph for TensorBoard
    writer = tf.summary.FileWriter("logs/MyDerivedQueue", sess.graph)

    ####### ERROR LINE ###################
    # Minimizing the cost fails here with "No gradients provided for
    # any variable": "out" comes from a dequeue op, which cuts the
    # gradient path to every variable created inside enqueue().
    train_op = optimizer.minimize(cost)

    correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
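A quick way to confirm the diagnosis before calling minimize() is to ask for the gradients directly (a debugging sketch added here, not part of the original code):

# Every entry comes back None, because no trainable variable is
# reachable from the cost through the dequeue op.
grads = tf.gradients(cost, tf.trainable_variables())
print(grads)  # [None, None, ...]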

1 Answer

After a lot of trial and error, I have concluded that this will not work, because backpropagation is not under our control here: the dequeued tensor is cut off from the variables that produced it. When using multiple GPUs, each GPU produces its own feed-forward output, and at backpropagation time there is no way to know which weights/parameters should be updated.
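The usual workaround is to let the queue carry only the input data and to build the network and the loss in the main graph, so that minimize() can trace a gradient path to every variable. A sketch along those lines (build_net is a hypothetical helper; X, Y, n_input and n_classes are reused from the question):

import tensorflow as tf

def build_net(x):
    # Same architecture that the question builds inside MyQueue.enqueue,
    # but constructed in the main graph
    x = tf.reshape(x, shape=[-1, 28, 28, 1])
    conv1 = tf.layers.max_pooling2d(
        tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu), 2, 2)
    conv2 = tf.layers.max_pooling2d(
        tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu), 2, 2)
    dense = tf.layers.dense(tf.contrib.layers.flatten(conv2), 1024)
    return tf.layers.dense(dense, n_classes)

q = tf.FIFOQueue(capacity=1, dtypes=tf.float32)  # the queue holds raw images only
enqueue_op = q.enqueue(X)
images = q.dequeue()
images.set_shape([None, n_input])  # dequeue loses the static shape information

logits = build_net(images)  # every variable is now reachable from the loss
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

Here the gradient computation simply stops at the dequeued images (which is fine, since inputs need no gradient), while the conv/dense variables sit downstream of the queue and receive gradients normally.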
