TensorFlow: 'ValueError: No gradients provided for any variable'

Posted 2024-05-29 05:58:05


I'm implementing DeepMind's DQN algorithm in TensorFlow, and I'm hitting this error on the line where I call optimizer.minimize(self.loss):

ValueError: No gradients provided for any variable...

From reading other posts about this error, I gather it means the loss function doesn't depend on any of the tensors used to build the model, but I can't see how that could be the case in my code. The qloss() function clearly depends on a call to the predict() function, which in turn depends on all of the layer tensors for its computation.

The model setup code can be viewed here.
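To make the error condition concrete (a minimal sketch in TF1-style graph mode, not my actual model code): if the value passed to minimize() has no path back to any tf.Variable, every gradient comes back as None and this exact ValueError is raised.

import tensorflow as tf

# Minimal sketch (names are illustrative, not from my model).
w = tf.Variable(1.0)
loss = tf.constant(0.0)   # no dependence on w or any other variable
opt = tf.train.GradientDescentOptimizer(0.1)
train_op = opt.minimize(loss)   # ValueError: No gradients provided ...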


Tags: function, no, self, algorithm, for, tensorflow, error, dqn
1 Answer

Answered 2024-05-29 05:58:05

I found that the problem was that in my qloss() function I was extracting values from the tensors, operating on those values, and returning the result. While the values did depend on the tensors, they weren't tensors themselves, so TensorFlow had no way of knowing that they depended on the tensors in the graph.
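To illustrate the broken pattern (a hypothetical reconstruction, not my original qloss() code): once values are pulled out of the graph with sess.run(), everything computed from them is a plain constant, so the resulting loss has no gradient path back to any variable.

import numpy as np
import tensorflow as tf

# Hypothetical reconstruction of the mistake (TF1 graph mode).
pred_Qs = tf.Variable(np.ones((2, 3), dtype=np.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    q_vals = sess.run(pred_Qs)   # a numpy array: the graph link is cut here
    # Built from plain numbers, this loss is just a constant tensor.
    loss = tf.reduce_mean(tf.square(1.0 - q_vals))
    # minimize() now raises "No gradients provided for any variable".
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)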

I fixed this by changing qloss() so that it operates directly on the tensors and returns a tensor. The new function is below:

import tensorflow as tf

# DISCOUNT and BATCH_SIZE are module-level constants defined elsewhere.
def qloss(actions, rewards, target_Qs, pred_Qs):
    """
    Q-function loss with target freezing - the difference between the observed
    Q value, taking into account the recently received r (while holding future
    Qs at target) and the predicted Q value the agent had for (s, a) at the time
    of the update.

    Params:
    actions   - The action for each experience in the minibatch
    rewards   - The reward for each experience in the minibatch
    target_Qs - The target Q value from s' for each experience in the minibatch
    pred_Qs   - The Q values predicted by the model network

    Returns:
    A tensor with the Q-function loss for each experience, clipped to [-1, 1]
    and then squared.
    """
    ys = rewards + DISCOUNT * target_Qs

    # For each row of pred_Qs in the batch, we want the predicted Q for the
    # action taken at that experience. So we build a 2D tensor of indices
    # [experience#, action#] to gather from the pred_Qs tensor. tf.stack keeps
    # the indices inside the graph (np.dstack on symbolic tensors does not).
    gather_is = tf.stack([tf.range(BATCH_SIZE), actions], axis=1)
    action_Qs = tf.gather_nd(pred_Qs, gather_is)

    losses = ys - action_Qs
    clipped_squared_losses = tf.square(tf.minimum(tf.abs(losses), 1))

    return clipped_squared_losses
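For completeness, here is how the loss can be wired into training (a sketch under assumptions: the placeholder shapes, the stand-in network, the hyperparameter values, and the choice of RMSProp are illustrative, not taken from my model code). The per-experience losses are averaged into a single scalar that still depends on the network variables, which is what minimize() needs.

import tensorflow as tf

# Sketch hyperparameters; in the real code these are module constants.
BATCH_SIZE = 32
NUM_ACTIONS = 4
DISCOUNT = 0.99
LEARNING_RATE = 0.00025

# Stand-in for the model network's output (the real pred_Qs comes from
# the layers in the linked model-setup code).
W = tf.Variable(tf.random_normal([BATCH_SIZE, NUM_ACTIONS]))
pred_Qs = tf.identity(W)

actions_ph   = tf.placeholder(tf.int32,   [BATCH_SIZE])
rewards_ph   = tf.placeholder(tf.float32, [BATCH_SIZE])
target_Qs_ph = tf.placeholder(tf.float32, [BATCH_SIZE])

losses = qloss(actions_ph, rewards_ph, target_Qs_ph, pred_Qs)
loss = tf.reduce_mean(losses)   # scalar, still tied to the variables
train_op = tf.train.RMSPropOptimizer(LEARNING_RATE).minimize(loss)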
