PyTorch: backpropagating multiple losses

Posted 2024-04-26 21:01:02


I want to backpropagate multiple samples, which means calling PyTorch's backward() more than once, and I want to do this at a specific timestep. I am trying it like this:

        # accumulate the policy-gradient loss for the current episode
        losso = 0
        for g, logprob in zip(G, self.action_memory):
            losso += -g * logprob
        self.buffer.append(losso)

        # once past the pre-training phase, replay every buffered loss
        if self.game_counter > self.pre_training_games:
            for element in self.buffer:
                self.policy.optimizer.zero_grad()
                element.backward(retain_graph=True)
                self.policy.optimizer.step()

But I get a runtime error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [91, 9]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
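
For reference, the anomaly detection mentioned in the hint can be switched on at the top of the script to pinpoint the failing operation (a minimal debugging sketch; it slows training considerably):

        import torch

        # Debugging aid suggested by the error message: with anomaly
        # detection enabled, backward() reports which forward operation
        # produced the tensor that failed.
        torch.autograd.set_detect_anomaly(True)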

1 Answer

User
#1 · Posted 2024-04-26 21:01:02

It seems you are using losso in two conflicting ways:
on the one hand, you accumulate each iteration's loss into losso and append it to the buffer;
on the other hand, you call backward() on every buffered element, with optimizer.step() modifying the weights in place between those calls.

Since all the buffered losses share the same computation graph over the same parameters, the first step() changes tensors that the retained graph still needs. This is most likely the cause of your error (the "modified by an inplace operation" version mismatch).
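
A common fix is to combine the buffered losses and backpropagate them in a single pass, so that optimizer.step() (an in-place weight update) never runs between backward() calls on the same graph. A minimal sketch, reusing the attribute names from the question and assuming the buffered losses are scalar tensors:

        # Replay step rewritten: one backward pass over the summed
        # losses, then a single optimizer step; `torch` is assumed to
        # be imported at module level.
        if self.game_counter > self.pre_training_games:
            self.policy.optimizer.zero_grad()
            total_loss = torch.stack(self.buffer).sum()  # combine buffered losses
            total_loss.backward()          # no retain_graph needed anymore
            self.policy.optimizer.step()   # single in-place update afterwards
            self.buffer.clear()            # drop the stale graphs

Alternatively, keep the loop but call backward() on every element first (gradients accumulate across calls) and invoke step() only once at the end.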
