PyTorch: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True


This is the error message I get while working with some synthetic data. I'm a bit confused, because the error persists even though I did what others suggested. Could it be related to the fact that I'm not specifying batches? Would using a PyTorch Dataset alleviate the problem?

Here is my code (I'm new to PyTorch and only just starting to learn it); it should be reproducible:

Data creation:

import numpy as np
import torch

x, y = np.meshgrid(np.random.randn(100), np.random.randn(100))
z = 2 * x + 3 * y + 1.5 * x * y - x ** 2 - y ** 2
X = x.ravel().reshape(-1, 1)
Y = y.ravel().reshape(-1, 1)
Z = z.ravel().reshape(-1, 1)
U = np.concatenate([X, Y], axis=1)
U = torch.tensor(U, requires_grad=True)
Z = torch.tensor(Z, requires_grad=True)
V = []

# build quadratic features for each sample: [x, y, x*x, x*y, y*y]
for i in range(U.shape[0]):
    u = U[i, :]
    u1 = u.view(-1, 1) @ u.view(1, -1)   # outer product u u^T
    u1 = u1.triu()                       # keep the upper triangle
    ones = torch.ones_like(u1)
    mask = ones.triu()
    mask = (mask == 1)
    u2 = torch.masked_select(u1, mask)   # flatten the upper-triangular entries
    u3 = torch.cat([u, u2])              # linear + quadratic terms
    u3 = u3.view(1, -1)
    V.append(u3)

V = torch.cat(V, dim=0)

Training a model:

from torch import nn    
from torch import optim    
net = nn.Sequential(nn.Linear(V.shape[1], 1))    
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(50):  # loop over the dataset multiple times    
    running_loss = 0.0        
    i = 0
    for inputs, labels in zip(V, Z):
        # iterate over one (features, target) pair at a time

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)

        loss.backward(retain_graph=True)

        optimizer.step()

        # print statistics
        running_loss += loss.item()

        i += 1

        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

Error message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-143-2454f4bb70a5> in <module>
     25 
     26 
---> 27         loss.backward(retain_graph = True)
     28 
     29         optimizer.step()

~\Anaconda3\envs\torch\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
    193                 products. Defaults to ``False``.
    194         """
--> 195         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    196 
    197     def register_hook(self, hook):

~\Anaconda3\envs\torch\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97     Variable._execution_engine.run_backward(
     98         tensors, grad_tensors, retain_graph, create_graph,
---> 99         allow_unreachable=True)  # allow_unreachable flag
    100 
    101 

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Can you explain the error and fix the code?


1 Answer

I assume you did not re-run the data-creation code after setting retain_graph=True, since this is in an IPython REPL. Re-running it would clear the error, but in almost all cases setting retain_graph=True is not the appropriate solution anyway.

In your case, the problem is that you set requires_grad=True on U, which means that everything involving U in the data creation is recorded in the computational graph, and when loss.backward() is called, gradients are propagated through all of it back to U. After the first backward, the buffers of those gradient computations are freed, so the second backward fails.
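For illustration (this sketch is mine, not part of the original answer), here is the same failure mode in miniature: a tensor with requires_grad=True is used in a one-off preprocessing step, every loss then shares that part of the graph, and the second backward() hits already-freed buffers:

import torch

u = torch.randn(3, requires_grad=True)   # plays the role of U
features = torch.cat([u, u * u])         # one-off "data creation", recorded in the graph

for step in range(2):
    loss = (features.sum() - 1.0) ** 2   # every loss shares the subgraph through u * u
    loss.backward()                      # second call raises the RuntimeError above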

Neither U nor Z should have requires_grad=True, because they are not being optimised/learned. Only the learned parameters (the ones you give to the optimiser) should have requires_grad=True, and normally you don't have to set that manually either, since nn.Parameter takes care of it automatically.
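As a quick check (my own illustration, not from the answer): the weights of a layer already track gradients, while plain input tensors don't need to:

import torch
from torch import nn

layer = nn.Linear(5, 1)
print(layer.weight.requires_grad)   # True -- nn.Parameter sets this automatically
print(layer.bias.requires_grad)     # True

x = torch.randn(4, 5)               # input data: no requires_grad needed
out = layer(x)
print(out.grad_fn is not None)      # True -- the graph is built from the parameters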

You should also make sure that the tensors created from NumPy data have type torch.float (float32): NumPy floating-point arrays are usually float64, which is rarely necessary and is slower than float32, especially on the GPU:

U = torch.tensor(U, dtype=torch.float)
Z = torch.tensor(Z, dtype=torch.float)
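To see why the explicit dtype matters (illustration only): NumPy defaults to float64, and torch.tensor preserves that unless told otherwise, which then clashes with PyTorch's float32 default for layer weights:

import numpy as np
import torch

a = np.random.randn(3, 2)
print(a.dtype)                                     # float64
print(torch.tensor(a).dtype)                       # torch.float64
print(torch.tensor(a, dtype=torch.float).dtype)    # torch.float32, matches nn.Linear weights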

and remove retain_graph=True from the backward call:

loss.backward()
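Putting both fixes together, a self-contained sketch of the corrected training step (with random stand-in data in place of the meshgrid features):

import numpy as np
import torch
from torch import nn, optim

# data are plain float32 tensors: no requires_grad, correct dtype
V = torch.tensor(np.random.randn(100, 5), dtype=torch.float)
Z = torch.tensor(np.random.randn(100, 1), dtype=torch.float)

net = nn.Sequential(nn.Linear(V.shape[1], 1))
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for inputs, labels in zip(V, Z):
    optimizer.zero_grad()
    loss = criterion(net(inputs), labels)
    loss.backward()          # works every iteration: the graph only spans this step
    optimizer.step()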
