Cannot create a new computation graph after backpropagation in multi-task training

Posted 2024-06-16 13:41:44


Background: I am using DQN and DDPG to solve two tasks at the same time. The state (input) of both DQN and DDPG consists of two parts: one part is the environment state, and the other is a state extracted from the environment by a CNN+LSTM. The two parts are concatenated inside forward_dqn(), forward_actor(), and forward_critic() respectively.

Question 1: I backpropagate loss_dqn, loss_ddpg_actor, and loss_ddpg_critic in sequence, and during the backward pass of loss_ddpg_actor I get the error "Trying to backward through the graph a second time, but the buffers have already been freed". Since the computation graph is freed after loss_dqn.backward(), I run the CNN+LSTM forward pass again before computing the actor loss. Why can't a new computation graph be created? Thanks.

Model (env: the environment):

output_cnnlstm = cnnlstm.forward(env)
DQN_output = dqn.forward(cat(output_cnnlstm, state_env))
Actor_output = actor.forward(cat(output_cnnlstm, state_env))
Critic_output = critic.forward(cat(output_cnnlstm, state_env))
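For concreteness, here is a minimal runnable sketch of this structure. The module types and layer sizes below are made up for illustration, not my real network:

    import torch
    import torch.nn as nn

    class CNNLSTM(nn.Module):
        """Extracts a state vector from the raw environment signal."""
        def __init__(self, d_out=16):
            super().__init__()
            self.conv = nn.Conv1d(1, 4, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(input_size=4, hidden_size=d_out, batch_first=True)

        def forward(self, env):                        # env: (batch, seq_len, 1)
            c = self.conv(env.transpose(1, 2)).transpose(1, 2)
            out, _ = self.lstm(c)
            return out[:, -1, :]                       # last time step as the extracted state

    cnnlstm = CNNLSTM()
    dqn    = nn.Linear(16 + 8, 4)                      # 8 = size of state_env, 4 = number of actions
    actor  = nn.Linear(16 + 8, 2)                      # 2 = action dimension
    critic = nn.Linear(16 + 8, 1)

    env, state_env = torch.randn(32, 10, 1), torch.randn(32, 8)
    output_cnnlstm = cnnlstm(env)
    dqn_output     = dqn(torch.cat([output_cnnlstm, state_env], dim=1))
    actor_output   = actor(torch.cat([output_cnnlstm, state_env], dim=1))
    critic_output  = critic(torch.cat([output_cnnlstm, state_env], dim=1))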

Code 1 (Q1):

    # dqn
    # forward: cnnlstm
    s_cnnlstm_out, _, _ = self.model.forward_cnnlstm(s_cnnlstm, flag_optim=True)
    # forward: dqn
    q_eval_dqn = self.model.forward_dqn_eval(s_dqn, s_cnnlstm_out).gather(1, a_dqn)
    q_next_dqn = self.model.forward_dqn_target(s_dqn_next, s_cnnlstm_out).detach()
    q_target_dqn = r + GAMMA_DQN * q_next_dqn.max(dim=1)[0].reshape(SIZE_BATCH * SIZE_TRANSACTION, 1)
    # optimize: dqn
    loss_dqn = self.loss_dqn(q_eval_dqn, q_target_dqn)
    self.optimizer_cnnlstm.zero_grad()
    self.optimizer_dqn.zero_grad()
    loss_dqn.backward()
    self.optimizer_cnnlstm.step()
    self.optimizer_dqn.step()
    loss_dqn = loss_dqn.detach().numpy()
    # ddpg
    # actor
    # forward: cnnlstm
    s_cnnlstm_out, _, _ = self.model.forward_cnnlstm(s_cnnlstm, flag_optim=True)
    # forward: ddpg: actor
    a_eval_ddpg = self.model.forward_actor_eval(s_ddpg, s_cnnlstm_out)
    # optimize: ddpg: cnnlstm + actor
    loss_ddpg_actor = - self.model.forward_critic_eval(s_ddpg, a_eval_ddpg, s_cnnlstm_out).mean()
    self.optimizer_cnnlstm.zero_grad()
    self.optimizer_actor.zero_grad()
    loss_ddpg_actor.backward()
    self.optimizer_cnnlstm.step()
    self.optimizer_actor.step()
    loss_ddpg_actor = loss_ddpg_actor.detach().numpy()
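For reference, here is a minimal standalone sketch (toy linear layers, not my real CNN+LSTM) of how I understand the graph-freeing behaviour: re-running the shared forward pass gives a fresh graph and a second backward works, while reusing a tensor from the already-backpropagated graph reproduces exactly this error:

    import torch
    import torch.nn as nn

    shared = nn.Linear(4, 4)   # plays the role of cnnlstm
    head1  = nn.Linear(4, 1)   # plays the role of dqn
    head2  = nn.Linear(4, 1)   # plays the role of actor

    x = torch.randn(8, 4)

    h = shared(x)
    loss1 = head1(h).mean()
    loss1.backward()           # frees the buffers of the graph through `shared`

    h2 = shared(x)             # fresh forward pass -> fresh graph
    loss2 = head2(h2).mean()
    loss2.backward()           # works

    loss3 = head2(h).mean()    # reuses the old, already-freed graph through `shared`
    loss3.backward()           # RuntimeError: Trying to backward through the graph a second time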

Question 2: I wrote a demo to test the propagation process, and the demo seems to work fine: the loss decreases normally and the test error is low. So I would like to ask what the difference is between the two pieces of code and the two models.

Model:

output_model1 = model1.forward(x)
output_model21 = model21.forward(cat(output_model1, x1))
output_model22 = model22.forward(cat(output_model1, x2))

Compared with the model in Q1, output_model1 corresponds to cnnlstm, output_model21 to DQN, and output_model22 to Actor; a minimal sketch of the demo model class follows.
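The sketch below uses toy layer types and hypothetical sizes (my real demo uses its own dimensions), just to show how forward_task1/forward_task2 share model1:

    import torch
    import torch.nn as nn

    class DemoModel(nn.Module):
        """Toy demo: model1 is the shared trunk, model21/model22 are the two task heads."""
        def __init__(self, d=8):
            super().__init__()
            self.model1  = nn.Linear(d, d)        # plays the role of cnnlstm
            self.model21 = nn.Linear(2 * d, 1)    # plays the role of DQN
            self.model22 = nn.Linear(2 * d, 1)    # plays the role of Actor

        def forward_task1(self, x, x1):
            out1 = self.model1(x)
            return self.model21(torch.cat([out1, x1], dim=1))

        def forward_task2(self, x, x2):
            out1 = self.model1(x)
            return self.model22(torch.cat([out1, x2], dim=1))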

Question 3: In the demo I set a breakpoint after loss1.backward() and before optimizer1.step(). On the one hand, the weights of Model21's linear layer do change as the optimization proceeds; on the other hand, x._grad is a tensor of gradient values while x1._grad is None. So I would like to know whether Model21's parameters are really being optimized, and why x1._grad stays None (see the small check after Code 2).

Code 2 (Q2 and Q3):

for i in range(NUM_OPTIM):
    # optimize task 1
    y1_pred = self.model.forward_task1(x, x1)
    loss1 = self.loss_21(y1_pred, y1)
    self.optimizer1.zero_grad()
    self.optimizer21.zero_grad()
    loss1.backward()
    self.optimizer1.step()
    self.optimizer21.step()
    # optimize task 2
    y2_pred = self.model.forward_task2(x, x2)
    loss2 = self.loss_22(y2_pred, y2)
    self.optimizer1.zero_grad()
    self.optimizer22.zero_grad()
    loss2.backward()
    self.optimizer1.step()
    self.optimizer22.step()
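Regarding Question 3, here is a minimal check with toy tensors (hypothetical shapes, not my real demo data) of how I understand .grad population: only leaf tensors created with requires_grad=True receive a .grad after backward(), which might explain what I see at the breakpoint:

    import torch

    x  = torch.randn(4, 3, requires_grad=True)   # leaf, requires_grad=True
    x1 = torch.randn(4, 3)                       # leaf, requires_grad=False (default)
    w  = torch.nn.Parameter(torch.randn(3, 1))   # module-style parameter

    y = (x @ w + x1 @ w).sum()
    y.backward()

    print(x.grad)    # tensor of gradients
    print(x1.grad)   # None: no gradient is accumulated for this leaf
    print(w.grad)    # tensor of gradients: parameters are updated normally by the optimizer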
