I am working on a machine with 3 GPUs and want to train my model with the following code:
with torch.cuda.device(2):
    train_load, val_load = SRD.load_sr_st_dataset(route_img, route_dpt)
    # print(train_load.dataset.__sizeof__())
    sr_stereo = SRD.sr_stereo(max_d=200)
    sr_stereo.cuda()
    optimizer = torch.optim.Adam(sr_stereo.parameters(), lr=0.001)
    criterion = nn.MSELoss(reduction='sum')
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=range(5, 100, 5), gamma=0.5, last_epoch=-1)
    sr_stereo.train()
    for e in range(epoch_num):
        for sample, valid in zip(train_load, val_load):
            l10, l20, r10, r20, depth = SRD.parsedata(sample)
            l10v, l20v, r10v, r20v, depthv = SRD.parsedata(valid)
            out_train = sr_stereo(l10, l20, r10, r20)
            out_val = sr_stereo(l10v, l20v, r10v, r20v)
            optimizer.zero_grad()
            loss_train = criterion(out_train, depth)
            loss_train.backward()  # was loss.backward(): `loss` is undefined
            optimizer.step()
        scheduler.step()  # once per epoch, after optimizer.step()
Two other models are already training on GPU 0 and GPU 1, so I want this one to run on GPU 2.
Instead I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 282.00 MiB (GPU 2; 31.88 GiB total capacity; 29.99 GiB already allocated; 78.81 MiB free; 30.04 GiB reserved in total by PyTorch)
Am I missing something? I cannot understand why this error occurs, because GPU 2 should have enough memory and is not running any other training.
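For what it is worth, the numbers in the error message itself can be checked directly. The allocator is not reporting that the card is idle; it is reporting that this process has already reserved nearly all of GPU 2 and that the next allocation no longer fits. A quick sanity check on the reported figures:

```python
# Figures copied from the RuntimeError above.
GiB = 1024 ** 3
MiB = 1024 ** 2

total = 31.88 * GiB     # GPU 2 total capacity
reserved = 30.04 * GiB  # reserved in total by PyTorch
free = 78.81 * MiB      # free inside the reserved pool
request = 282.00 * MiB  # size of the allocation that failed

# The request exceeds what is left, so the allocation fails.
print(request > free)             # True
print(round((request - free) / MiB, 2))  # shortfall of ~203.19 MiB
```

So "enough memory" at process start does not help: the OOM is raised at the moment the running process itself has filled the card.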
Check the amount of accessible memory by running

nvidia-smi

in a terminal. Clearly you are out of GPU memory: I assume you do not have enough free memory left to load your model on GPU #2.
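Beyond checking with nvidia-smi, one common way to guarantee that a job only ever touches one card is to mask the others before PyTorch is imported; the chosen card then shows up inside the process as cuda:0. A minimal sketch (the index "2" is the GPU from the question; everything after the environment variable is illustrative):

```python
import os

# Must be set before the first `import torch`; once CUDA has
# enumerated the devices, changing this variable has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# From here on, physical GPU 2 is the only device this process can
# see, and it is addressed as "cuda:0":
#   import torch
#   sr_stereo = SRD.sr_stereo(max_d=200).cuda()  # lands on physical GPU 2
```

This removes any chance of the `with torch.cuda.device(2)` context being bypassed by code that hard-codes another device, but it does not create memory: if the process itself allocates more than the card holds, the same OOM occurs.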