I am trying to reproduce the super-resolution GAN (GAN-Super Resolution) from this repository in Google Colab, but every time I run the last block of code, the following error occurs:
RuntimeError Traceback (most recent call last)
<ipython-input-23-b9349075c05d> in <module>()
16
17 gen_out = gen(torch.from_numpy(lr_images).to(cuda).float())
---> 18 _,f_label = disc(gen_out)
19 _,r_label = disc(torch.from_numpy(hr_images).to(cuda).float())
20 d1_loss = (disc_loss(f_label,torch.zeros_like(f_label,dtype=torch.float)))
4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
2280
2281 return torch.batch_norm(
-> 2282 input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled
2283 )
2284
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 10.29 GiB already allocated; 63.81 MiB free; 10.65 GiB reserved in total by PyTorch)
I have already tried reducing the batch size, but it had no effect. How can I fix this?
Changing the batch size from 64 to 1 finally solved the problem. Credit goes to @Farhood ET.
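For reference, below is a minimal sketch of the discriminator step with the smaller batch size, assuming a training loop shaped like the one in the traceback. The tiny gen/disc networks, BCEWithLogitsLoss, and the dummy image arrays are stand-ins, not the repository's actual code; the optional .detach() is a common extra memory saving, not part of the original fix.

# Minimal sketch: discriminator step with BATCH_SIZE = 1 (stand-in models, not the repo's code)
import numpy as np
import torch
import torch.nn as nn

cuda = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tiny stand-in networks; the real SRGAN generator/discriminator are much larger.
gen = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Upsample(scale_factor=4)).to(cuda)
disc = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                     nn.Flatten(), nn.Linear(8, 1)).to(cuda)
disc_loss = nn.BCEWithLogitsLoss()

BATCH_SIZE = 1  # was 64 in the original notebook; 1 fits within Colab's ~11 GiB GPU

# Dummy low-/high-resolution batches shaped like the repo's numpy arrays (assumed shapes).
lr_images = np.random.rand(BATCH_SIZE, 3, 24, 24).astype(np.float32)
hr_images = np.random.rand(BATCH_SIZE, 3, 96, 96).astype(np.float32)

gen_out = gen(torch.from_numpy(lr_images).to(cuda).float())
# .detach() keeps the generator's graph out of the discriminator step,
# which further reduces memory use (optional, on top of the batch-size change).
f_label = disc(gen_out.detach())
r_label = disc(torch.from_numpy(hr_images).to(cuda).float())
d1_loss = disc_loss(f_label, torch.zeros_like(f_label)) \
        + disc_loss(r_label, torch.ones_like(r_label))
d1_loss.backward()

If a batch size of 1 is too slow, gradient accumulation over several size-1 batches before each optimizer step is a common way to recover an effective batch size of 64 without the memory cost.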