I'm currently trying to implement a U-Net-like network and I've run into some odd behavior when loading data onto the GPU. Specifically, when I run
model = CNN(NUM_LAYERS)
model.to(DEVICE)
memory usage on the GPU rises to 1063 MiB. This tells me my model takes roughly 1 GB. So far, so good. Then I load a batch of training data onto the GPU, and total GPU memory usage rises to 1093 MiB. That seems right: it tells me the batch of 8 training images is about 30 MB. Now I execute
output = model(input_tensor)
and memory usage shoots up to 21123 MiB! If I increase the batch size to, say, 16, I get
CUDA out of memory (GPU 1; 31.72 GiB total capacity; 29.91 GiB already allocated; 359.75 MiB free; 402.84 MiB cached)
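For reference, a batch's own footprint is just elements times bytes per element. With hypothetical shapes (8 images, 4 layers, 512×512, float32 — substitute the real dimensions), it lands near the ~30 MB observed above:

```python
import numpy as np

# Hypothetical batch: 8 samples, 4 input layers, 512x512 pixels, float32.
# These shapes are made up for illustration; use your actual NUM_LAYERS and RESIZE.
batch = np.zeros((8, 4, 512, 512), dtype=np.float32)
print(batch.nbytes / 2**20, "MiB")  # -> 32.0 MiB
```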
I've tried to work out exactly what happens when I feed a tensor through the model, but I can't see why GPU memory would suddenly grow like this. Am I right that at this point there should only be three things on the GPU: the model, the input tensor, and the output tensor?
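One thing worth checking: a forward pass with autograd enabled also keeps every intermediate activation alive for the backward pass, so the GPU holds far more than model + input + output. A back-of-envelope sketch (the feature-map shape below is hypothetical, purely for illustration):

```python
def tensor_bytes(shape, dtype_bytes=4):
    """Bytes needed for a float32 tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

# One hypothetical saved activation: batch=8, 64 channels, 512x512 spatial dims.
one_map = tensor_bytes((8, 64, 512, 512))
print(one_map / 2**20, "MiB")  # -> 512.0 MiB for a single feature map
```

A U-Net saves maps like this at every encoder and decoder stage, so activations can easily dominate total memory; wrapping inference-only forward passes in `torch.no_grad()` prevents them from being stored.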
By the way, the dataloader looks like this:
class AsyncCNNData(Dataset):
    def __init__(self, list_paths, resize_dims=None, transform=None, device='cpu'):
        'Initialization'
        super().__init__()
        self.list_paths = list_paths
        self.transform = transform
        self.to_tensor = transforms.ToTensor()
        self.device = device
        self.resize_dims = resize_dims

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.list_paths)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Select sample
        sample_path = self.list_paths[index]
        # Load the input layers and pad each one to resize_dims
        fs = cv.FileStorage(sample_path, cv.FILE_STORAGE_READ)
        cti_list = []
        for i in range(NUM_LAYERS):
            img_name = "layer_{0:03d}".format(i)
            img = fs.getNode(img_name).mat()
            diff = np.subtract(self.resize_dims, img.shape) / 2
            img = cv.copyMakeBorder(img, int(diff[0]), int(diff[0]), int(diff[1]), int(diff[1]),
                                    cv.BORDER_CONSTANT, None, 1)
            cti_list.append(img)
        input_tensor = np.stack(cti_list)
        input_tensor = input_tensor[np.newaxis, :, :, :]
        # Load the target frame and pad it the same way
        img = fs.getNode("frame").mat()
        diff = np.subtract(self.resize_dims, img.shape) / 2
        img = cv.copyMakeBorder(img, int(diff[0]), int(diff[0]), int(diff[1]), int(diff[1]),
                                cv.BORDER_CONSTANT, None, 1)
        img = img[:, :, np.newaxis]
        output_tensor = self.to_tensor(img)
        if self.transform is not None:
            input_tensor = self.transform(input_tensor)
        else:
            input_tensor = torch.from_numpy(input_tensor)
        return input_tensor, output_tensor, sample_path
and
yaml_files = pd.read_csv(DATA_PATH).values.flatten().tolist()
data = AsyncCNNData(yaml_files, device=DEVICE, resize_dims=RESIZE)
train_size = int(len(data)*TRAIN_RATIO)
data_train, data_test = random_split(data, [train_size, len(data)-train_size])
dataloader = DataLoader(data_train, batch_size=BATCH_SIZE, shuffle=True)
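Note that `__getitem__` already prepends an axis with `np.newaxis`, and `DataLoader`'s default collation stacks a batch dimension on top of that, so the batches come out 5-D. A standalone numpy sketch of that stacking (shapes assumed for illustration):

```python
import numpy as np

# What __getitem__ returns per sample after np.newaxis: (1, NUM_LAYERS, H, W).
# Here NUM_LAYERS=4 and H=W=64 are hypothetical placeholder values.
sample = np.zeros((1, 4, 64, 64), dtype=np.float32)

# The DataLoader's default collate stacks BATCH_SIZE samples along a new axis:
batch = np.stack([sample] * 8)
print(batch.shape)  # -> (8, 1, 4, 64, 64), i.e. 5-D rather than the usual (N, C, H, W)
```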
Thanks a lot!