How do I create correctly shaped inputs using a DataLoader?


I am trying to feed an image and a vector into my model as inputs. The image comes out with the correct 4D shape, but the vector does not. The image is 424x512 and the vector has shape (18, 1). After going through the DataLoader I get batches of shape (50, 1, 424, 512) and (50, 18). The model throws an error because it expects the vector batch to be 4D as well. How can I do this? Here is my code:

from os.path import join

import numpy as np
from PIL import Image

# image_files, ref, params and param_filenames are defined elsewhere in the script

def loadTrainingData_B(args):
    fdm = []
    tdm = []
    parameters = []
    for i in image_files[:4]:
        try:
            # Read the raw file, reshape to (424, 512, 9) and keep one channel as an image
            false_dm = np.fromfile(join(ref, i), dtype=np.int32)
            false_dm = Image.fromarray(false_dm.reshape((424, 512, 9)).astype(np.uint8)[:, :, 1])
            fdm.append(false_dm)
            true_dm = np.fromfile(join(ref, i), dtype=np.int32)
            true_dm = Image.fromarray(true_dm.reshape((424, 512, 9)).astype(np.uint8)[:, :, 1])
            tdm.append(true_dm)
            # Look up the parameter row for this file; encode the light-source flag as 1
            pos = param_filenames.index(i)
            param = np.array(params[pos, 1:])
            param = np.where(param == '-point-light-source', 1, param).astype(np.float64)
            parameters.append(param)
        except FileNotFoundError:
            print('[!] File {} not found'.format(i))
    return (fdm, parameters, tdm)

import torch
from torch.utils.data import Dataset
from torchvision import transforms

class Flat_ModelB(Dataset):
    def __init__(self, args, train=True, transform=None):
        self.args = args
        if train == True:
            self.fdm, self.parameters, self.tdm = loadTrainingData_B(self.args)
        else:
            self.fdm, self.parameters, self.tdm = loadTestData_B(self.args)
        self.data_size = len(self.parameters)
        # Note: the transform argument is ignored and always replaced by ToTensor()
        self.transform = transforms.Compose([transforms.ToTensor()])

    def __getitem__(self, index):
        # Returns (input image, parameter vector, target image) as double tensors
        return (self.transform(self.fdm[index]).double(),
                torch.from_numpy(self.parameters[index]).double(),
                self.transform(self.tdm[index]).double())

    def __len__(self):
        return self.data_size
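
For reference, this is roughly how these items come out of the DataLoader (a minimal sketch; the batch size of 50 is taken from the shapes quoted above):

from torch.utils.data import DataLoader

dataset = Flat_ModelB(args)
loader = DataLoader(dataset, batch_size=50, shuffle=True)

fdm, parameters, tdm = next(iter(loader))
print(fdm.shape)         # torch.Size([50, 1, 424, 512])
print(parameters.shape)  # torch.Size([50, 18])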

The error I get is:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 1 5 5, but got 2-dimensional input of size [50, 18] instead
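
The mismatch can be reproduced in isolation: a weight of 32 1 5 5 is a Conv2d with 32 output channels, 1 input channel and a 5x5 kernel, which only accepts (N, C, H, W) input, so the (50, 18) batch is rejected. A minimal sketch:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=2, padding=2)
out = conv(torch.rand(50, 1, 424, 512))  # OK: 4D (N, C, H, W) input
out = conv(torch.rand(50, 18))           # RuntimeError: expects 4D input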

The model is as follows:

class Model_B(nn.Module):
    def __init__(self, config):
        super(Model_B, self).__init__()
        self.config = config
        # CNN layers for fdm
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(16))
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(32))
        self.layer3 = nn.Sequential(
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(32))
        self.layer4 = nn.Sequential(
            nn.ConvTranspose2d(in_channels=32, out_channels=32, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(32))
        self.layer5 = nn.Sequential(
            nn.ConvTranspose2d(in_channels=32, out_channels=16, kernel_size=5, stride=2, padding=2,output_padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(16))
        self.layer6 = nn.Sequential(
            nn.ConvTranspose2d(in_channels=16, out_channels=1, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(1))
        # CNN layer for parameters
        self.param_layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(32))

    def forward(self, x, y):
        out = self.layer1(x)
        out_param = self.param_layer1(y)
        print("LayerParam 1 Output Shape : {}".format(out_param.shape))
        print("Layer 1 Output Shape : {}".format(out.shape))
        out = self.layer2(out)
        print("Layer 2 Output Shape : {}".format(out.shape))
        out = self.layer3(out)
        # out = torch.cat((out, out_param), dim=2)
        print("Layer 3 Output Shape : {}".format(out.shape))
        out = self.layer4(out)
        print("Layer 4 Output Shape : {}".format(out.shape))
        out = self.layer5(out)
        print("Layer 5 Output Shape : {}".format(out.shape))
        out = self.layer6(out)
        print("Layer 6 Output Shape : {}".format(out.shape))
        return out

And the code that accesses the data:

for batch_idx, (fdm, parameters) in enumerate(self.data):
    if self.config.gpu:
        fdm = fdm.to(device)
        parameters = parameters.to(device)
        print('shape of parameters for model a : {}'.format(parameters.shape))

    output = self.model(fdm)
    loss = self.criterion(output, parameters)
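
(Side note: Flat_ModelB.__getitem__ returns three tensors while this loop unpacks two, and Model_B.forward takes both x and y. Once the vector shape is sorted out, the call would presumably look more like the following sketch; using tdm as the target is an assumption based on the autoencoder-style model:)

for batch_idx, (fdm, parameters, tdm) in enumerate(self.data):
    if self.config.gpu:
        fdm = fdm.to(device)
        parameters = parameters.to(device)

    output = self.model(fdm, parameters)  # forward(x, y) takes both inputs
    loss = self.criterion(output, tdm)    # assumed target: the true depth map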

Edit: I think my code is incorrect, because I am trying to apply convolutions to a vector of length 18. I tried replicating the vector to make it (18 x 64) and feeding that in. It still doesn't work, and gives the following output:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 1 5 5, but got 3-dimensional input of size [4, 18, 64] instead

I don't see how I can concatenate a length-18 vector to the output of layer3 if I can't do any of these things.
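
(This second error is a rank problem: the replicated batch (4, 18, 64) is still missing a channel dimension, so Conv2d refuses it. Inserting one with unsqueeze would at least produce the 4D input it expects; a minimal sketch assuming the replicated vector described above:)

vec = torch.rand(4, 18, 64)  # replicated parameter vectors, as in the edit
vec = vec.unsqueeze(1)       # -> (4, 1, 18, 64): 4D, single-channel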


1 Answer

It looks like you are training an autoencoder model and want to parameterize it with some additional vector input at the bottleneck layer. If you want to perform some transformations on it, you have to decide whether you need any spatial dependencies. Given the constant input size (N, 1, 424, 512), the output of layer3 will have shape (N, 32, 53, 64). Depending on the model performance you want, you have several options:

  1. Use nn.Linear with activations to transform the parameter vector. You can then add extra spatial dimensions and repeat this vector at every spatial location:
import torch
import torch.nn as nn

img = torch.rand((1, 1, 424, 512))
vec = torch.rand(1, 18)          # the 18-element parameter vector from the question

layer3_out = model(img)          # "model" here stands for layer1..layer3 of Model_B
N, C, H, W = layer3_out.shape    # (1, 32, 53, 64)

# Encode the parameter vector with a small MLP
param_encoder = nn.Sequential(nn.Linear(18, 30), nn.ReLU(), nn.Linear(30, 10))
param = param_encoder(vec)                                       # (1, 10)
# Add two spatial dims and repeat the vector at every spatial location
param = param.unsqueeze(-1).unsqueeze(-1).expand(N, -1, H, W)    # (1, 10, 53, 64)
encoding = torch.cat([param, layer3_out], dim=1)                 # (1, 42, 53, 64)
  2. Use transposed convolutions to upsample the parameter vector to the size of the layer3 output. This will be harder to implement, though, because you have to compute the exact output shapes to fit (N, 32, 53, 64).
  3. Use nn.Linear (an MLP) to transform the input vector to 2x the number of channels in the layer3 output, then use so-called feature-wise transformations (FiLM) to scale and shift the feature maps of layer3 (see the sketch after this list).
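
A minimal sketch of that third option, reusing vec and layer3_out from the first snippet (the encoder size is illustrative, not prescriptive):

# Map the 18-element vector to a per-channel scale (gamma) and shift (beta)
film = nn.Linear(18, 2 * 32)               # 2x the 32 channels of layer3
gamma, beta = film(vec).chunk(2, dim=1)    # each of shape (N, 32)
gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (N, 32, 1, 1), broadcasts over H, W
beta = beta.unsqueeze(-1).unsqueeze(-1)
out = gamma * layer3_out + beta            # feature-wise scale and shift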

I'd suggest starting with the first option, since it is the simplest to implement, and then trying the others.
