Multi-task regression with PyTorch (problem: identical outputs for all test data)

Published 2024-04-20 13:21:25


I am working on a multi-task regression problem.

Input shape: 200 x 60000; output shape: 200 x 3 (where 200 is the total number of samples and 60000 is the number of features).

So for each data point I have to predict 3 continuous values.
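For context, data of those shapes can be wrapped in a DataLoader like this (a minimal sketch with random stand-in tensors; the variable names and batch size of 20 are my assumptions, not from the question):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(200, 60000)   # 200 samples, 60000 features (random stand-in)
y = torch.randn(200, 3)       # 3 continuous targets per sample

trainloader = DataLoader(TensorDataset(X, y), batch_size=20, shuffle=True)

data, label = next(iter(trainloader))
print(data.shape, label.shape)  # torch.Size([20, 60000]) torch.Size([20, 3])
```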

Sample code:


import torch
import torch.nn as nn
import torch.optim as optim

class Classifier(nn.Module):          # despite the name, this is a regressor
    def __init__(self, input_nodes):
        super(Classifier, self).__init__()
        self.input_nodes = input_nodes

        # shared trunk: input -> 300 -> 100
        self.sharedlayer = nn.Sequential(
            nn.Linear(input_nodes, 300),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(300, 100),
            nn.ReLU(),
            nn.Dropout(),
        )


        # three task-specific heads, each mapping 100 -> 40 -> 20 -> 1
        self.att1 = nn.Sequential(
            nn.Linear(100, 40),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(40, 20),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(20, 1)
        )
        self.att2 = nn.Sequential(
            nn.Linear(100, 40),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(40, 20),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(20, 1)
        )
        self.att3 = nn.Sequential(
            nn.Linear(100, 40),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(40, 20),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(20, 1)
        )

    def forward(self, x):

        h_shared = self.sharedlayer(x)
        out1 = self.att1(h_shared)
        out2 = self.att2(h_shared)
        out3 = self.att3(h_shared)

        return out1, out2, out3

model = Classifier(input_nodes=60000)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

for epoch in range(n_epochs):
    model.train()
    running_loss = 0.0
    for data, label in trainloader:
        out1, out2, out3 = model(data)

        # one MSE term per task; each label column becomes an (N, 1) target
        l1 = criterion(out1, label[:, 0].view(-1, 1))
        l2 = criterion(out2, label[:, 1].view(-1, 1))
        l3 = criterion(out3, label[:, 2].view(-1, 1))

        loss = l1 + l2 + l3
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

Problem: The model always produces the same value for all test data.

Example: suppose there are 3 test samples; the predictions look like:

Output 1: 3.5 3.5 3.5

Output 2: 9.5 9.5 9.5

Output 3: 0.2 0.2 0.2

Can you help me figure out what is wrong here?

Why does it produce the same value for every test sample?
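(A note on common causes, not a guaranteed diagnosis: a constant prediction per head often means the network has collapsed to emitting its output-layer bias, roughly the target mean. With 60000 unscaled features and Adam at lr=0.01 the ReLU trunk can die in the first few updates. A hedged checklist, with assumed variable names:)

```python
import torch

# 1) Standardize features per column; large raw feature magnitudes can
#    blow up the first updates and kill the ReLU trunk.
X = torch.randn(200, 60000) * 50 + 10      # stand-in for the real data
X = (X - X.mean(0, keepdim=True)) / (X.std(0, keepdim=True) + 1e-8)

# 2) Try a much smaller learning rate than 0.01, e.g.:
# optimizer = optim.Adam(model.parameters(), lr=1e-4)

# 3) Switch to eval mode before predicting, so Dropout is disabled:
# model.eval()
# with torch.no_grad():
#     out1, out2, out3 = model(test_data)
```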

