RuntimeError: input must have 2 dimensions, got 3

Posted 2024-03-29 02:09:42


I'm trying to feed a series of 2-tuples into an LSTM. The steps involved are:

1. Padding the sequences with pad_sequence

2. Feeding the padded sequences into an embedding layer

3. Packing the output of the embedding layer

4. Feeding the packed sequence into the LSTM (see the sketch after this list).
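
For reference, a minimal sketch of that pad → embed → pack → LSTM pipeline, assuming a recent PyTorch version; the toy sequences, sizes, and variable names below are hypothetical, not taken from the question:

import torch
import torch.nn as nn
import torch.nn.utils.rnn as rnn_utils

# Hypothetical toy batch: three integer-encoded sequences of different lengths
seqs = [torch.tensor([4, 9, 2, 7]), torch.tensor([1, 3]), torch.tensor([5, 5, 8])]
lengths = torch.tensor([len(s) for s in seqs])

padded = rnn_utils.pad_sequence(seqs, batch_first=True)    # (batch=3, max_len=4)
embedding = nn.Embedding(num_embeddings=10, embedding_dim=6)
embedded = embedding(padded)                               # (3, 4, 6)

# enforce_sorted=False (available on recent PyTorch) sorts the batch internally
packed = rnn_utils.pack_padded_sequence(
    embedded, lengths, batch_first=True, enforce_sorted=False)
print(packed.data.shape)                                   # torch.Size([9, 6]) -- 2-D

lstm = nn.LSTM(input_size=6, hidden_size=5, batch_first=True)
out, (h, c) = lstm(packed)                                 # works: packed data is 2-D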

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as autograd
import torch.nn.utils.rnn as rnn_utils


class LSTMClassifier(nn.Module):
    # LSTM initialization
    # static_size is given a default of 3 here (three static features are
    # concatenated in forward); a non-default argument after defaulted ones
    # is a SyntaxError
    def __init__(self, embedding_dim=32, hidden_dim=50, vocab_size=7138, label_size=2, static_size=3, batch_size=32):
        super(LSTMClassifier, self).__init__()
        # Initializing batch size
        self.batch_size = batch_size
        # Setting the hidden layer dimension of the LSTM
        self.hidden_dim = hidden_dim
        # Initializing the embedding layer; embedding_dim-2 presumably leaves
        # room for the freq and time features concatenated on in forward()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim-2)
        # Initializing the LSTM layer with one hidden layer 
        self.lstm = nn.LSTM(((embedding_dim*vocab_size)+static_size), hidden_dim, num_layers=1, batch_first=True)
        # Initializing the linear layer that maps the hidden state to the labels
        self.hidden2label = nn.Linear(hidden_dim, label_size)
        # Initializing the hidden layer
        self.hidden = self.init_hidden()

    # Defining the hidden state of the LSTM
    def init_hidden(self):
        # the first is the hidden h
        # the second is the cell  c
        return (autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim).cuda()),
                autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim).cuda()))

    # Defining the feed forward logic of the LSTM. It contains:
    # 1. The embedding layer
    # 2. The LSTM layer with one hidden layer
    # 3. The softmax layer
    def forward(self, seq, freq, time, static):
        # reset the LSTM hidden state. Must be done before you run a new batch. Otherwise the LSTM will treat
        # a new batch as a continuation of a sequence
        self.hidden = self.init_hidden()

        # Get sequence lengths
        seq_lengths = torch.LongTensor(list(map(len, seq))) # one length per sequence; max length is 59

        # Pad the sequences
        seq = rnn_utils.pad_sequence(seq, batch_first = True)
        freq = rnn_utils.pad_sequence(freq, batch_first = True)
        time = rnn_utils.pad_sequence(time, batch_first = True)
        static = rnn_utils.pad_sequence(static, batch_first = True)

        seq = autograd.Variable(seq)
        freq = autograd.Variable(freq)
        time = autograd.Variable(time)
        static = autograd.Variable(static)

        # This is the pass to the embedding layer. 
        # The sequence is of dimension N and the output is N x Demb
        embeds = self.embeddings(seq)
        embeds = torch.cat((embeds,freq), dim=3)
        embeds = torch.cat((embeds,time), dim=3)
        print(embeds.size()) #torch.Size([32, 59, 7138, 32])

        x = embeds.view(self.batch_size, seq.size()[1], 1,-1)
        print(x.size()) #torch.Size([32, 59, 1, 228416])

        static = static.view(self.batch_size, -1,1,3)
        x = torch.cat([x, static], dim=3)
        print(x.size()) #torch.Size([32, 59, 1, 228419])

        # pack the padded sequence so that paddings are ignored
        x = torch.nn.utils.rnn.pack_padded_sequence(x, seq_lengths, batch_first=True)

        lstm_out, self.hidden = self.lstm(x, self.hidden)

        # unpack the packed padded sequence so that it is ready for prediction
        lstm_out = torch.nn.utils.rnn.pad_packed_sequence(lstm_out, batch_first=True)

        y = self.hidden2label(lstm_out[-1])
        log_probs = F.log_softmax(y, dim=-1)  # dim made explicit; the implicit dim is deprecated
        return log_probs

However, the error I get is:

RuntimeError: input must have 2 dimensions, got 3

I thought an LSTM expects three-dimensional input? I'm confused about why it wants two dimensions here. How can I fix this?
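
To illustrate where the two-vs-three-dimension complaint can come from, here is a minimal sketch with made-up small shapes, mirroring the 4-D (32, 59, 1, 228419) tensor printed above; the shapes and names are hypothetical:

import torch
import torch.nn as nn
import torch.nn.utils.rnn as rnn_utils

x = torch.randn(2, 5, 1, 7)              # 4-D, like the (32, 59, 1, 228419) tensor
lengths = torch.tensor([5, 3])           # already sorted in descending order

packed = rnn_utils.pack_padded_sequence(x, lengths, batch_first=True)
print(packed.data.shape)                 # torch.Size([8, 1, 7]): 3-D packed data

lstm = nn.LSTM(input_size=7, hidden_size=4, batch_first=True)
# lstm(packed)                           # RuntimeError: input must have 2 dimensions, got 3

# A packed sequence carries its data as (total_timesteps, input_size), i.e. 2-D.
# Dropping the singleton dimension before packing satisfies that expectation:
packed_ok = rnn_utils.pack_padded_sequence(x.squeeze(2), lengths, batch_first=True)
out, _ = lstm(packed_ok)                 # runs

In the question's code, squeezing out the singleton third dimension (e.g. x = x.squeeze(2), giving (32, 59, 228419)) would make the packed data 2-D, which also matches the (embedding_dim*vocab_size)+static_size = 228419 input size already declared for self.lstm.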

