Keras Input and Embedding Layers

Posted 2024-03-28 11:22:36


I'm trying to build a TV-script generation model, and when I run the following code I get errors from the Input layer and the Embedding layer. The model runs fine without those two lines. Can anyone help me fix these errors?

embedding = 300
lstm_size = 128
vocab_size = len(vocab) #8420
seq_len = 100


model = Sequential()
model.add(Input((None, )))
model.add(Embedding(inp, input_dim = vocab_size, output_dim = embedding, 
input_length = 1000))
model.add(LSTM(lstm_size, return_sequences = True, return_state = True))
model.add(LSTM(lstm_size, return_sequences = True, return_state = True))
model.add(LSTM(lstm_size, return_sequences = True, return_state = True))
model.add(Flatten())
model.add(Dense(vocab_size))

TypeError                                 Traceback (most recent call last)
<ipython-input-66-695a9250515c> in <module>
 19 #model = Model(inp, out)
 20 model = Sequential()
---> 21 model.add(Input((None, )))
 22 model.add(Embedding(inp, input_dim = vocab_size, output_dim = embedding, input_length = 1000))
 23 model.add(LSTM(lstm_size, return_sequences = True, return_state = True))

~\Anaconda3\lib\site-packages\tensorflow\python\training\checkpointable\base.py in _method_wrapper(self, *args, **kwargs)
440     self._setattr_tracking = False  # pylint: disable=protected-access
441     try:
--> 442       method(self, *args, **kwargs)
443     finally:
444       self._setattr_tracking = previous_value  # pylint: disable=protected-access

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\sequential.py in add(self, layer)
143       raise TypeError('The added layer must be '
144                       'an instance of class Layer. '
--> 145                       'Found: ' + str(layer))
146     self.built = False
147     set_inputs = False

TypeError: The added layer must be an instance of class Layer. Found: Tensor("input_37:0", shape=(?, ?), dtype=float32)


This error comes from the Input layer. And:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-67-3c663f8df357> in <module>
 20 model = Sequential()
 21 #model.add(Input((None, )))
---> 22 model.add(Embedding(inp, input_dim = vocab_size, output_dim = embedding, input_length = 1000))
 23 model.add(LSTM(lstm_size, return_sequences = True, return_state = True))
 24 model.add(LSTM(lstm_size, return_sequences = True, return_state = True))

TypeError: __init__() got multiple values for argument 'input_dim'

This one comes from the Embedding layer.

1 Answer

Posted 2024-03-28 11:22:36

Input() does not return a Layer object; that's why you get the first error. You don't need to add anything like that to Sequential() — Embedding() can be the first layer.

The second error occurs because you pass inp positionally. The first positional parameter of Embedding is input_dim, so it should receive either inp or vocab_size, but not both.
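This collision is plain Python keyword-argument behavior, not anything Keras-specific. A minimal sketch, where `fake_embedding` is a hypothetical stand-in for `Embedding.__init__`, not the real Keras API:

```python
# Stand-in for Embedding.__init__: input_dim is the first positional parameter.
def fake_embedding(input_dim, output_dim, input_length=None):
    return (input_dim, output_dim, input_length)

# Passing a positional value AND input_dim= gives that parameter two values.
try:
    fake_embedding(8420, input_dim=8420, output_dim=300)
    message = ""
except TypeError as e:
    message = str(e)

# message now reports "multiple values for argument 'input_dim'"
```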

Basically:

embedding = 300
lstm_size = 128
vocab_size = len(vocab) #8420
seq_len = 100


model = Sequential()
model.add(Embedding(vocab_size, embedding, input_length = 1000))
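Putting both fixes together, a complete model might look like the sketch below. Note two assumptions about the poster's intent: `return_state=True` is dropped, because an LSTM that returns its states emits multiple tensors and cannot feed the next layer in a Sequential model, and the last LSTM returns only its final step, so Flatten is no longer needed before the Dense output:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

embedding = 300
lstm_size = 128
vocab_size = 8420   # stand-in for len(vocab)
seq_len = 100

model = Sequential()
# Embedding is the first layer; no separate Input tensor is added.
model.add(Embedding(vocab_size, embedding))
# return_sequences=True passes the full sequence to the next LSTM;
# return_state=True is dropped so each layer emits a single tensor.
model.add(LSTM(lstm_size, return_sequences=True))
model.add(LSTM(lstm_size, return_sequences=True))
model.add(LSTM(lstm_size))  # last LSTM returns only the final time step
model.add(Dense(vocab_size, activation='softmax'))

# One forward pass on dummy token ids to confirm the shapes line up.
batch = np.random.randint(0, vocab_size, size=(2, seq_len))
out = model(batch)  # shape: (2, vocab_size)
```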
