How do I implement a bidirectional wrapper in the functional API?

Published 2024-04-19 05:48:34


Does a bidirectional layer connect the encoder to the decoder, or the decoder to the decoder? Below are the three encoder LSTMs, whose states are fed into the decoder that follows.

# encoder layers (maxLen, vocab_size and embed_layer are defined earlier)
input_context = Input(shape=(maxLen,), dtype='int32', name='input_context')
input_ctx_embed = embed_layer(input_context)
encoder_lstm, h1, c1 = LSTM(256, return_state=True, return_sequences=True)(input_ctx_embed)
encoder_lstm2, h2, c2 = LSTM(256, return_state=True, return_sequences=True)(encoder_lstm)
_, h3, c3 = LSTM(256, return_state=True)(encoder_lstm2)
encoder_states = [h1, c1, h2, c2, h3, c3]

# decoder layers
input_target = Input(shape=(maxLen,), dtype='int32', name='input_target')
input_tar_embed = embed_layer(input_target)
# each decoder LSTM uses the final states of the matching encoder LSTM
# as its initial state
decoder_lstm, context_h, context_c = LSTM(256, return_state=True, return_sequences=True)(
    input_tar_embed, initial_state=[h1, c1])
decoder_lstm2, context_h2, context_c2 = LSTM(256, return_state=True, return_sequences=True)(
    decoder_lstm, initial_state=[h2, c2])
final, context_h3, context_c3 = LSTM(256, return_state=True, return_sequences=True)(
    decoder_lstm2, initial_state=[h3, c3])
dense_layer = Dense(vocab_size, activation='softmax')
output = TimeDistributed(dense_layer)(final)
# output = Dropout(0.3)(output)
model = Model([input_context, input_target], output)
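As a reference for the question in the title, here is a minimal, self-contained sketch (my own toy example, not the poster's model) of wrapping an encoder LSTM in keras.layers.Bidirectional within the functional API. With return_state=True the wrapper returns five tensors, and the forward/backward states can be concatenated to seed a decoder of twice the width:

```python
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, Bidirectional, Concatenate
from tensorflow.keras.models import Model

inp = Input(shape=(10,), dtype='int32')
x = Embedding(input_dim=50, output_dim=16)(inp)
# A Bidirectional LSTM with return_state=True yields 5 tensors:
# sequence output, forward h, forward c, backward h, backward c.
out, fh, fc, bh, bc = Bidirectional(
    LSTM(32, return_state=True, return_sequences=True))(x)
# To seed a 64-unit decoder LSTM, concatenate forward and backward states.
state_h = Concatenate()([fh, bh])  # shape (batch, 64)
state_c = Concatenate()([fc, bc])
model = Model(inp, [out, state_h, state_c])

o, h, c = model.predict(np.zeros((2, 10), dtype='int32'), verbose=0)
print(o.shape, h.shape, c.shape)  # (2, 10, 64) (2, 64) (2, 64)
```

Note that the decoder seeded this way must have 512 units if the encoder LSTMs above keep 256, since the bidirectional states are twice as wide.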

1 answer

Answered 2024-04-19 05:48:34

I'm not sure where the bidirectional layer is in your code, because I don't see one. If you want to build a bidirectional RNN structure from keras.layers.LSTM() without using keras.layers.Bidirectional(), note that keras.layers.LSTM() has an argument called go_backwards, which defaults to False; setting it to True makes the LSTM process the sequence in reverse. If you are simply asking where to put a bidirectional LSTM in an encoder-decoder structure, then my answer is: put it wherever it makes your model better.

Let me know if I've mixed anything up.
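To illustrate the go_backwards flag mentioned in the answer, here is a small sketch (my own example) of a plain LSTM that reads its input in reverse without the Bidirectional wrapper:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

inp = Input(shape=(5, 3))
# go_backwards=True: the layer consumes timesteps from last to first.
# Note that the returned sequence is also emitted in that reversed order.
out = LSTM(8, return_sequences=True, go_backwards=True)(inp)
model = Model(inp, out)

y = model.predict(np.random.rand(1, 5, 3).astype('float32'), verbose=0)
print(y.shape)  # (1, 5, 8)
```

Bidirectional() essentially runs two such layers (one forward, one with go_backwards=True) and merges their outputs, which is why using the wrapper is usually simpler than wiring the reversed copy by hand.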
