The original code is shown below; it causes my computer to run out of memory.
from keras.layers import Input, LSTM, Dense, Concatenate
from keras.models import Model

encoder_inputs = Input(shape=input_shape)
encoder = LSTM(lstm_dim, return_state=True, unroll=unroll)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
states = Concatenate(axis=-1)([state_h, state_c])

# Bottleneck: compress the concatenated LSTM states into a latent vector
neck = Dense(latent_dim, activation="relu")
neck_outputs = neck(states)

# Expand the latent vector back into initial h/c states for the decoder
decode_h = Dense(lstm_dim, activation="relu")
decode_c = Dense(lstm_dim, activation="relu")
state_h_decoded = decode_h(neck_outputs)
state_c_decoded = decode_c(neck_outputs)
encoder_states = [state_h_decoded, state_c_decoded]

decoder_inputs = Input(shape=input_shape)
decoder_lstm = LSTM(lstm_dim, return_sequences=True, unroll=unroll)
decoder_outputs = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(output_dim, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model: it takes the training vector in two places and
# predicts one character ahead of the input
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.fit([X_train, X_train], Y_train,
          epochs=50,
          batch_size=256,
          shuffle=True,
          callbacks=[h, rlr],
          validation_data=([X_test, X_test], Y_test))
I feed the same input into the model in two different places, which matters for the problem I am having. To work around the memory issue I started using fit_generator() to see if it would help, but I ran into problems with how to supply the two inputs.
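For reference, a two-input model like the one above can be fed from a generator that yields batches in the ([encoder_input, decoder_input], target) format. This is only a minimal sketch: the function name two_input_generator is hypothetical, and the shuffling/slicing logic is an assumption, not code from the question.

```python
import numpy as np

def two_input_generator(X, Y, batch_size=256):
    """Yield ([X_batch, X_batch], Y_batch) forever, as Keras generators must."""
    n = len(X)
    while True:
        idx = np.random.permutation(n)  # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # The same batch is fed to both model inputs, mirroring
            # model.fit([X_train, X_train], Y_train) above
            yield [X[b], X[b]], Y[b]
```

With such a generator, training would look roughly like model.fit_generator(two_input_generator(X_train, Y_train), steps_per_epoch=len(X_train) // 256, epochs=50, ...), so that only one batch at a time is held in memory.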