<p>I have a model that I have trained for 40 epochs. I kept a checkpoint for each epoch, and I also saved the model with <code>model.save()</code>. The training code is:</p>
<pre><code>from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint

n_units = 1000
model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
# define the checkpoint
filepath="word2vec-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=40, batch_size=50, callbacks=callbacks_list)
</code></pre>
<p>However, when I load the model and train again, it starts from scratch as if it had never been trained. The loss does not continue from where the last training left off.</p>
<p>What confuses me is that when I load the model by redefining the model structure and calling <code>load_weights</code>, <code>model.predict()</code> works well. Therefore, I believe the model weights are loaded:</p>
<pre><code>from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
filename = "word2vec-39-0.0027.hdf5"
model.load_weights(filename)
model.compile(loss='mean_squared_error', optimizer='adam')
</code></pre>
<p>However, when I continue training with</p>
<pre><code>filepath="word2vec-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=40, batch_size=50, callbacks=callbacks_list)
</code></pre>
<p>the loss is as high as it was in the initial state.</p>
<p>I searched and found some examples of saving and loading models:
<a href="http://machinelearningmastery.com/save-load-keras-deep-learning-models/" rel="noreferrer">http://machinelearningmastery.com/save-load-keras-deep-learning-models/</a>
<a href="https://github.com/fchollet/keras/issues/1872" rel="noreferrer">https://github.com/fchollet/keras/issues/1872</a></p>
<p>But none of them works. Can anyone help me? Thank you.</p>
<p><strong>UPDATE</strong></p>
<p><a href="https://stackoverflow.com/questions/42666046/loading-a-trained-keras-model-and-continue-training">Loading a trained Keras model and continue training</a></p>
<p>I tried</p>
<pre><code>from keras.models import load_model

model.save('partly_trained.h5')
del model
model = load_model('partly_trained.h5')
</code></pre>
<p>It works. But when I close Python, reopen it, and <code>load_model</code> again, it fails. The loss is as high as the initial state.</p>
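<p>For reference, a minimal, self-contained sketch of that save/reload round trip (a toy <code>Dense</code> model on random data, purely illustrative; the real model is the LSTM above):</p>
<pre><code>import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# Toy data and model, stand-ins for the real x, y, and LSTM network.
x = np.random.rand(20, 4)
y = np.random.rand(20, 1)

model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(4,)))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x, y, epochs=2, batch_size=5, verbose=0)
model.save('partly_trained.h5')

# Simulate a fresh session: rebuild the model from the file alone.
del model
model = load_model('partly_trained.h5')
model.fit(x, y, epochs=2, batch_size=5, verbose=0)
</code></pre>
<p>Note that <code>load_model</code> restores the architecture, the weights, and the optimizer state from the file, whereas <code>load_weights</code> followed by a fresh <code>compile</code> restores only the weights.</p>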
<p><strong>UPDATE</strong></p>
<p>I tried Yu-Yang's example code. It works. But going back to my code, I still fail.
This is the original training. The second epoch should start with loss = 3.1.</p>
<pre><code>13700/13846 [============================>.] - ETA: 0s - loss: 3.0519
13750/13846 [============================>.] - ETA: 0s - loss: 3.0511
13800/13846 [============================>.] - ETA: 0s - loss: 3.0512Epoch 00000: loss improved from inf to 3.05101, saving model to LPT-00-3.0510.h5
13846/13846 [==============================] - 81s - loss: 3.0510
Epoch 2/60
50/13846 [..............................] - ETA: 80s - loss: 3.1754
100/13846 [..............................] - ETA: 78s - loss: 3.1174
150/13846 [..............................] - ETA: 78s - loss: 3.0745
</code></pre>
<p>I closed Python and reopened it, loaded the model with <code>model = load_model("LPT-00-3.0510.h5")</code>, then trained with</p>
<pre><code>filepath="LPT-{epoch:02d}-{loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=60, batch_size=50, callbacks=callbacks_list)
</code></pre>
<p>The loss starts from 4.54.</p>
<pre><code>Epoch 1/60
50/13846 [..............................] - ETA: 162s - loss: 4.5451
100/13846 [..............................] - ETA: 113s - loss: 4.3835
</code></pre>