I want to implement two callbacks, EarlyStopping and ReduceLearningRateOnPlateau, for a neural network model built with TensorFlow (not Keras).
The example code below shows how I implemented early stopping in my script; I don't know whether it is correct.
# A list to record loss on the validation set
val_buff = []
# If early_stop == True, then terminate the training process
early_stop = False
while icount < maxEpoches:
    '''Shuffle the training set'''
    '''Update the model by using the Adam optimizer over the entire training set'''
    # Evaluate loss on the validation set
    val_loss = self.sess.run(self.loss, feed_dict=feeddict_val)
    val_buff.append(val_loss)
    if icount % ep == 0:
        # Epoch-to-epoch changes in validation loss
        diff = np.array([val_buff[ind] - val_buff[ind - 1] for ind in range(1, len(val_buff))])
        # Count how many times the validation loss went up
        bad = len(diff[diff > 0])
        if bad > 0.5 * len(diff):
            early_stop = True
        if early_stop:
            self.saver.save(self.sess, 'model.ckpt')
            raise OverFlow()
        val_buff = []
    icount += 1
When I train the model and track the loss on the validation set, I find that the loss fluctuates up and down, so it is hard to tell when the model starts to overfit.
Since EarlyStopping and ReduceLearningRateOnPlateau are such common techniques, how should I implement them correctly?
Oscillating error/loss is common. The main problem with implementing an early-stopping or learning-rate-reduction rule is that the validation loss is computed with a relative lag. To deal with this, I suggest the following rule: stop training when the best validation error is at least N epochs old (i.e., no improvement has been seen for N epochs).
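The patience rule above can be sketched as two small framework-agnostic helpers (the class names, `patience`, `min_delta`, and `factor` parameters here are illustrative choices, not part of any existing API). You would call `step(val_loss)` once per epoch from your own training loop:

```python
import numpy as np

class EarlyStopping:
    """Signal a stop when the best validation loss is at least `patience` epochs old."""
    def __init__(self, patience=10, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = np.inf
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

class ReduceLROnPlateau:
    """Scale the learning rate by `factor` when the loss plateaus for `patience` epochs."""
    def __init__(self, lr, factor=0.5, patience=5, min_delta=0.0):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = np.inf
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record this epoch's validation loss; return the (possibly reduced) learning rate."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

Because both helpers compare against the best loss seen so far rather than the previous epoch's loss, an oscillating validation curve does not trigger them as long as a new best value keeps appearing within the patience window.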