<p>There is a way to see exactly how the values of all the weights and biases change over time: you can use a Keras callback to record the weight values at each training epoch. For example, with a model like this</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(16, input_shape=train_inp_s.shape[1:]), Dense(12), Dense(6), Dense(1)])
</code></pre>
<p>add the callback via the <code>callbacks</code> keyword argument when fitting:</p>
<pre><code>gw = GetWeights()
model.fit(X, y, validation_split=0.15, epochs=10, batch_size=100, callbacks=[gw])
</code></pre>
<p>where the callback is defined by</p>
<pre><code>import numpy as np
from keras.callbacks import Callback

class GetWeights(Callback):
    # Keras callback which collects values of weights and biases at each epoch
    def __init__(self):
        super(GetWeights, self).__init__()
        self.weight_dict = {}

    def on_epoch_end(self, epoch, logs=None):
        # this function runs at the end of each epoch

        # loop over each layer and get weights and biases
        for layer_i in range(len(self.model.layers)):
            w = self.model.layers[layer_i].get_weights()[0]
            b = self.model.layers[layer_i].get_weights()[1]
            print('Layer %s has weights of shape %s and biases of shape %s' % (
                layer_i, np.shape(w), np.shape(b)))

            # save all weights and biases inside a dictionary
            if epoch == 0:
                # create array to hold weights and biases
                self.weight_dict['w_' + str(layer_i + 1)] = w
                self.weight_dict['b_' + str(layer_i + 1)] = b
            else:
                # append new weights to previously-created weights array
                self.weight_dict['w_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['w_' + str(layer_i + 1)], w))
                # append new biases to previously-created biases array
                self.weight_dict['b_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['b_' + str(layer_i + 1)], b))
</code></pre>
<p>This callback builds a dictionary containing all the layer weights and biases, keyed by layer number, so you can see how they change over time as the model trains. Note that the shape of each weight and bias array depends on the shape of the corresponding model layer. One weight array and one bias array are saved for every layer in the model; the third axis (depth) shows their evolution over time.</p>
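<p>To see the stacking mechanics in isolation, here is a minimal sketch (with random arrays standing in for the real layer weights) of how the repeated <code>np.dstack</code> calls grow that depth axis by one slice per epoch:</p>
<pre><code>import numpy as np

epochs = 10
w_shape = (5, 16)  # (inputs, units), matching the first Dense layer above

history = None
for epoch in range(epochs):
    w = np.random.rand(*w_shape)  # stand-in for layer.get_weights()[0]
    if epoch == 0:
        history = w
    else:
        # dstack promotes 2-D arrays to 3-D and concatenates along axis 2
        history = np.dstack((history, w))

print(history.shape)  # (5, 16, 10): one slice per epoch
</code></pre>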
<p>Here we used 10 epochs and a model with layers of 16, 12, 6, and 1 neurons:</p>
<pre><code>for key in gw.weight_dict:
    print(str(key) + ' shape: %s' % str(np.shape(gw.weight_dict[key])))
w_1 shape: (5, 16, 10)
b_1 shape: (1, 16, 10)
w_2 shape: (16, 12, 10)
b_2 shape: (1, 12, 10)
w_3 shape: (12, 6, 10)
b_3 shape: (1, 6, 10)
w_4 shape: (6, 1, 10)
b_4 shape: (1, 1, 10)
</code></pre>
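<p>Since these are plain NumPy arrays, slicing along the third axis gives the trajectory of any individual parameter across epochs. A hypothetical example, using a random array in place of the real <code>gw.weight_dict['w_1']</code>:</p>
<pre><code>import numpy as np

# stand-in for gw.weight_dict['w_1'] after 10 epochs: (inputs, units, epochs)
w_1 = np.random.rand(5, 16, 10)

# trajectory of the weight connecting input 0 to neuron 3, one value per epoch
trajectory = w_1[0, 3, :]
print(trajectory.shape)  # (10,)
</code></pre>
<p>Passing such a trajectory to e.g. <code>matplotlib.pyplot.plot</code> gives a per-weight training curve.</p>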