I have two models with the same parameters, both trained on the MNIST dataset. The first is trained using model.fit() and the second using model.train_on_batch(). The second model gives lower accuracy. I would like to know what causes this and how to fix it.
Data preparation:
import keras
from keras import backend as K
from keras.datasets import mnist

batch_size = 150
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
Model 1 (same parameters as model 2 below, trained with model.fit()):
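A minimal sketch of the model 1 training step, assuming (per the question) the same architecture and compile settings as model2 shown below, trained with fit():

# assumed: model1 is built and compiled identically to model2 below
model1.fit(x_train, y_train,
           batch_size=batch_size,
           epochs=epochs,
           verbose=1,
           validation_data=(x_test, y_test))
score = model1.evaluate(x_test, y_test, verbose=0)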
Model 1 accuracy:
Test loss: 0.023489486496470636 Test accuracy: 0.9924
Model 2:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model2 = Sequential()
model2.add(Conv2D(32, kernel_size=(3, 3),
                  activation='relu',
                  input_shape=input_shape))
model2.add(Conv2D(64, (3, 3), activation='relu'))
model2.add(Conv2D(128, (3, 3), activation='relu'))
model2.add(Conv2D(256, (3, 3), activation='relu'))
model2.add(Conv2D(128, (3, 3), activation='relu'))
model2.add(Conv2D(64, (3, 3), activation='relu'))
model2.add(Conv2D(64, (3, 3), activation='relu'))
model2.add(Conv2D(32, (3, 3), activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(Dropout(0.25))
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(num_classes, activation='softmax'))
model2.compile(loss=keras.losses.categorical_crossentropy,
               optimizer=keras.optimizers.Adadelta(),
               metrics=['accuracy'])
batch_size2 = 150
epochs2 = 12
step_epoch = x_train.shape[0] // batch_size2
def next_batch_train(i):
    return x_train[i:i+batch_size2, :, :, :], y_train[i:i+batch_size2, :]
iter_num = 0
epoch_num = 0
model_outputs = []
loss_history = []
while epoch_num < epochs2:
    while iter_num < step_epoch:
        x, y = next_batch_train(iter_num)
        loss_history += model2.train_on_batch(x, y)
        iter_num += 1
    print("EPOCH {} FINISHED".format(epoch_num + 1))
    epoch_num += 1
    iter_num = 0  # reset counter
score = model2.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Model 2 accuracy:
Test loss: 0.5577236003954947 Test accuracy: 0.9387
Four sources of the difference:

1. fit() uses shuffle=True by default, and that includes the very first epoch (and all subsequent ones).
2. You do not fix a random seed, so the two models start from different initial weights (hence the updated seed function below).
3. You have step_epoch batches per epoch, but iterate over only step_epoch - 1 of them; change < to <=.
4. Your next_batch_train slicing is badly off; here is what it is doing versus what it needs to be doing:

x_train[0:128] > x_train[1:129] > x_train[2:130] > ...   (what it does)
x_train[0:128] > x_train[128:256] > x_train[256:384] > ...   (what it should do)

To remedy this, include a shuffling step in model2's training loop, or use fit with shuffle=False (not recommended). Also, a tip: {64, 128, 256, 128, 64} Conv2D filters is a rather poor arrangement; what you are doing is upsampling massively, in a sense "manufacturing data". If you are going to use more filters, also increase their strides proportionally so that the total tensor size between layers stays the same (or smaller).

All the fixes mentioned above, plus an updated seed function, are below; run for 1 epoch, since 12 takes too long, and if 1 works, so will 12. Keep your original model if you wish, but I suggest testing with the one below, as it is noticeably faster.
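A minimal sketch of those fixes, assuming a NumPy/TensorFlow-based seed reset and corrected i * batch_size2 slicing (an illustration, not the answer's exact code):

import random
import numpy as np
import tensorflow as tf

def reset_seeds(seed=1):
    # fix every relevant RNG so both training schemes start from the
    # same initial weights
    np.random.seed(seed)
    random.seed(seed)
    tf.set_random_seed(seed)  # tf.random.set_seed(seed) on TF2

def next_batch_train(i):
    # advance by a full batch per step instead of by a single sample
    start = i * batch_size2
    return x_train[start:start + batch_size2], y_train[start:start + batch_size2]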
Better alternative: use shuffling
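A sketch of such a loop, assuming np.random.permutation is used to reshuffle samples and labels in unison (again an illustration, not the original answer's code):

for epoch_num in range(epochs2):
    for i in range(step_epoch):
        start = i * batch_size2
        loss_history += model2.train_on_batch(
            x_train[start:start + batch_size2],
            y_train[start:start + batch_size2])
    # reshuffle data and labels together after each epoch; note this
    # leaves the very first epoch unshuffled
    idx = np.random.permutation(len(x_train))
    x_train, y_train = x_train[idx], y_train[idx]
    print("EPOCH {} FINISHED".format(epoch_num + 1))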
Note that this does not guarantee your results will agree with fit(), since fit() may shuffle differently (even with a random seed), but the implementation itself is correct. The above also does not shuffle before the first epoch (easy to change).

One difference I noticed between the two models is that in the second one you never reshuffle the training data after each epoch, whereas .fit() shuffles the training data by default.
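For comparison purposes, fit()'s per-epoch shuffling can be switched off via its shuffle argument, making it behave more like the sequential train_on_batch loop (not recommended as an actual fix):

model1.fit(x_train, y_train,
           batch_size=batch_size,
           epochs=epochs,
           shuffle=False)  # disable the default per-epoch shuffle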