I am running an image classification model, and my problem is that my validation accuracy is higher than my training accuracy.
The data (training/validation) was split randomly. I use InceptionV3 as a pre-trained model. The ratio between training accuracy and validation accuracy stays constant over 100 epochs.
I tried a lower learning rate and an additional batch normalization layer.
Does anyone know what to investigate? Thanks for your help!
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

base_model = InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = Dense(468, activation='relu')(x)
x = Dropout(0.5)(x)
# and a logistic layer for the 468 classes
predictions = Dense(468, activation='softmax')(x)
# this is the model we will train
model = Model(base_model.input, predictions)

# first: train only the top layers (which were randomly initialized),
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
adam = Adam(lr=0.0001, beta_1=0.9)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

# train the model on the new data for a few epochs
batch_size = 64
epochs = 100
img_height = 224
img_width = 224
train_samples = 127647
val_samples = 27865

train_datagen = ImageDataGenerator(
    rescale=1./255,
    #shear_range=0.2,
    zoom_range=0.2,
    zca_whitening=True,
    #rotation_range=0.5,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'AD/AutoDetect/',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    'AD/validation/',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=train_samples // batch_size,
    nb_epoch=epochs,
    validation_data=validation_generator,
    nb_val_samples=val_samples // batch_size)
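One detail worth a second look is the `samples_per_epoch` argument. The arithmetic below is a sketch, assuming Keras 1.x `fit_generator` semantics where `samples_per_epoch` counts *samples* (not batches) and the generator always yields whole batches; under that assumption it reproduces the "2048/1994" seen in the log below.

```python
batch_size = 64
train_samples = 127647

# What the code above passes as samples_per_epoch:
requested = train_samples // batch_size      # 1994, intended as a batch count

# The generator yields whole batches, so the epoch runs
# ceil(1994 / 64) = 32 batches, i.e. 2048 samples.
full_batches = -(-requested // batch_size)   # ceiling division -> 32
processed = full_batches * batch_size        # 32 * 64 = 2048

print(requested, full_batches, processed)
```

If `samples_per_epoch` is indeed meant to be a sample count, each "epoch" here covers only about 2048 of the 127647 training images.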
Found 127647 images belonging to 468 classes.
Found 27865 images belonging to 468 classes.
Epoch 1/100
2048/1994 [==============================] - 48s - loss: 6.2839 - acc: 0.0073 - val_loss: 5.8506 - val_acc: 0.0179
Epoch 2/100
2048/1994 [==============================] - 44s - loss: 5.8338 - acc: 0.0430 - val_loss: 5.4865 - val_acc: 0.1004
Epoch 3/100
2048/1994 [==============================] - 45s - loss: 5.5147 - acc: 0.0786 - val_loss: 5.1474 - val_acc: 0.1161
Epoch 4/100
2048/1994 [==============================] - 44s - loss: 5.1921 - acc: 0.1074 - val_loss: 4.8049 - val_acc: 0.1786
see this answer
This is because you added a dropout layer to your model. Dropout is active during training, where it randomly zeroes units and so holds the training accuracy down, but it is disabled during validation, so the reported validation accuracy can end up higher than the training accuracy.
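The effect described above can be sketched with inverted dropout. This is a minimal NumPy illustration of the train/inference asymmetry, not Keras's actual `Dropout` implementation:

```python
import numpy as np

def dropout(x, rate, training, rng=None):
    # Inverted dropout: during training, zero a fraction `rate` of units and
    # scale the survivors by 1/(1-rate) so the expected activation is unchanged.
    # At inference time the layer is the identity.
    if not training or rate == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 8))
train_out = dropout(x, rate=0.5, training=True)   # some units zeroed, rest scaled to 2.0
eval_out = dropout(x, rate=0.5, training=False)   # identical to x
```

Because the training-time output is perturbed while the validation-time output is not, the metrics computed during training are measured on a handicapped network, which is why the gap you see is expected rather than a bug.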