Error when checking target: expected shape of conv2d_transpose

Posted 2024-04-26 11:36:02


I want to implement an autoencoder for a face dataset with Keras. Because the dataset is too large, I am using train_on_batch, but I run into the following problem:

for i in range(10):
    batch_index = 0
    while batch_index <= train_data.batch_index:
        data = train_data.next()
        result = train_result.next()
        model.train_on_batch(data[0],result[0])
        batch_index = batch_index + 1

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-54-d7d64e954a89> in <module>
      4         data = train_data.next()
      5         result = train_result.next()
----> 6         model.train_on_batch(data[0],result[0])
      7         batch_index = batch_index + 1

~/.local/lib/python3.5/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1209             x, y,
   1210             sample_weight=sample_weight,
-> 1211             class_weight=class_weight)
   1212         if self._uses_dynamic_learning_phase():
   1213             ins = x + y + sample_weights + [1.]

~/.local/lib/python3.5/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    787                 feed_output_shapes,
    788                 check_batch_axis=False,  # Don't enforce the batch size.
--> 789                 exception_prefix='target')
    790 
    791             # Generate sample-wise weight values given the `sample_weight` and

~/.local/lib/python3.5/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    136                             ': expected ' + names[i] + ' to have shape ' +
    137                             str(shape) + ' but got array with shape ' +
--> 138                             str(data_shape))
    139     return data
    140 

ValueError: Error when checking target: expected conv2d_transpose_21 to have shape (250, 250, 1) but got array with shape (250, 250, 3)

My model definition is included in the full code below.

I am loading the images with the Keras ImageDataGenerator, which reports:

train_data = trainGenerator.flow_from_directory('lfw',batch_size=67,target_size=(250, 250))
Found 13199 images belonging to 1 classes.

Here is the full code:

from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import matplotlib.pyplot as plt
import numpy as np
import keras

def cutHalf(img):
    # Blank out the left half of the image (set those pixels to 1.0)
    for j in range(125):
        for i in range(250):
            img[i][j][0]=1
            img[i][j][1]=1
            img[i][j][2]=1
    return img

img_width = 250
img_height = 250
train_datagen = ImageDataGenerator(rescale=1./255)
train_datagen2 = ImageDataGenerator(rescale=1./255,preprocessing_function=cutHalf)

train_generator = train_datagen.flow_from_directory(
        'lfw',target_size=(img_width, img_height),
        class_mode=None)
train_generator2 = train_datagen2.flow_from_directory(
        'lfw',target_size=(img_width, img_height),
        class_mode=None)

def fixed_generator(generator,generator2):
    batch_index = 0
    while batch_index <= generator.batch_index:
        yield (generator.next(), generator2.next())

Input_Layer = keras.Input(shape=(img_width, img_height,3))
x = keras.layers.Conv2D(20,5,activation='relu')(Input_Layer)
x = keras.layers.MaxPooling2D(2)(x)
x = keras.layers.Conv2D(20,2,activation = 'relu')(x)
x = keras.layers.MaxPooling2D(2)(x)
encoded = x
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,2,activation='relu')(x)
x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,5,activation= 'relu')(x)
model = keras.Model(input = Input_Layer ,output = x)

model.compile(optimizer='adam', 
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(
        fixed_generator(train_generator,train_generator2),
        nb_epoch=20,
        steps_per_epoch=50
        )

1 Answer

Posted 2024-04-26 11:36:02

I assume that train_data.next() and train_result.next() each return an array of shape (1, 250, 250, 3).
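
A quick way to confirm this is to pull one batch from the generator and print its shape. This is just a minimal check, reusing the 'lfw' directory and the settings from the question:

# Minimal shape check (assumes the 'lfw' directory and imports from the question)
check_datagen = ImageDataGenerator(rescale=1./255)
check_gen = check_datagen.flow_from_directory(
        'lfw', target_size=(250, 250), batch_size=67, class_mode=None)
batch = check_gen.next()
# With class_mode=None the generator yields only the image array,
# so this should print something like (67, 250, 250, 3): channels-last RGB
print(batch.shape)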

When I tried to run your code, I got the following error:

Traceback (most recent call last):
  File "", line 1, in
    runfile('/Users/lorenzo/Documents/stackoverflow/auto_encoder.py', wdir='/Users/lorenzo/Documents/stackoverflow')
  File "/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "/Users/lorenzo/Documents/stackoverflow/auto_encoder.py", line 43, in
    model.train_on_batch(onedata, oneresult)
  File "/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1211, in train_on_batch
    class_weight=class_weight)
  File "/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 138, in standardize_input_data
    str(data_shape))

ValueError: Error when checking target: expected conv2d_transpose_6 to have shape (250, 250, 20) but got array with shape (250, 250, 3)

It says that the expected target shape of the last Conv2DTranspose layer is (250, 250, 20), but the target array you feed the model has shape (250, 250, 3).

Solution: change x = keras.layers.Conv2DTranspose(20,5,activation= 'relu')(x) to x = keras.layers.Conv2DTranspose(3,5,activation= 'relu')(x) so that the model's output matches the target shape.

Edit: as @Daniel Möller pointed out, the loss should be 'categorical_crossentropy' and the filter count of the last layer should be 3.
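
Putting both fixes together, the decoder tail and the compile step would look roughly like this (a sketch based on the layers from the question, changing only the last layer's filter count, the loss, and the Model call per the Keras 2 warning shown in the output below):

x = keras.layers.UpSampling2D(2)(x)
x = keras.layers.Conv2DTranspose(20,2,activation='relu')(x)
x = keras.layers.UpSampling2D(2)(x)
# 3 filters so the output shape is (250, 250, 3) and matches the RGB target
x = keras.layers.Conv2DTranspose(3,5,activation='relu')(x)
model = keras.Model(inputs=Input_Layer, outputs=x)  # Keras 2 API: inputs/outputs

model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # per @Daniel Möller's comment
              metrics=['accuracy'])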

Here is a sample of the output:

Found 530 images belonging to 1 classes.
Found 530 images belonging to 1 classes.
D:/D_Document/Github/keras_autoencoder.py:65: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=Tensor("in..., outputs=Tensor("co...)`
  model = keras.Model(input = Input_Layer ,output = x)
D:/D_Document/Github/keras_autoencoder.py:78: UserWarning: The semantics of the Keras 2 argument `steps_per_epoch` is not the same as the Keras 1 argument `samples_per_epoch`. `steps_per_epoch` is the number of batches to draw from the generator at each epoch. Basically steps_per_epoch = samples_per_epoch/batch_size. Similarly `nb_val_samples`->`validation_steps` and `val_samples`->`steps` arguments have changed. Update your method calls accordingly.
  steps_per_epoch=50
D:/D_Document/Github/keras_autoencoder.py:78: UserWarning: Update your `fit_generator` call to the Keras 2 API: `fit_generator(<generator..., epochs=20, steps_per_epoch=50)`
  steps_per_epoch=50
Epoch 1/20
50/50 [==============================] - 102s 2s/step - loss: 0.6981 - acc: 0.6931
Epoch 2/20
50/50 [==============================] - 95s 2s/step - loss: 0.6406 - acc: 0.7584
Epoch 3/20
50/50 [==============================] - 92s 2s/step - loss: 0.6396 - acc: 0.7588
Epoch 4/20
50/50 [==============================] - 93s 2s/step - loss: 0.6381 - acc: 0.7543
Epoch 5/20
50/50 [==============================] - 93s 2s/step - loss: 0.6377 - acc: 0.7618
Epoch 6/20
50/50 [==============================] - 89s 2s/step - loss: 0.6357 - acc: 0.7569
Epoch 7/20
50/50 [==============================] - 91s 2s/step - loss: 0.6394 - acc: 0.7651
Epoch 8/20
50/50 [==============================] - 93s 2s/step - loss: 0.6380 - acc: 0.7660
Epoch 9/20
50/50 [==============================] - 93s 2s/step - loss: 0.6380 - acc: 0.7643
Epoch 10/20
50/50 [==============================] - 89s 2s/step - loss: 0.6399 - acc: 0.7669
