Using categorical crossentropy with a sequence of images


I have a model that takes a sequence of images as input, shaped (None, n_step, 128, 128) (rather than a single image), where n_step is fixed at 10. I use categorical_crossentropy for a four-class classification problem, but I get the following error:

ValueError: A target array with shape (1342, 10, 4) was passed for an output of shape (None, 1, 4) while using as loss `categorical_crossentropy`. This loss expects targets to have the same shape as the output.

From the error I understand that the loss only sees one image at a time. Is there any way I can use this loss over a sequence of images?

The model's output should likewise be a set of 10 labels, one per image in the sequence.
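
For reference, a target array of that shape can be produced by one-hot encoding a per-step integer label array; a minimal sketch (the sample count is taken from the error message, the array names are placeholders):

import numpy as np
from tensorflow.keras.utils import to_categorical

num_samples = 1342   # taken from the shape in the error message
n_steps = 10
num_classes = 4

# placeholder integer class labels: one label per time step, per sample
step_labels = np.random.randint(0, num_classes, size=(num_samples, n_steps))

# one-hot encode each step -> shape (1342, 10, 4), the target shape Keras reports
y_class = to_categorical(step_labels, num_classes=num_classes)
print(y_class.shape)   # (1342, 10, 4)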

Edit:

# Imports assumed from tf.keras; the original post omits them
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, MaxPooling2D,
                                     Dropout, Dense, Reshape, concatenate)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers

n_steps = 10
feature_count = 4

def create_model():
    trajectory_input = Input(shape=(n_steps, feature_count), name='trajectory_input')
    image_input = Input(shape=(128, 128, n_steps), name='image_input')

    # CNN branch over the images (the 10 steps are stacked as channels)
    x_aware = Conv2D(32, kernel_size=(3, 3), activation='relu')(image_input)
    x_aware = BatchNormalization()(x_aware)
    x_aware = MaxPooling2D(pool_size=(2, 2))(x_aware)
    x_aware = Dropout(0.25)(x_aware)

    x_aware = Conv2D(64, kernel_size=(3, 3), activation='relu')(x_aware)
    x_aware = BatchNormalization()(x_aware)
    x_aware = MaxPooling2D(pool_size=(2, 2))(x_aware)
    x_aware = Dropout(0.25)(x_aware)

    x_aware = Conv2D(64, kernel_size=(3, 3), activation='relu')(x_aware)
    x_aware = BatchNormalization()(x_aware)
    x_aware = MaxPooling2D(pool_size=(2, 2))(x_aware)
    x_aware = Dropout(0.25)(x_aware)

    x_aware = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x_aware)
    x_aware = BatchNormalization()(x_aware)
    x_aware = Dropout(0.25)(x_aware)
    x_aware = Reshape((1, 12544))(x_aware)  # flatten the 14x14x64 feature map into a single step

    # Trajectory branch
    x = Dense(32, activation='relu', kernel_regularizer=regularizers.l2(0.001))(trajectory_input)
    x = Reshape((1, 32 * n_steps))(x)

    # Fuse both branches
    x = concatenate([x, x_aware])
    x = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)
    x = Dense(32, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)

    x_reg = Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)
    x_class = Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)

    x_reg = Reshape((2, 4))(x_reg)

    output_regression = Dense(2, name='main_output')(x_reg)
    output_class = Dense(4, name='classification_output', activation='softmax')(x_class)

    # learning_rate, euc_dist_1 and euc_dist_2 are defined elsewhere in the original code
    adam = Adam(lr=learning_rate)
    model = Model(inputs=[trajectory_input, image_input], outputs=[output_regression, output_class])
    model.compile(optimizer=adam,
                  loss={'main_output': 'mse', 'classification_output': 'categorical_crossentropy'},
                  metrics={'main_output': [euc_dist_1, euc_dist_2], 'classification_output': 'accuracy'})
    model.summary()
    return model

The inputs are the images and the related information for the regression task; the outputs are the class labels and the next predicted values.
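
For reference, building the model as posted and checking its output shapes shows where the error comes from (this assumes learning_rate and the custom euc_dist_1/euc_dist_2 metrics are defined, as in the original code):

model = create_model()
print(model.output_shape)
# [(None, 2, 2), (None, 1, 4)]  -> the classification output is (None, 1, 4), not (None, 10, 4)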


1 Answer

The problem is the input to the Dense and Reshape layers of the classification branch. Because the input is in the form (10, 128, 128) and you want one label per step, the number of units should be 10 * num_class (here 10 * 4 = 40) so the output can be reshaped to (10, 4). So change this:

x_reg = Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)
x_class = Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)

x_reg = Reshape((2, 4))(x_reg)

output_regression = Dense(2, name='main_output')(x_reg)
output_class = Dense(4, name='classification_output', activation='softmax')(x_class)

to this, which resolves the shape mismatch:

x_reg = Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)
x_class = Dense(40, activation='relu', kernel_regularizer=regularizers.l2(0.001))(x)  # 10 steps * 4 classes

x_reg = Reshape((2, 4))(x_reg)
x_class = Reshape((10, 4))(x_class)  # one 4-way distribution per step

output_regression = Dense(2, name='main_output')(x_reg)
output_class = Dense(4, name='classification_output', activation='softmax')(x_class)
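
With this change the classification head produces shape (None, 10, 4), which matches the one-hot targets, so categorical_crossentropy is applied at each of the 10 steps. A quick sanity check with dummy data (sample count and batch size are arbitrary; learning_rate and the euc_dist_* metrics are assumed to be defined as in the original code):

import numpy as np
from tensorflow.keras.utils import to_categorical

model = create_model()      # with the corrected Dense(40) + Reshape((10, 4)) head
print(model.output_shape)   # [(None, 2, 2), (None, 10, 4)]

num_samples = 8
trajectories = np.random.rand(num_samples, n_steps, feature_count)            # (8, 10, 4)
images = np.random.rand(num_samples, 128, 128, n_steps)                       # (8, 128, 128, 10)
y_reg = np.random.rand(num_samples, 2, 2)                                     # regression targets
y_class = to_categorical(np.random.randint(0, 4, (num_samples, n_steps)), 4)  # (8, 10, 4)

# inputs and targets in the same order as the model's inputs and outputs
model.fit([trajectories, images], [y_reg, y_class], epochs=1, batch_size=4)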
