How do I get the output shape of a layer in Keras?


I have the following code in Keras (basically I am modifying this code for my own use), and I get this error:

'ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096)'

Code:

from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import pylab as plt
from keras import layers

# We create a layer which takes as input movies of shape
# (n_frames, width, height, channels) and returns a movie
# of identical shape.

model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(None, 64, 64, 1),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
               activation='sigmoid',
               padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')

The data I am feeding in has the shape [1, 10, 64, 64, 1]. So I would like to know where I am going wrong, and how to see the output shape of each layer.


1 Answer

You can get the output shape of a layer via layer.output_shape:

for layer in model.layers:
    print(layer.output_shape)

which gives:

(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 1)

Alternatively, you can pretty-print the model with model.summary():

model.summary()

which gives details on the number of parameters and the output shape of each layer, plus a nicely formatted overview of the whole model:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)  59200     
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 64, 64, 40)  160       
_________________________________________________________________
conv3d_1 (Conv3D)            (None, None, 64, 64, 1)   1081      
=================================================================
Total params: 407,001
Trainable params: 406,681
Non-trainable params: 320
_________________________________________________________________
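
As a side note, the summary also hints at the error in the question: the final Conv3D layer outputs a 5-D tensor of shape (None, None, 64, 64, 1), so the target array passed to fit must have 5 dimensions as well, whereas a shape like (10, 4096) looks like frames flattened to 64*64 pixels. A minimal sketch with random arrays (just my assumption about the intended layout, based on the [1, 10, 64, 64, 1] format mentioned in the question):

import numpy as np

# hypothetical dummy data: (samples, frames, height, width, channels)
dummy_input = np.random.random((1, 10, 64, 64, 1))
# the target must also be 5-D to match the Conv3D output (None, None, 64, 64, 1)
dummy_target = np.random.random((1, 10, 64, 64, 1))

model.fit(dummy_input, dummy_target, batch_size=1, epochs=1)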

If you only want information about a specific layer, you can pass the name argument when constructing that layer and then look it up like this:

...
model.add(ConvLSTM2D(..., name='conv3d_0'))
...

model.get_layer('conv3d_0')
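
For example, assuming the first ConvLSTM2D layer was given the name 'conv3d_0' as above, you can read its output shape directly:

# look up the named layer and print only its output shape
print(model.get_layer('conv3d_0').output_shape)
# -> (None, None, 64, 64, 40)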

Edit: For reference, this will always be the same as layer.output_shape, so please do not actually use a Lambda layer or a custom layer just for this. But you can use a Lambda layer to echo the shape of the tensor passing through:

...
from keras.layers import Lambda

def print_tensor_shape(x):
    # print the shape of the tensor flowing through, then pass it on unchanged
    print(x.shape)
    return x

model.add(Lambda(print_tensor_shape))
...

Or write a custom layer and print the tensor's shape in call():

from keras.layers import Layer

class echo_layer(Layer):
...
    def call(self, x):
        # print the shape of the incoming tensor and return it unchanged
        print(x.shape)
        return x
...

model.add(echo_layer())
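
As a usage sketch (purely for debugging, assuming echo_layer is defined as above), you could slot it in right after the layer you want to inspect and remove it again afterwards:

model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 64, 64, 1),
                     padding='same', return_sequences=True))
model.add(echo_layer())   # prints the shape coming out of the ConvLSTM2D layer
model.add(BatchNormalization())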
