I have a model (a spatial model) that is nested inside a temporal model to build a CNN-LSTM. The TimeDistributed layer does not seem to accept two inputs, while my spatial model needs two. I therefore had to use a Lambda layer to let TimeDistributed take multiple inputs. However, when I print the temporal model's summary, the spatial model's trainable parameters appear to be ignored.
from keras.layers import Dense, Dropout, Activation, Lambda, Input, LSTM
from keras.layers import Conv1D, MaxPooling1D, Flatten, TimeDistributed, Reshape
from keras.models import Model
import keras
# =============================================================================
#Spatial Part
#conv1d for temperature.......>
#concatente
#con1d for pressure .......>
# =============================================================================
# Conv1D Model 1
pnnl_temp=Input(shape=(200,1))
connv_temp1=Conv1D(filters=2,kernel_size=(10),strides=2,padding="valid" ,activation="relu")(pnnl_temp)
conv_maxpooling1=MaxPooling1D(pool_size=3,strides=1)(connv_temp1)
connv_temp2=Conv1D(filters=1,kernel_size=(10),strides=2,padding="valid" ,activation="relu")(conv_maxpooling1)
conv_maxpooling2=MaxPooling1D(pool_size=2,strides=None)(connv_temp2)
conv_maxpooling2_size=conv_maxpooling2.get_shape().as_list()[-1]*\
conv_maxpooling2.get_shape().as_list()[-2] # find the number of elements in tensor
conv_flatter_temp=Reshape((conv_maxpooling2_size,1))(conv_maxpooling2) #flatten layer returns (?,?)as dimension
# Conv1D Model 2
pnnl_pressure=Input(shape=(200,1))
connv_pressure1=Conv1D(filters=2,kernel_size=(10),strides=2,padding="valid" ,activation="relu")(pnnl_pressure)
conv_maxpooling_pressure1=MaxPooling1D(pool_size=3,strides=1)(connv_pressure1)
connv_pressure2=Conv1D(filters=1,kernel_size=(10),strides=2,padding="valid" ,activation="relu")(conv_maxpooling_pressure1)
conv_maxpooling_pressure2=MaxPooling1D(pool_size=2,strides=None)(connv_pressure2)
conv_maxpooling2_size_pressure=conv_maxpooling_pressure2.get_shape().as_list()[-1]*\
conv_maxpooling_pressure2.get_shape().as_list()[-2]
conv_flatter_pressure=Reshape((conv_maxpooling2_size_pressure,1))(conv_maxpooling_pressure2)
# Merge Conv1D 1&2
output = keras.layers.concatenate([conv_flatter_pressure, conv_flatter_temp], axis=1)
spatial_model=Model([pnnl_temp,pnnl_pressure],output)
#=============================================================================
# temporal part
#x1.....>
#spatial_model ....> time distributed layer .....>lstm ......
#x2....>
# =============================================================================
x1 = Input(shape=(224, 200, 1))
x2 = Input(shape=(224, 200, 1))
new_input=keras.layers.concatenate([x1,x2],axis=3)
encoded_frame_sequence = TimeDistributed(Lambda(lambda x: spatial_model([x[:,:,0:1], x[:,:,1:]])))(new_input) # Lambda used so TimeDistributed can pass two inputs to spatial_model
new_encoded_frame_sequence=Reshape((224,42))(encoded_frame_sequence)
lastm_1=LSTM(52)(new_encoded_frame_sequence)
Temporal_model =Model([x1,x2],lastm_1)
Below is the temporal model's summary. As you can see, the parameter count reported for TimeDistributed is zero, whereas it should equal the spatial model's parameter count.
Is there any way other than Lambda to feed multiple tensors into TimeDistributed? How can I make the Lambda layer trainable? Any help or suggestions are appreciated.
If anyone has a similar problem, note that a Lambda layer holds no trainable weights; you would have to write a custom Keras layer, which can be tricky. A simpler solution is to feed a single input and split it inside the model. Here is the trick I used:
This way you do not need a Lambda layer inside TimeDistributed, and TimeDistributed can effectively accept multiple inputs, since they are split apart inside the model.
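The answer's original code snippet was not preserved in this page. Below is a minimal sketch of the approach it describes, under stated assumptions: the two signals are stacked as channels of one input, the split happens inside the spatial model, and hypothetical smaller layer sizes (one Conv1D per branch, a `Dense(21)` head standing in for the question's pooling/reshape stack) are used for brevity. Because TimeDistributed now wraps a Model directly rather than a Lambda, the spatial model's weights are tracked in the summary.

```python
from keras.layers import Input, Conv1D, Flatten, Dense, LSTM, TimeDistributed, Lambda, concatenate
from keras.models import Model

# Single input carrying both signals as channels: (timesteps=200, channels=2)
combined_in = Input(shape=(200, 2))
# Split the channels inside the model. Lambda here only slices; it has no
# weights, so nothing trainable is hidden behind it.
temp = Lambda(lambda x: x[:, :, 0:1])(combined_in)
pres = Lambda(lambda x: x[:, :, 1:2])(combined_in)
# One conv branch per signal (hypothetical sizes, not the question's exact stack)
t = Conv1D(filters=2, kernel_size=10, strides=2, activation="relu")(temp)
p = Conv1D(filters=2, kernel_size=10, strides=2, activation="relu")(pres)
merged = concatenate([Flatten()(t), Flatten()(p)])
out = Dense(21)(merged)
spatial = Model(combined_in, out)

# TimeDistributed wraps the Model itself, so its parameters are counted
seq_in = Input(shape=(224, 200, 2))       # 224 frames, each (200, 2)
encoded = TimeDistributed(spatial)(seq_in)  # -> (batch, 224, 21)
lstm_out = LSTM(52)(encoded)
temporal = Model(seq_in, lstm_out)
temporal.summary()
```

In the summary, the TimeDistributed row should now report the spatial model's full parameter count instead of zero. The caller concatenates the two arrays along the channel axis before `fit`, mirroring how `x1` and `x2` were concatenated in the question.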