Cannot find the input tensor in the TF graph (SavedModel)

Posted 2024-03-28 21:23:31


After training, I saved my model in the SavedModel format (I want this format, not .h5). When I load the model back and print the graph, I cannot find the input tensor (only the default serving input placeholder is there) that I need in order to make predictions.

At first I defined my model using keras.applications.VGG16 directly; then, as a second attempt, I added an explicit keras.Input(), but nothing changed.

I defined my model like this:

from tensorflow import keras
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

IMG_SIZE = (512, 512)  # matches the (None, 512, 512, 3) input shown in the summaries below

# VGG16 base without the classification head, frozen as a fixed feature extractor
model = keras.applications.VGG16(weights="imagenet",
                                 include_top=False,
                                 input_shape=(IMG_SIZE[0], IMG_SIZE[1], 3))
for layer in model.layers:
    layer.trainable = False

# small classification head on top of the VGG16 feature maps
x = model.output
x = Dense(16, activation="relu")(x)
x = Flatten()(x)
predictions = Dense(1, activation="sigmoid")(x)
model = Model(inputs=model.input, outputs=predictions)
model.summary()  # in the first attempt:

Model: "model_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 512, 512, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 512, 512, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 512, 512, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 256, 256, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 256, 256, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 256, 256, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 128, 128, 128)     0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 128, 128, 256)     295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 64, 64, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 64, 64, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 64, 64, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 64, 64, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 32, 32, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 16, 16, 512)       0         
_________________________________________________________________
dense_5 (Dense)              (None, 16, 16, 16)        8208      
_________________________________________________________________
flatten_3 (Flatten)          (None, 4096)              0         
_________________________________________________________________
dense_6 (Dense)              (None, 1)                 4097      
=================================================================
Total params: 14,726,993
Trainable params: 12,305
Non-trainable params: 14,714,688
_________________________________________________________________
None

model.summary()  # in the second attempt:

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_5 (InputLayer)         [(None, 512, 512, 3)]     0         
_________________________________________________________________
vgg16 (Model)                (None, 16, 16, 512)       14714688  
_________________________________________________________________
dense_3 (Dense)              (None, 16, 16, 16)        8208      
_________________________________________________________________
flatten_2 (Flatten)          (None, 4096)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 4097      
=================================================================
Total params: 14,726,993
Trainable params: 12,305
Non-trainable params: 14,714,688
_________________________________________________________________
None
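
For reference, the second attempt (the one that shows up as the single nested "vgg16 (Model)" layer above) looked roughly like this; it is reconstructed from the summary, so the exact names may differ:

# Second attempt: an explicit keras.Input() feeding the frozen VGG16 base,
# which then appears as one nested "vgg16 (Model)" layer in the summary.
inputs = keras.Input(shape=(IMG_SIZE[0], IMG_SIZE[1], 3))
base = keras.applications.VGG16(weights="imagenet", include_top=False)
base.trainable = False

x = base(inputs)
x = Dense(16, activation="relu")(x)
x = Flatten()(x)
predictions = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inputs, outputs=predictions)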

That is what the model looks like from the Keras side.
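The export step itself is not shown above; it was presumably something like the following (a minimal sketch assuming TF 2.x tf.keras, with "SavedModel" as the export directory used in the load call below):

import tensorflow as tf

# Export the trained Keras model in the SavedModel format (not .h5).
tf.saved_model.save(model, "SavedModel")
# Equivalently in TF 2.x tf.keras: model.save("SavedModel", save_format="tf")

After converting, I load the SavedModel back with the TF 1.x-style loader and list the graph operations: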

import tensorflow as tf  # TF 1.x-style session API (tf.compat.v1 under TF 2.x)

tf.reset_default_graph()
graph = tf.Graph()
sess = tf.Session(graph=graph)
tf.saved_model.loader.load(sess, [tf.saved_model.SERVING], "SavedModel")
sess.graph.get_operations()
[<tf.Operation 'dense_3_1/kernel' type=VarHandleOp>,
 <tf.Operation 'dense_3_1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
<tf.Operation 'dense_3_1/bias' type=VarHandleOp>,
 <tf.Operation 'dense_3_1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'dense_4_1/kernel' type=VarHandleOp>,
 <tf.Operation 'dense_4_1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'dense_4_1/bias' type=VarHandleOp>,
 <tf.Operation 'dense_4_1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block1_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block1_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block1_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block1_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block2_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block2_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block2_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block2_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block3_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block3_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block3_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block3_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv3/kernel' type=VarHandleOp>,
 <tf.Operation 'block3_conv3/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv3/bias' type=VarHandleOp>,
 <tf.Operation 'block3_conv3/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block4_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block4_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block4_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block4_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv3/kernel' type=VarHandleOp>,
 <tf.Operation 'block4_conv3/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv3/bias' type=VarHandleOp>,
 <tf.Operation 'block4_conv3/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block5_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block5_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block5_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block5_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv3/kernel' type=VarHandleOp>,
 <tf.Operation 'block5_conv3/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv3/bias' type=VarHandleOp>,
 <tf.Operation 'block5_conv3/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'NoOp' type=NoOp>,
 <tf.Operation 'Const' type=Const>,
 <tf.Operation 'serving_default_input_5' type=Placeholder>,
 <tf.Operation 'StatefulPartitionedCall' type=StatefulPartitionedCall>,
 <tf.Operation 'saver_filename' type=Placeholder>,
 <tf.Operation 'StatefulPartitionedCall_1' type=StatefulPartitionedCall>,
 <tf.Operation 'StatefulPartitionedCall_2' type=StatefulPartitionedCall>]

So when I try to make a prediction:

in_t = sess.graph.get_tensor_by_name('serving_default_input_5:0')
out  = sess.graph.get_tensor_by_name('dense_4_1/bias/Read/ReadVariableOp:0')
...
pred = sess.run([out], feed_dict={ in_t: image}) # image has the right shape
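
Presumably the real input and output tensor names are recorded in the serving signature rather than in the raw op list. A minimal, unverified sketch of reading it, relying on the fact that the TF 1.x loader returns the MetaGraphDef:

# The loader returns the MetaGraphDef; its signature_def maps the logical
# signature names ("serving_default") to the underlying graph tensors.
meta_graph = tf.saved_model.loader.load(sess, [tf.saved_model.SERVING], "SavedModel")
serving_sig = meta_graph.signature_def["serving_default"]
print(serving_sig.inputs)   # should point at serving_default_input_5:0
print(serving_sig.outputs)  # should point at a StatefulPartitionedCall output

But I am not sure this is the right approach.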

How can I pass an image of shape (512, 512, 3) to the loaded SavedModel?

Thanks in advance.

