I am trying to create a heatmap that shows where my CNN is looking when it classifies an image. The classification task is to decide between faulty and non-faulty parts (so it is just binary classification). I tried to reproduce this code. However, I noticed that they use the whole Inception network without replacing the top layers. My problem now is that I don't know how to correctly connect the layers so that I can use the gradient function to backpropagate the loss from the end of my model (a Dense layer with a single neuron) back to the last convolutional layer of the Inception network ('mixed10'). So far I get an AssertionError about disconnected gradients.
The model I trained:
from tensorflow.keras import Sequential
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D

def create_model():
    model_inception = InceptionV3(include_top=False, weights='imagenet',
                                  input_shape=(299, 299, 3))
    model_inception.trainable = False
    model = Sequential()
    model.add(model_inception)
    model.add(GlobalAveragePooling2D())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model
The Grad-CAM code:
layer_name = 'mixed10'
image = np.expand_dims(X_valid[0], 0)
# input layer of the Inception base
input_lay = model.get_layer(index=0).layers[0].input
# conv layer whose activations the heatmap is created from
conv_output_lay = model.get_layer(index=0).get_layer(layer_name).output
# output layer of the network
output_lay = model.get_layer(index=-1).output
# connect conv_output with model.input
incept_part_till_conv = Model(input_lay, conv_output_lay)
conv_output = incept_part_till_conv(model.input)
gradModel = Model(
    inputs=[model.input],
    outputs=[conv_output, model.output])
# record operations for automatic differentiation
with tf.GradientTape() as tape:
    # cast the image tensor to float32, pass it through the
    # gradient model, and grab the loss associated with the
    # prediction of the single sigmoid unit
    inputs = tf.cast(image, tf.float32)
    (convOutputs, predictions) = gradModel(inputs)
    loss = predictions[:]
# use automatic differentiation to compute the gradients
grads = tape.gradient(loss, convOutputs)
This is where I get the error message. It would be great if someone could give me advice on how to get this working. Thanks!
The only thing that stands out to me is that you need to compute the gradients outside the tape's scope (as shown in the example):
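Beyond the tape scope, the usual cause of the disconnected-gradient error in this setup is that the nested Inception submodel's internal layers are not part of the graph reachable from the outer `model.input`. One way around it is to build a single connected graph from the Inception base's own input: take its 'mixed10' output, replay the classifier head layers on top of it, and make one `Model` with both as outputs. Below is a minimal, hedged sketch of that approach; it rebuilds the question's model with `weights=None` and a random `image` only so the snippet is self-contained (in practice you would use your trained model and a real validation image):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, Sequential
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D

# Rebuild the question's architecture (weights=None just to keep the
# sketch light; the original uses weights='imagenet' plus training).
inception = InceptionV3(include_top=False, weights=None,
                        input_shape=(299, 299, 3))
inception.trainable = False
model = Sequential([
    inception,
    GlobalAveragePooling2D(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])

# Placeholder for np.expand_dims(X_valid[0], 0) from the question.
image = np.random.rand(1, 299, 299, 3).astype('float32')

# Build ONE connected graph: from inception's own input to both the
# 'mixed10' feature map and the final prediction, by replaying the
# head layers (GAP, Dense, Dropout, Dense) on inception's output.
x = inception.output
for layer in model.layers[1:]:
    x = layer(x)
grad_model = Model(
    inputs=inception.input,
    outputs=[inception.get_layer('mixed10').output, x],
)

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(tf.cast(image, tf.float32))
    loss = preds[:, 0]          # score of the single sigmoid unit

# Gradients are computed outside the tape's scope; conv_out was
# produced inside the tape, so the path is fully connected now.
grads = tape.gradient(loss, conv_out)

# Standard Grad-CAM weighting: channel-wise mean of the gradients,
# weighted sum over the feature map, ReLU, normalize to [0, 1].
weights = tf.reduce_mean(grads, axis=(1, 2))
cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :],
                    axis=-1)
cam = tf.nn.relu(cam)
heatmap = (cam / (tf.reduce_max(cam) + 1e-8)).numpy()[0]
```

The key difference from the question's code is that `grad_model` is anchored at `inception.input` rather than the outer `model.input`, so every operation between 'mixed10' and the prediction lies on one differentiable path. The resulting 8x8 heatmap can then be upscaled to 299x299 and overlaid on the input image.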