ValueError: Cannot feed value of shape (50, 10) for Tensor 'Placeholder_1:0', which has shape '(?, 12)'

Posted 2024-05-19 22:47:25


I'm trying to run the Python code below, which builds a CNN model. Please help me find the error.

# Create the model
# placeholder
x = tf.compat.v1.placeholder(tf.float32, [None,784])
y_= tf.compat.v1.placeholder(tf.float32,[None, CLASS_NUM])

# first convolutional layer
w_conv1 = weight_variable([1, 25, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1, 1, 784, 1])

h_conv1 = tf.nn.relu(conv2d(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# second convolutional layer
w_conv2 = weight_variable([1, 25, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# densely connected layer
w_fc1 = weight_variable([1*88*64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 1*88*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

# dropout
keep_prob = tf.compat.v1.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, rate=1 - keep_prob)

# readout layer
w_fc2 = weight_variable([1024, CLASS_NUM])
b_fc2 = bias_variable([CLASS_NUM])

y_conv=tf.compat.v1.nn.softmax(tf.matmul(h_fc1_drop, w_fc2) + b_fc2)

# define var&op of training&testing
actual_label = tf.argmax(y_, 1)
label,idx,count = tf.unique_with_counts(actual_label)
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.GradientDescentOptimizer(1e-4).minimize(cross_entropy)                     
predict_label = tf.argmax(y_conv, 1)
label_p,idx_p,count_p = tf.unique_with_counts(predict_label)
correct_prediction = tf.equal(predict_label, actual_label)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
correct_label=tf.boolean_mask(actual_label,correct_prediction)
label_c,idx_c,count_c=tf.unique_with_counts(correct_label)

# if model exists: restore it
# else: train a new model and save it
saver = tf.train.Saver()
model_name = "model_" + str(CLASS_NUM) + "class_" + folder
model =  model_name + '/' + model_name + ".ckpt"
if not os.path.exists(model + ".meta"):
    sess.run(tf.global_variables_initializer())
    if not os.path.exists(model_name):
        os.makedirs(model_name)
        for i in range(int(TRAIN_ROUND)+1):
            batch = mnist.train.next_batch(50)
            if i%100 == 0:
                train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_:batch[1], keep_prob:1.0})
                s = "step %d, train accuracy %g" % (i, train_accuracy)
                print(s)
            train_step.run(feed_dict={x:batch[0], y_:batch[1], keep_prob:0.5})

        save_path = saver.save(sess, model)
        print("Model saved in file:", save_path)
else:
    saver.restore(sess, model)
    print("Model restored: " + model)

The line 'train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_:batch[1], keep_prob:1.0})' is what triggers the error. This is the error that pops up: ValueError: Cannot feed value of shape (50, 10) for Tensor 'Placeholder_1:0', which has shape '(?, 12)'


Tags: name, model, tf, batch, train, nn, variable, label
1 Answer

The error says that you are trying to feed a tensor of shape (50, 10) into a placeholder tensor of shape (?, 12). The ? means that dimension can be anything. From the code you posted, my guess is that the error happens when feeding y_, which expects a tensor of shape (?, num_classes), and I assume your number of classes is 12.
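
To confirm this, you could print both shapes just before the failing feed (a quick diagnostic sketch; batch and y_ are the objects from your code above):

batch = mnist.train.next_batch(50)
print(batch[1].shape)     # (50, 10): the one-hot MNIST labels have 10 columns
print(y_.get_shape())     # (?, 12): the placeholder was built with CLASS_NUM columns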

Unfortunately, I think you have given us very little information to work with: could you add some extra logs or code, and explain what exactly you are trying to run?
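
That said, if the data really is standard 10-class MNIST, the most likely fix is to make CLASS_NUM match the label width. A minimal sketch (not a drop-in patch, just the lines that would change):

CLASS_NUM = 10   # one column per MNIST digit class, matching the (50, 10) batches
y_ = tf.compat.v1.placeholder(tf.float32, [None, CLASS_NUM])

# the readout layer has to match as well
w_fc2 = weight_variable([1024, CLASS_NUM])
b_fc2 = bias_variable([CLASS_NUM])

If 12 classes are really intended, the labels would instead have to be re-encoded as 12-wide one-hot vectors before being fed into y_.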
