OutOfRangeError (traceback above): RandomShuffleQueue '_5_shuffle_batch_1/random_shuffle_queue' is closed and has insufficient elements

Posted 2024-04-25 07:21:20


I use TFRecord files as input.
Now I need three batched inputs. image_batch and label_batch work fine, but the second pair, posimage_batch / poslabel_batch, raises the error above. I have read many posts about this RandomShuffleQueue error.
The usual answer, adding tf.local_variables_initializer(), does not solve my error,
because those posts only deal with a single batch_data / batch_label input, so I don't know how to handle a triple input.
I have searched online for a long time, but nothing helped. Please help, or give me some ideas on how to achieve this.
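
Since string_input_producer cycles its file forever when num_epochs is not set, the shuffle queue does not normally run dry; it gets closed when its queue-runner pipeline hits an exception on a record (missing feature keys, a raw image whose size does not match the reshape, a corrupted record), and the training loop then only sees the OutOfRangeError on the batch queue. The "_1" in '_5_shuffle_batch_1' points at the second shuffle_batch call, i.e. the pos pipeline. A debugging sketch (TF 1.x, assuming it runs in the same script where real_read_and_decode, WIDTH, HEIGHT and NUM_CLASSES are defined) that evaluates that pipeline directly, so the real error surfaces instead:

posimage, poslabel = real_read_and_decode("pos_train.tfrecords")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # Evaluating the pre-batch tensors runs parse/decode/reshape in this thread,
    # so a bad record raises its actual error here (e.g. a reshape size mismatch).
    for i in range(10):
        sess.run([posimage, poslabel])
    coord.request_stop()
    coord.join(threads)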

def real_read_and_decode(filename):
    # Queue of input file names; with the default num_epochs=None it cycles forever.
    filename_queue = tf.train.string_input_producer([filename])

    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(serialized_example,
                                       features={
                                           'label': tf.FixedLenFeature([], tf.int64),
                                           'img_raw': tf.FixedLenFeature([], tf.string),
                                       })
    # The raw bytes must contain exactly WIDTH * HEIGHT * 3 uint8 values,
    # otherwise the reshape fails when the pipeline is evaluated.
    img = tf.decode_raw(features['img_raw'], tf.uint8)
    img = tf.reshape(img, [WIDTH, HEIGHT, 3])
    label = tf.cast(features['label'], tf.int32)
    labels = tf.one_hot(label, NUM_CLASSES)
    return img, labels
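
def check_img_raw_sizes(filename):
    # Hypothetical helper (a sketch, not part of the original script): if any
    # record's img_raw is not exactly WIDTH * HEIGHT * 3 bytes, the tf.reshape
    # above fails inside the queue-runner pipeline, the batch queue is closed,
    # and the training loop sees the OutOfRangeError instead of the real error.
    for i, record in enumerate(tf.python_io.tf_record_iterator(filename)):
        example = tf.train.Example()
        example.ParseFromString(record)
        raw = example.features.feature['img_raw'].bytes_list.value[0]
        if len(raw) != WIDTH * HEIGHT * 3:
            print("record %d: %d bytes, expected %d" % (i, len(raw), WIDTH * HEIGHT * 3))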

def main():

    image, label = read_and_decode("sketch_train.tfrecords")
    posimage, poslabel = real_read_and_decode("pos_train.tfrecords")
    negimage, neglabel = real_read_and_decode("neg_train.tfrecords")

    image_batch, label_batch = tf.train.shuffle_batch(
        [image, label], batch_size=BATCH_SIZE, capacity=1500, min_after_dequeue=1000)
    posimage_batch, poslabel_batch = tf.train.shuffle_batch(
        [posimage, poslabel], batch_size=BATCH_SIZE, capacity=1500, min_after_dequeue=1000)
    negimage_batch, neglabel_batch = tf.train.shuffle_batch(
        [negimage, neglabel], batch_size=BATCH_SIZE, capacity=1500, min_after_dequeue=1000)

    with tf.Session(config=config) as sess:
        sess.run(tf.local_variables_initializer())
        sess.run(tf.global_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess,coord=coord)
        for i in range(ITERATION):
            if coord.should_stop():
                print('coord break!!!!!!')
                break
            #sess.run(tf.local_variables_initializer())
            example_train, l_train = sess.run([image_batch, label_batch])
            example_train2, l_train2 = sess.run([posimage_batch, poslabel_batch])
            example_train3, l_train3 = sess.run([negimage_batch, neglabel_batch])
            _, loss_v = sess.run([train_step, loss],
                                 feed_dict={x1: example_train, y1: l_train,
                                            x2: example_train2, y2: l_train2,
                                            x3: example_train3, y3: l_train3})
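
As a side point (not a fix for the queue error itself), the three batches can also be fetched in a single sess.run call, so all three queues are dequeued in the same step and each iteration makes one less session round-trip. A sketch of the equivalent loop body, using the same names as above:

(example_train, l_train,
 example_train2, l_train2,
 example_train3, l_train3) = sess.run([image_batch, label_batch,
                                       posimage_batch, poslabel_batch,
                                       negimage_batch, neglabel_batch])
_, loss_v = sess.run([train_step, loss],
                     feed_dict={x1: example_train, y1: l_train,
                                x2: example_train2, y2: l_train2,
                                x3: example_train3, y3: l_train3})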

This is my log

I am a new user and my English is not good.
I hope you don't mind.


Tags: run, image, img, read, example, tf, batch, train
1 Answer

User
#1 · Posted 2024-04-25 07:21:20

You probably need to add some exception handling sooner or later:

try:
    sess.run(tf.local_variables_initializer())
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess,coord=coord)
    for i in range(ITERATION):
        # ....
except tf.errors.OutOfRangeError:
    print('Done training -- limit reached')
finally:
    coord.request_stop()
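
The exception handling only gives a clean shutdown; the message that explains why the queue was closed usually only appears once the queue-runner threads are joined. A common TF 1.x pattern (a sketch, assuming threads is the list returned by start_queue_runners) is to join them after the finally block:

coord.join(threads)

If one of the .tfrecords files contains a record that cannot be parsed, decoded, or reshaped, that is the error coord.join re-raises, and the data itself still needs to be fixed.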
