TensorFlow feed_dict error with float input

Posted 2024-03-29 06:43:58


Here is a piece of code:

def train(x):
    prediction = cnn(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=y))
    optimizer = tf.train.AdadeltaOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in xrange(num_epochs):
            epoch_loss = 0
            for _ in xrange(int(1020/batch_size)):
                epoch_x, epoch_y = train_iterator.get_next()
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch {} completed out of {} - loss {}'.format(epoch + 1, num_epochs, epoch_loss))

The error it gives is shown in full in the traceback under Edit 1 below.

I read the data from a TFRecord file with the following code:

def read_image_dataset_tfrecordfile(filenames, color=False, resize=False, width=100, height=100):

    def parser(record):
        keys_to_features = {
            "image": tf.FixedLenFeature([], tf.string),
            "label": tf.FixedLenFeature([], tf.int64)
        }
        parsed = tf.parse_single_example(record, keys_to_features)
        image = tf.decode_raw(parsed["image"], tf.uint8)
        image = tf.cast(image, tf.float32)
        if resize:
            if color:
                image = tf.reshape(image, shape=[width, height, 3])
            else:
                image = tf.reshape(image, shape=[width, height, 1])
        label = tf.cast(parsed["label"], tf.int32)
        label = tf.one_hot(label, 17)

        return {'image': image}, label

    dataset = tf.data.TFRecordDataset(filenames)
    dataset = dataset.map(parser)

    return dataset
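
The iterator used in train() is a one-shot iterator built from this dataset, along these lines (simplified sketch; the filename list my_filenames and the batch() call are placeholders for what I actually use):

train_set = read_image_dataset_tfrecordfile(my_filenames, resize=True)  # my_filenames: list of .tfrecord paths
train_set = train_set.batch(batch_size)                                 # batch into mini-batches
train_iterator = train_set.make_one_shot_iterator()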

For example, here I printed one image together with its label:

[[59.],
        [94.],
        [79.],
        ...,
        [41.],
        [42.],
        [43.]],

       [[56.],
        [86.],
        [91.],
        ...,
        [43.],
        [41.],
        [33.]],

       [[53.],
        [69.],
        [63.],
        ...,
        [56.],
        [59.],
        [51.]]], dtype=float32)}, array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
      dtype=float32))

The image is float32, since that is what the network takes as input. You can see that here:

x = tf.placeholder(tf.float32, [None, 10000])
def cnn(x):
    weights = {
        'W_conv1': tf.Variable(tf.random_normal([5, 5, 1, 16])),
        'W_conv2': tf.Variable(tf.random_normal([5, 5, 16, 16])),
        'W_conv3': tf.Variable(tf.random_normal([5, 5, 16, 32])),
        'W_conv4': tf.Variable(tf.random_normal([5, 5, 32, 32])),
        'W_fc': tf.Variable(tf.random_normal([24 * 24 * 32, 1024])),
        'out': tf.Variable(tf.random_normal([1024, n_classes]))
    }

    biases = {
        'b_conv1': tf.Variable(tf.random_normal([16])),
        'b_conv2': tf.Variable(tf.random_normal([16])),
        'b_conv3': tf.Variable(tf.random_normal([32])),
        'b_conv4': tf.Variable(tf.random_normal([32])),
        'b_fc': tf.Variable(tf.random_normal([1024])),
        'b_out': tf.Variable(tf.random_normal([n_classes]))
    }

    x = tf.reshape(x, [-1, 100, 100, 1])

    conv1 = tf.nn.relu(tf.nn.conv2d(x, weights['W_conv1'], strides=[1, 1, 1, 1], padding='SAME') + biases['b_conv1'])
    conv2 = tf.nn.relu(tf.nn.conv2d(conv1, weights['W_conv2'], strides=[1, 1, 1, 1], padding='SAME') +
                       biases['b_conv2'])
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    conv3 = tf.nn.relu(tf.nn.conv2d(conv2, weights['W_conv3'], strides=[1, 1, 1, 1], padding='SAME') +
                       biases['b_conv3'])
    conv4 = tf.nn.relu(tf.nn.conv2d(conv3, weights['W_conv4'], strides=[1, 1, 1, 1], padding='SAME') +
                       biases['b_conv4'])
    conv4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    fc = tf.reshape(conv4, [-1, 24 * 24 * 32])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, dropout_rate)

    out = tf.matmul(fc, weights['out']) + biases['b_out']

    return out

The network I use is the same one used for the MNIST dataset in the TensorFlow examples. The weights and biases are floats, so my input has to be float too, right? With the MNIST dataset it worked like a charm, but now it gives me this error and I can't figure out why.

Edit 1

Traceback (most recent call last):
  File "/Users/user/PycharmProjects/ProveTF/main.py", line 109, in <module>
    train(x)
  File "/Users/user/PycharmProjects/ProveTF/main.py", line 84, in train
    _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1106, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/Users/user/venv/lib/python2.7/site-packages/numpy/core/numeric.py", line 492, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: float() argument must be a string or a number

Edit 2

Traceback (most recent call last):
  File "/Users/user/PycharmProjects/ProveTF/main.py", line 111, in <module>
    train(x)
  File "/Users/user/PycharmProjects/ProveTF/main.py", line 84, in train
    _, c = sess.run([optimizer, cost])
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1137, in _run
    feed_dict_tensor, options, run_metadata)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1355, in _do_run
    options, run_metadata)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1374, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence
     [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,100,100,1], [?,17]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]

Caused by op u'IteratorGetNext', defined at:
  File "/Users/user/PycharmProjects/ProveTF/main.py", line 109, in <module>
    x, y = train_iterator.get_next()
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 330, in get_next
    name=name)), self._output_types,
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 866, in iterator_get_next
    output_shapes=output_shapes, name=name)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3271, in create_op
    op_def=op_def)
  File "/Users/user/venv/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1650, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): End of sequence
     [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,100,100,1], [?,17]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]

Edit 3

def train(input):
    prediction = cnn(input)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=y))
    optimizer = tf.train.AdadeltaOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in xrange(num_epochs):
            epoch_loss = 0
            for _ in xrange(int(1020/batch_size)):
                try:
                    _, c = sess.run([optimizer, cost])
                    epoch_loss += c
                except tf.errors.OutOfRangeError:
                    train_set.repeat()

            print('Epoch {} completed out of {} - loss {}'.format(epoch + 1, num_epochs, epoch_loss))

1 Answer

Posted on 2024-03-29 06:43:58

Dict/array conversion error

There is too much code and too many dependencies to reproduce your problem. But it looks to me like your error may come from your parser(record) function, which returns the image wrapped in a dict (cf. {'image': image}), whereas your label is not. Since epoch_x will then contain dict elements, TensorFlow (and NumPy) cannot convert them to the expected data type (a tf.float32 tensor, cf. the definition of the placeholder x), which would explain the conversion-related error.

Long story short, try replacing return {'image': image}, label with return image, label in your parser.
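
Concretely, only the return statement at the end of parser(record) changes:

        label = tf.cast(parsed["label"], tf.int32)
        label = tf.one_hot(label, 17)

        # return the image tensor directly instead of wrapping it in a dict
        return image, label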


TensorFlow Dataset API vs. feed_dict

Somehow I didn't think of this at first. Given a TensorFlow Dataset based input pipeline, you shouldn't be using placeholder/feed_dict at all. The latter is meant to pass data prepared on the CPU side into TensorFlow (which is assumed to be running on a GPU). Copying and converting inputs through feed_dict is a significant overhead, which is one reason the TensorFlow Dataset API was developed: it cuts this out by reading and transforming the data in parallel with the actual graph execution. In other words, your epoch_x, epoch_y don't need to be fed to TensorFlow; they are already part of its graph.

Basically, your pipeline should look something like this:

train_dataset = read_image_dataset_tfrecordfile(my_filenames)
train_dataset = train_dataset.repeat() # if you want to loop indefinitely
train_iterator = train_dataset.make_one_shot_iterator()

x, y = train_iterator.get_next()
# x, y will represent your data batches, fed with the next ones every time 
# they are called.
# So you just use them directly instead of placeholders:
prediction = cnn(x) 
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
    logits=prediction, labels=y))
optimizer = tf.train.AdadeltaOptimizer().minimize(cost)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in xrange(num_epochs):
        epoch_loss = 0
        for _ in xrange(int(1020/batch_size)):
            _, c = sess.run([optimizer, cost])
            # ...
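
Two final remarks. Dataset.repeat() returns a new dataset rather than modifying the existing one in place, so calling train_set.repeat() inside the except block (as in your Edit 3) has no effect; it has to be applied while building the pipeline, before make_one_shot_iterator(), as above. Likewise, if you want mini-batches, chain a batch() call in at the same point, for example (batch_size being whatever value you already use):

train_dataset = read_image_dataset_tfrecordfile(my_filenames)
train_dataset = train_dataset.repeat().batch(batch_size)  # repeat() and batch() each return a new dataset
train_iterator = train_dataset.make_one_shot_iterator()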
