Why doesn't my 1-hidden-layer autoencoder work in TensorFlow?


I want to build an autoencoder with a single hidden layer of 100 hidden units, using the MNIST dataset that ships with TensorFlow.

However, it doesn't work, and I can't figure out what the problem is. When I debug, my decoder layer output is filled entirely with 1s.

Is the backpropagation update not working? Or can a single-layer autoencoder simply not learn?

Please help.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

if __name__ == "__main__":
    # load data
    mnist = input_data.read_data_sets("../neural_network/data/mnist", one_hot=True)

    # make placeholder
    X = tf.placeholder("float32", [None, 784])

    # define constant
    learning_rate = 0.01
    training_epochs = 10
    batch_size = 100
    display_step = 1

    # make variables / encoding,decoding layer
    W_encoder = tf.Variable(tf.random_uniform([784, 200], 0.45, 0.55), name="encoder")
    W_decoder = tf.Variable(tf.random_uniform([200, 784], 0.45, 0.55), name="decoder")
    b_encoder = tf.Variable(tf.random_uniform([200], 0.005, 0.015))
    b_decoder = tf.Variable(tf.random_uniform([784], 0.005, 0.015))

    # construct encoder / decoder model
    encoder_layer = tf.nn.sigmoid(tf.matmul(X, W_encoder) + b_encoder)
    decoder_layer = tf.nn.sigmoid(tf.matmul(encoder_layer, W_decoder) + b_decoder)

    # predict / optimization
    y_pred = decoder_layer
    y_true = X

    # cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred, labels=y_true)
    cost = tf.reduce_mean(tf.square(y_true - y_pred))
    # cost = tf.reduce_mean(-1. * X * tf.log(decoder_layer) - (1. - X) * tf.log(1. - decoder_layer))
    optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        total_batch = int(mnist.train.num_examples/batch_size)

        # total training cycle
        for epoch in range(training_epochs):
            # total batch cycle
            for i in range(total_batch):
                batch_x, batch_y = mnist.train.next_batch(batch_size)
                print("before fetch")
                print(sess.run(y_pred, feed_dict={X: batch_x}))
                _, c = sess.run([optimizer, cost], feed_dict={X : batch_x})
                print("after fetch")
                print(sess.run(y_pred, feed_dict={X: batch_x}))

            if epoch % display_step == 0:
                print("Epoch : %04d" % (epoch+1), "cost : {:.9f}".format(c))
        print("training finished")

        encode_decode = sess.run(y_pred, feed_dict={X: mnist.test.images[:100]})
    # Plot the originals (left) and the reconstructions (right).
    fig, ax = plt.subplots(nrows=10, ncols=20, figsize=(20, 10))
    for i in range(10):
        for j in range(10):
            ax[i][j].imshow(np.reshape(mnist.test.images[i*10 + j], (28, 28)))
            ax[i][j+10].imshow(np.reshape(encode_decode[i*10 + j], (28, 28)))

    fig.show()
    plt.draw()
    plt.waitforbuttonpress()

Tags: run, import, layer, encoder, data, tf, as, batch
1 Answer

Could you try something for me? You can use random uniform initialization for the weights: https://www.tensorflow.org/api_docs/python/tf/random_uniform

Could you try setting up the weight layers so that the random uniform range also includes negative numbers?

W_decoder = tf.Variable(tf.random_uniform([200, 784], -0.45, 0.55), name="decoder")
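My guess at why this matters (not spelled out above, so treat it as an assumption): with every weight near +0.5 and 784 non-negative pixel inputs, the encoder pre-activation is a sum of hundreds of positive terms, so the sigmoid saturates at 1 and its gradient is essentially zero; the decoder then receives a pre-activation of roughly 200 × 0.5 ≈ 100, which is why its output is all 1s and RMSProp cannot move it. A minimal sketch with all four variables initialized symmetrically around zero (the ±0.05 range and the zero biases are illustrative choices, not from the answer):

W_encoder = tf.Variable(tf.random_uniform([784, 200], -0.05, 0.05), name="encoder")
W_decoder = tf.Variable(tf.random_uniform([200, 784], -0.05, 0.05), name="decoder")
b_encoder = tf.Variable(tf.zeros([200]))  # zero biases keep the sigmoids near their linear region
b_decoder = tf.Variable(tf.zeros([784]))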

Also, try cleaning up your code so we can get a better picture of what is actually going on.

Good luck, and let me know if it helps.
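P.S. If you want to confirm the saturation before changing anything, you could print the mean encoder pre-activation (a hypothetical diagnostic snippet; build the op once, outside the training loop):

enc_preact = tf.matmul(X, W_encoder) + b_encoder  # value going into the encoder sigmoid
# Inside the session: a mean in the tens means sigmoid(x) is ~1 for every unit.
print(sess.run(tf.reduce_mean(enc_preact), feed_dict={X: batch_x}))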
