Extracting values from TensorFlow variables

Published 2024-04-27 02:41:55


I'm new to Python and TensorFlow, and I'm having trouble getting values out of my NN after the training phase.

import tensorflow as tf
import numpy as np
import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

n_nodes_hl1 = 50
n_nodes_hl2 = 50

n_classes = 10
batch_size = 128

x = tf.placeholder('float',[None, 784])
y = tf.placeholder('float')

def neural_network_model(data):

    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784,n_nodes_hl1]),name='weights1'),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]),name='biases1')}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2]),name='weights2'),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]),name='biases2')}
    output_layer =   {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes]),name='weights3'),
                      'biases': tf.Variable(tf.random_normal([n_classes]),name='biases3')}

    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']) , hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']) , hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    output = tf.add(tf.matmul(l2, output_layer['weights']) , output_layer['biases'])

    return output


def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction,labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    hm_epochs = 100
    init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer() )
    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)) :
                ep_x, ep_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict = {x: ep_x, y: ep_y})
                epoch_loss += c
            print('Epoch', epoch+1, 'completed out of', hm_epochs, 'loss:',epoch_loss)


        correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x:mnist.test.images, y: mnist.test.labels}))


train_neural_network(x)

I tried to extract the weights of layer 1 using the following:

w = tf.get_variable('weights1', shape=[784, 50])
b = tf.get_variable('biases1', shape=[50,])

But this throws the error `Attempting to use uninitialized value weights1_1`.

Is this because my variables live inside the dict `hidden_1_layer`?

I'm not yet comfortable with Python and TensorFlow data types, so I'm completely confused!


2 Answers

When you write

w = tf.get_variable('weights1',shape=[784,50])
b = tf.get_variable('biases1',shape=[50,])

you are defining two new variables:

  1. weights1 becomes weights1_1
  2. biases1 becomes biases1_1

Since variables named weights1 and biases1 already exist in the graph, TensorFlow appends a `_<counter>` suffix for you to avoid a naming collision.
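To see this suffixing concretely, here is a minimal sketch (assuming TF 1.x graph mode, written against `tensorflow.compat.v1` so it also runs under TF 2):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

tf.reset_default_graph()

# Two variables requesting the same name 'weights1'.
a = tf.Variable(tf.zeros([2]), name='weights1')
b = tf.Variable(tf.zeros([2]), name='weights1')  # name already taken

print(a.name)  # weights1:0
print(b.name)  # weights1_1:0 -- TensorFlow appended the counter
```

This is exactly why the error message mentions `weights1_1`: the `tf.get_variable` calls created a second, uninitialized variable instead of referring to the trained one.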

If you want to create a reference to an already-existing variable, you need to be familiar with the concept of variable scope.

In short, you must state explicitly that you want to reuse a variable, and you can do this with tf.variable_scope and its reuse parameter.
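A minimal sketch of reuse (assuming TF 1.x semantics via `tensorflow.compat.v1`; the scope name `layer1` is illustrative, not from the original code):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

tf.reset_default_graph()

# First definition of the variable, inside a named scope.
with tf.variable_scope('layer1'):
    w = tf.get_variable('weights1', shape=[784, 50])

# Re-entering the same scope with reuse=True returns the SAME
# variable instead of creating 'weights1_1'.
with tf.variable_scope('layer1', reuse=True):
    w_again = tf.get_variable('weights1')

print(w is w_again)  # True: both names point to one variable
```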


Use the following code:

tensor_1 = tf.get_default_graph().get_tensor_by_name("weights1:0")
tensor_2 = tf.get_default_graph().get_tensor_by_name("biases1:0")
# Evaluate these in the session where the variables were initialized
# (e.g. inside the training session); a brand-new tf.Session() would
# raise the same "uninitialized value" error.
np_arrays = sess.run([tensor_1, tensor_2])

There are also other ways to store variables for later use or analysis. Please explain what you want the extracted weights and biases for; comment if you'd like to discuss further.
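One such alternative (a sketch, not from the original answer) is to checkpoint the model with tf.train.Saver and read the weights back as NumPy arrays after restoring; the checkpoint path `/tmp/model.ckpt` is illustrative:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

tf.reset_default_graph()
w = tf.Variable(tf.random_normal([784, 50]), name='weights1')
saver = tf.train.Saver()

# Train (here: just initialize), then save a checkpoint.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/model.ckpt')

# Later: restore into a fresh session and read the value.
with tf.Session() as sess:
    saver.restore(sess, '/tmp/model.ckpt')
    w_value = sess.run(w)  # plain NumPy array

print(w_value.shape)  # (784, 50)
```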
