How to map a function over a list of weight tensors with varying shapes

Posted 2024-04-23 23:31:50


I'm computing the loss of my neural network. The number of layers comes in as a parameter, and the loss looks something like:

  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_train_labels, logits=logits) + 
      L2_beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2))
  )

Since the number of layers is a parameter, this hard-coded expression won't work. I could use a for loop to compute all the weight losses, but that isn't elegant. I'd like to map nn.l2_loss over each element of the list weights, but I can't get it to work!

import tensorflow as tf

weights = []
weights.append(tf.Variable(tf.truncated_normal([784, 1024])))
weights.append(tf.Variable(tf.truncated_normal([1024, 512])))
weights.append(tf.Variable(tf.truncated_normal([512, 10])))

print(weights)

# this works
tf.nn.l2_loss(weights[0]) + tf.nn.l2_loss(weights[1]) + tf.nn.l2_loss(weights[2])

# this is what I need
tf.map_fn(tf.nn.l2_loss, weights)

Any ideas?


1 Answer

Posted 2024-04-23 23:31:50

In the example below I just used the regular built-in map. I don't know whether it performs as well as tf.map_fn, but it gets the job done without a for loop.

import tensorflow as tf

weights = []
weights.append(tf.Variable(tf.truncated_normal([784, 1024])))
weights.append(tf.Variable(tf.truncated_normal([1024, 512])))
weights.append(tf.Variable(tf.truncated_normal([512, 10])))
init_op = tf.global_variables_initializer()

required = tf.nn.l2_loss(weights[0]) + tf.nn.l2_loss(weights[1]) + tf.nn.l2_loss(weights[2])
# In Python 2, map returns a list, which tf.reduce_sum accepts directly
required2 = tf.reduce_sum(map(tf.nn.l2_loss, weights))

with tf.Session() as sess:
  sess.run(init_op)
  your_result=sess.run(required)
  my_result=sess.run(required2)

print('your res ::{}, My res ::{}'.format(your_result, my_result))

For Python 3, where map returns an iterator, use this instead:

required2=tf.reduce_sum(list(map(tf.nn.l2_loss,weights)))
