I want to train with the following steps:
Create the tensor for the "linear" part: target = Weight * X.
Pick the top target values and discard all remaining samples.
Get the corresponding labels, i.e. Y.
Use GradientDescentOptimizer to minimize sum(Y) and fit the variable W.
Code:
from tensorflow.python.framework import ops
import numpy as np
import tensorflow as tf

sess = tf.Session()
X = tf.placeholder(shape=[None, 2], dtype=tf.float32)
Y = tf.placeholder(shape=[None, 1], dtype=tf.float32)
W = tf.Variable(tf.random_normal(shape=[2, 1]), dtype=tf.float32)

target = tf.matmul(X, W)
flattened = tf.reshape(target, [-1])
selected_targets, keys = tf.nn.top_k(flattened, k=100)

# get the corresponding Y
selected_y = tf.gather(Y, keys)

# now we have the top-100 selected_targets and selected_y; train W to minimize sum(Y)
train_target = tf.reduce_sum(selected_y)  # if I use selected_targets instead of selected_y, it runs successfully. Why?
optimizer = tf.train.GradientDescentOptimizer(1)
train = optimizer.minimize(train_target)

# training
x_vals = np.random.rand(1000, 2)
y_vals = np.random.rand(1000, 1)
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
sess.run(train, {X: x_vals, Y: y_vals})
print(sess.run([W]))
I got this error:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [""] and loss Tensor("Sum:0", shape=(), dtype=float32).
Can anyone help? I found that this happens when I apply tf.nn.top_k to a tensor, but why?
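A small NumPy-only sketch (my own illustration, not part of the original code) of why this happens: the selected labels depend on W only through which indices win the top-k, and those integer indices are piecewise constant in W, so the loss is locally flat and carries no gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 2))
Y = rng.random((1000, 1))
W = rng.standard_normal((2, 1))

def loss(w):
    # sum of the Y values whose linear targets rank in the top 100
    target = (X @ w).ravel()
    keys = np.argsort(target)[-100:]  # integer indices of the top-100 targets
    return float(Y[keys].sum())

# A tiny perturbation of W leaves the winning indices unchanged, so the
# loss does not move at all: its derivative with respect to W is zero
# wherever it is defined, and undefined exactly where the ranking flips.
base = loss(W)
bumped = loss(W + 1e-7)
print(base, bumped)
```

With this seed the two printed values coincide. TensorFlow detects the same situation symbolically: tf.gather through the integer keys contributes no gradient path from the loss back to W, hence "No gradients provided for any variable".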
Answer: It says that because there are no gradients: the graph you created is not differentiable. The keys returned by top_k are the only thing connecting the loss to the variable W, and keys are integer indices, which are not differentiable.
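As a sketch of a training objective that does have gradients (my adaptation, written against tf.compat.v1 so it should run under both TF 1.x and 2.x): minimize sum(selected_targets) instead of sum(selected_y). selected_targets reaches W through tf.matmul and the differentiable values output of top_k, whereas selected_y reaches W only through the integer keys. The learning rate of 0.01 here is an arbitrary choice for illustration.

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1
if hasattr(tf1, "disable_eager_execution"):
    tf1.disable_eager_execution()  # needed on TF 2.x to build a graph

X = tf1.placeholder(shape=[None, 2], dtype=tf.float32)
Y = tf1.placeholder(shape=[None, 1], dtype=tf.float32)
W = tf.Variable(tf1.random_normal(shape=[2, 1]), dtype=tf.float32)

target = tf.matmul(X, W)
flattened = tf.reshape(target, [-1])
selected_targets, keys = tf.nn.top_k(flattened, k=100)

# Differentiable loss: it depends on W through matmul and top_k's *values*
# output, not through the integer indices.
loss = tf.reduce_sum(selected_targets)
train = tf1.train.GradientDescentOptimizer(0.01).minimize(loss)

x_vals = np.random.rand(1000, 2)
y_vals = np.random.rand(1000, 1)

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    sess.run(train, {X: x_vals, Y: y_vals})
    w_val = sess.run(W)
print(w_val)
```

Note this changes what is being optimized: the original goal of minimizing sum(Y) over the selected samples cannot be expressed as a gradient on W at all, because Y is a constant input and the selection is discrete.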