tape = tape if tape is not None else backprop.GradientTape()

Posted 2024-05-13 21:25:11


import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

def line(x):
  return 2*x + 4

X = np.arange(0, 20)
y = [k for k in line(X)]
a = tf.Variable(1.0)
b = tf.Variable(0.2)

y_in = a*X + b
loss = tf.reduce_mean(tf.square(y_in - y))

# this is my old code
# optimizer = tf.train.GradientDescentOptimizer(0.2)
# train = optimizer.minimize(loss)

# new code
optimizer = tf.optimizers.SGD(0.2)
train = optimizer.minimize(loss, var_list=[a, b])

Error:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 train = optimizer.minimize(loss, var_list=[a, b])

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in _compute_gradients(self, loss, var_list, grad_loss, tape)
    530       # TODO(josh11b): Test that we handle weight decay in a reasonable way.
    531       if not callable(loss) and tape is None:
--> 532         raise ValueError("`tape` is required when a `Tensor` loss is passed.")
    533       tape = tape if tape is not None else backprop.GradientTape()
    534

ValueError: `tape` is required when a `Tensor` loss is passed.


1 Answer

Posted 2024-05-13 21:25:11

You still have some work to do! You need to compute the gradients yourself and then use the optimizer to update the variables. I've modified your code accordingly. Also, your loss function wasn't working well for this problem.

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

def line(x):
  return 2*x + 4

X = np.arange(0, 20)
y = tf.constant(np.array([k for k in line(X)], dtype=np.float32))
a = tf.Variable(1.0, trainable=True)
b = tf.Variable(0.2, trainable=True)

def objective_fun(X):
    # model prediction: a*X + b
    y_in = a * X + b
    return y_in


def loss_fun(y_true, y_pred):
    # loss = tf.reduce_mean(tf.square(y_true - y_pred))
    loss = tf.reduce_mean(tf.abs(y_pred - y_true))
    return loss

optimizer = tf.optimizers.SGD(0.01)

MAX_ITER = 1000
for it in range(MAX_ITER):
    # record the forward pass so gradients can be computed
    with tf.GradientTape() as tape:
        y_pred = objective_fun(X)
        loss = loss_fun(y, y_pred)
    # gradients of the loss w.r.t. the variables, then one optimizer step
    grad = tape.gradient(loss, [a, b])
    optimizer.apply_gradients(zip(grad, [a, b]))
    print(loss.numpy())

Here is the result of the optimization:

a.numpy(), b.numpy()

(1.9880208, 3.8429925)
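Alternatively, the original error can be avoided without writing the tape yourself: the check that raised the `ValueError` (`if not callable(loss) and tape is None`) shows that `optimizer.minimize` in TF 2.x also accepts a *callable* loss, in which case it records the gradients internally. A minimal sketch of that variant on the same toy line-fitting problem (the `loss_fn` closure is my own naming; assumes a TF 2.x `optimizer_v2`-style optimizer as in the traceback above):

```python
import numpy as np
import tensorflow as tf

def line(x):
    return 2 * x + 4

X = tf.constant(np.arange(0, 20), dtype=tf.float32)
y = line(X)                 # ground-truth targets
a = tf.Variable(1.0)
b = tf.Variable(0.2)

def loss_fn():
    # callable loss: minimize() wraps it in its own GradientTape
    return tf.reduce_mean(tf.abs(a * X + b - y))

optimizer = tf.optimizers.SGD(0.01)
for _ in range(1000):
    optimizer.minimize(loss_fn, var_list=[a, b])

print(a.numpy(), b.numpy())  # should approach the true slope 2 and intercept 4
```

This is the same optimization as the accepted answer, just with the tape handled by `minimize` instead of an explicit `with tf.GradientTape()` block.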
