Optimizer minimize error: 'float' object has no attribute 'dtype'

Posted on 2024-05-23 20:41:44


I am a beginner with TensorFlow. I ran into some problems computing gradients with TensorFlow 2.0. Can anyone help me?

Here is my code. The error message is:

if not t.dtype.is_floating:
AttributeError: 'float' object has no attribute 'dtype'

I tried:

^{pr2}$

The message becomes:

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable

import tensorflow as tf
import numpy as np
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

# w = tf.Variable([1.0,1.0],dtype = tf.float32)
w = [1.0,1.0]
opt=tf.keras.optimizers.SGD(0.1)
mse=tf.keras.losses.MeanSquaredError()
for i in range(20):
    print("epoch:",i,"w:", w)
    with tf.GradientTape() as tape:
        logit = w[0] * train_X + w[1]
        loss= mse(train_Y,logit)
    w = opt.minimize(loss, var_list=w)

I don't know how to fix it. Thanks for any suggestions.


Tags: import, object, is, tf, tensorflow, as, np, not
1 Answer
User
#1 · Posted on 2024-05-23 20:41:44

You are not using GradientTape correctly. I have demonstrated below how you should apply it. I created a model with a single-unit Dense layer to play the role of your w variable.

import tensorflow as tf
import numpy as np
train_X = np.linspace(-1, 1, 100)
train_X = np.expand_dims(train_X, axis=-1)
print(train_X.shape)    # (100, 1)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10
print(train_Y.shape)    # (100, 1)

# First create a model with one Dense unit and a bias term
input = tf.keras.layers.Input(shape=(1,))
w = tf.keras.layers.Dense(1)(input)   # use_bias is True by default
model = tf.keras.Model(inputs=input, outputs=w)

opt=tf.keras.optimizers.SGD(0.1)
mse=tf.keras.losses.MeanSquaredError()

for i in range(20):
    print('Epoch: ', i)
    with tf.GradientTape() as grad_tape:
        logits = model(train_X, training=True)
        model_loss = mse(train_Y, logits)
        print('Loss =', model_loss.numpy())

    gradients = grad_tape.gradient(model_loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
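
If you would rather keep the two scalar parameters from the question instead of a Dense layer, the same two errors can also be avoided by making them tf.Variable objects and giving the optimizer a zero-argument callable. The following is only a minimal sketch, assuming the TF 2.x tf.keras optimizer API in which Optimizer.minimize accepts a callable loss together with var_list:

import tensorflow as tf
import numpy as np

train_X = np.linspace(-1, 1, 100).astype(np.float32)
train_Y = 2 * train_X + np.random.randn(*train_X.shape).astype(np.float32) * 0.33 + 10

# var_list must contain tf.Variable objects, not Python floats; plain floats
# are what triggered "'float' object has no attribute 'dtype'".
w = [tf.Variable(1.0), tf.Variable(1.0)]

opt = tf.keras.optimizers.SGD(0.1)
mse = tf.keras.losses.MeanSquaredError()

# In eager mode, minimize expects the loss as a zero-argument callable;
# passing an already-evaluated EagerTensor is what triggered
# "'EagerTensor' object is not callable".
def loss_fn():
    logit = w[0] * train_X + w[1]
    return mse(train_Y, logit)

for i in range(20):
    print("epoch:", i, "w:", [v.numpy() for v in w])
    opt.minimize(loss_fn, var_list=w)

Under the hood this does the same thing as the GradientTape plus apply_gradients loop shown above: the optimizer records loss_fn on a tape, computes the gradients with respect to var_list, and applies them.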
