How can I fix this error in my linear regression code with TensorFlow 2.0?

Posted on 2024-03-28 22:59:52


I'm learning linear regression with TensorFlow 2.0 and plan to use the SGD optimizer from Keras. Here is my code:

import tensorflow as tf
from tensorflow import keras

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline

x_train = [1,2,3]
y_train = [1,2,3]

W = tf.Variable(np.random.normal([1]),name='weight')
b = tf.Variable(np.random.normal([1]),name='bias')

cost = tf.reduce_mean(tf.square(x_train*W + b-y_train))

opt = keras.optimizers.SGD(learning_rate=0.1)
fig=plt.grid()
plt.scatter(x_train,y_train)
plt.xlabel('x')
plt.ylabel('y')
for i in range(20):
    plt.title('hypothesis: epoch {}'.format(i+1))
    plt.plot(hypothesis, 'r.-',label='hypothesis')
    plt.legend(loc='best')
    opt.minimize(cost, var_list=[W,b])

I intended to draw a plot for each epoch, but I get this error on the last statement of the loop:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-be257fb20d71> in <module>
      8     plt.plot(hypothesis, 'r.-',label='hypothesis')
      9     plt.legend(loc='best')
---> 10     opt.minimize(cost, var_list=[W,b])

~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\optimizer_v2\optimizer_v2.py in minimize(self, loss, var_list, grad_loss, name)
    315     """
    316     grads_and_vars = self._compute_gradients(
--> 317         loss, var_list=var_list, grad_loss=grad_loss)
    318 
    319     return self.apply_gradients(grads_and_vars, name=name)

~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\optimizer_v2\optimizer_v2.py in _compute_gradients(self, loss, var_list, grad_loss)
    349       if not callable(var_list):
    350         tape.watch(var_list)
--> 351       loss_value = loss()
    352     if callable(var_list):
    353       var_list = var_list()

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable

How can I fix this problem?


1 Answer

Here cost, i.e. cost = tf.reduce_mean(tf.square(x_train*W + b-y_train)), is actually a tensor (it is evaluated once, eagerly), but in TensorFlow 2 the loss argument of opt.minimize(cost, var_list=[W,b]) must be a zero-argument callable that returns the loss.

So you should define the cost as a function rather than a tensor, for example by wrapping it in a lambda:

cost = lambda: tf.reduce_mean(tf.square(x_train * W + b - y_train))

Also, you should train for more epochs than 20; a corrected version of the whole script is sketched below.
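As a minimal sketch (assuming the goal is simply to fit W and b on the toy data): it initializes the variables with tf.random.normal instead of np.random.normal so everything stays float32, wraps the loss in a lambda as described above, and drops the per-epoch plotting for brevity.

import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy training data from the question
x_train = np.array([1, 2, 3], dtype=np.float32)
y_train = np.array([1, 2, 3], dtype=np.float32)

# Trainable parameters, initialized from a standard normal distribution
W = tf.Variable(tf.random.normal([1]), name='weight')
b = tf.Variable(tf.random.normal([1]), name='bias')

# The loss must be a zero-argument callable so the optimizer can
# re-evaluate it (and its gradients) at every step
cost = lambda: tf.reduce_mean(tf.square(x_train * W + b - y_train))

opt = keras.optimizers.SGD(learning_rate=0.1)

for epoch in range(200):  # noticeably more epochs than the original 20
    opt.minimize(cost, var_list=[W, b])
    if (epoch + 1) % 50 == 0:
        print('epoch {:3d}  cost {:.6f}  W {:.4f}  b {:.4f}'.format(
            epoch + 1, cost().numpy(), W.numpy()[0], b.numpy()[0]))

After enough steps W and b should approach 1 and 0 respectively, since the toy data lies exactly on the line y = x.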
