Resolving NaN values in TensorFlow linear-model gradient descent optimization

Posted 2024-04-27 03:00:12


I wrote an ML model (a linear model) in TensorFlow and used its gradient descent optimizer. It normally works, but when I give it the following input it produces NaN values:

0.699999988079,1.5,0.03
-0.20000000298,2.40000009537,-0.3
-0.40000000596,8.30000019073,0.02
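
These rows are what I read from Dataset.csv. They can be recreated with a small helper like the one below (a sketch; I am assuming these three rows are the whole file and that the columns are x1, x2, y in the order the script further down reads them):

rows = [
    (0.699999988079, 1.5, 0.03),
    (-0.20000000298, 2.40000009537, -0.3),
    (-0.40000000596, 8.30000019073, 0.02),
]
# Write the sample rows to Dataset.csv so the script can be run as-is.
with open('Dataset.csv', 'w') as f:
    for x1, x2, y in rows:
        f.write('{},{},{}\n'.format(x1, x2, y))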

The output is:

0 90.976654 [0.42633438] [-1.7960052] [-0.4994047]
20 1.4133478e+27 [2.5259817e+11] [-7.2311757e+12] [-1.1611477e+12]
40 inf [9.9565155e+23] [-2.85027e+25] [-4.5768283e+24]
60 nan [inf] [-inf] [-inf]
80 nan [nan] [nan] [nan]
100 nan [nan] [nan] [nan]
120 nan [nan] [nan] [nan]

I am new to TensorFlow and cannot figure this out. I would appreciate help solving it. The code is as follows:

import tensorflow as tf
import csv
import numpy as np

from tensorflow.python import debug as tf_debug


x1_data = []
x2_data = []
y_data = []
# Read the three columns (x1, x2, y) from Dataset.csv
with open('Dataset.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    for row in readCSV:
        x1_data.append(float(row[0]))
        x2_data.append(float(row[1]))
        y_data.append(float(row[2]))

print(x1_data)
print(x2_data)
print(y_data)

# Model parameters, initialized uniformly in [-1, 1]
W1 = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
W2 = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.random_uniform([1], -1.0, 1.0))

# Linear hypothesis: y = W1*x1 + W2*x2 + b
hypothesis = W1 * x1_data + W2 * x2_data + b

# Mean squared error cost
cost = tf.reduce_mean(tf.square(hypothesis - y_data))

# Learning rate and gradient descent training op
a = tf.Variable(0.1)
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(cost), sess.run(W1), sess.run(W2), sess.run(b))
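
In case it helps with diagnosis, the raw gradients can be logged alongside the loss (a sketch that continues the graph built above, assuming the same TensorFlow 1.x API; optimizer.compute_gradients returns a list of (gradient, variable) pairs):

# Build gradient tensors from the same optimizer and cost defined above.
grads_and_vars = optimizer.compute_gradients(cost)
grad_tensors = [g for g, _ in grads_and_vars]

sess.run(init)  # re-initialize the variables for a fresh run
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        # Print the loss and the gradient of each parameter at this step.
        print(step, sess.run(cost), sess.run(grad_tensors))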
