Gradient descent function with NumPy

Posted 2024-05-13 01:01:45


import numpy as np

def gradientDescent(X, y, theta, alpha, num_iters):
    print(X.shape, y.shape, theta.shape)
    m = len(y)
    for iter in range(num_iters):
        hypothesis = np.dot(X, theta)       # predictions, shape (m, 1)
        loss = hypothesis - y               # residuals, shape (m, 1)
        print("loss {}".format(loss[0]))    # prints only the first residual
        gradient = np.dot(X.transpose(), loss) / m
        theta = theta - alpha * gradient
    return theta

I have printed the shapes of X, y, theta, and the loss for clarity. The inputs are alpha = 0.01 and num_iters = 150. The result diverges after step 6, as shown below:

(97, 2) (97, 1) (2, 1)
loss [-17.592]
loss [-13.5419506]
loss [-12.82427147]
loss [-12.69896095]
loss [-12.67894766]
loss [-12.67764826]
loss [-12.67967143]
loss [-12.68228117]
loss [-12.68499113]
loss [-12.68771485]
loss [-12.69043697]
loss [-12.69315478]
...
loss [-13.01638377]
loss [-13.01851416]
loss [-13.0206407]
loss [-13.02276341]
loss [-13.0248823]

theta = [[-0.86287834]
 [ 0.88834569]]

theta should have been [[-3.6303]
 [ 1.1664]]
