I'm trying to implement gradient descent by computing the values in the function below, but my code throws an error.
def gradient_descent(X, y, theta, alpha, num_iters):
    """Args
    ----
    X (numpy mxn array) - The example inputs; the first column is expected
        to be all 1's.
    y (numpy m array) - A vector of the correct outputs of length m
    theta (numpy nx1 array) - An array of the set of theta parameters
        to evaluate
    alpha (float) - The learning rate to use for the iterative gradient
        descent
    num_iters (int) - The number of gradient descent iterations to perform

    Returns
    -------
    theta (numpy nx1 array) - The final theta parameters discovered after
        our gradient descent.
    J_history (numpy num_itersx1 array) - A history of the calculated
        cost for each iteration of our descent.
    """
    m = len(y)
    cost_history = np.zeros(num_iters)
    theta_history = np.zeros((num_iters, 2))
    for i in range(num_iters):
        prediction = np.reshape(np.dot(np.transpose(theta), X), 97)
        theta = theta - (1/m)*alpha*(X.T.dot((prediction - y)))
        theta_history[i, :] = theta.T
        J_history[i] = cal_cost(theta, X, y)
    return theta, J_history
Here are the arguments and variables I pass to the function:
theta = np.zeros((2, 1))
iterations = 1500
alpha = 0.01
theta, J = gradient_descent(X, y, theta, alpha, iterations)
The error message is:
ValueError: shapes (97,2) and (97,) not aligned: 2 (dim 1) != 97 (dim 0)
I'm not sure exactly where your ValueError is raised, but the ndarray with shape (97,) needs np.expand_dims run on it, like this: np.expand_dims(vector, axis=-1). That gives the vector shape (97, 1), so the shapes should align / be able to broadcast.
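As a minimal sketch of that fix, using placeholder arrays with the shapes from the traceback (the names X and y here are stand-ins, not the asker's actual data):

```python
import numpy as np

# Placeholder data with the shapes from the error message.
X = np.ones((97, 2))             # design matrix, shape (97, 2)
y = np.arange(97, dtype=float)   # target vector, 1-D shape (97,)

# A 1-D (97,) vector does not line up with a (97, 2) matrix in products
# that expect a column vector; expand it into an explicit column.
y_col = np.expand_dims(y, axis=-1)
print(y_col.shape)  # (97, 1)

# Now X.T (shape (2, 97)) times y_col (shape (97, 1)) is a valid
# matrix product, yielding a (2, 1) result like theta.
grad = X.T.dot(y_col)
print(grad.shape)  # (2, 1)
```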