Multivariate linear regression - Python - implementation problem

1 vote
1 answer
1867 views
Asked 2025-04-17 10:53

I am trying to implement linear regression with multiple variables (just two, really). I am using the data from Stanford's machine learning class. I already have this code working for the single-variable case. In theory the same code should also work for multiple variables, but in practice it does not.

Link to the data:

http://s3.amazonaws.com/mlclass-resources/exercises/mlclass-ex1.zip

Feature normalization:

''' This is for the regression-with-multiple-variables problem. You have to normalize the features before doing anything. Let's get started '''
from __future__ import division
import os,sys
from math import *

def mean(f,col):
    #This is to find the mean of a feature
    sigma = 0
    count = 0
    data = open(f,'r')
    for line  in data:
        points = line.split(",")
        sigma = sigma + float(points[col].strip("\n"))
        count+=1
    data.close()
    return sigma/count
def size(f):
    count = 0
    data = open(f,'r')

    for line in data:
        count +=1
    data.close()
    return count
def standard_dev(f,col):
    #Calculate the standard deviation. Formula: sqrt( sum( (x - mean)^2 ) / N )
    data = open(f,'r')
    sigma = 0
    mean = 0
    if(col==0):
        mean = mean_area
    else:
        mean = mean_bedroom
    for line in data:
        points = line.split(",")
        sigma  = sigma + (float(points[col].strip("\n")) - mean) ** 2
    data.close()
    return sqrt(sigma/SIZE)

def substitute(f,fnew):
    ''' Take the old file.  
        1. Subtract the mean values from each feature
        2. Scale it by dividing with the SD
    '''
    data = open(f,'r')
    data_new = open(fnew,'w')
    for line in data:
        points = line.split(",")
        new_area = (float(points[0]) - mean_area ) / sd_area
        new_bedroom = (float(points[1].strip("\n")) - mean_bedroom) / sd_bedroom
        data_new.write("1,"+str(new_area)+ ","+str(new_bedroom)+","+str(points[2].strip("\n"))+"\n")
    data.close()
    data_new.close()
global mean_area
global mean_bedroom
mean_bedroom = mean(sys.argv[1],1)
mean_area = mean(sys.argv[1],0)
print 'Mean number of bedrooms',mean_bedroom
print 'Mean area',mean_area
global SIZE
SIZE = size(sys.argv[1])
global sd_area
global sd_bedroom
sd_area = standard_dev(sys.argv[1],0)
sd_bedroom=standard_dev(sys.argv[1],1)
substitute(sys.argv[1],sys.argv[2])

I implemented the mean and standard deviation calculations in the code myself instead of using the NumPy or SciPy libraries. After writing the normalized values out to a file, here is a snapshot (a quick NumPy cross-check follows the snapshot):

X1 X2 X3 House price

1,0.131415422021,-0.226093367578,399900
1,-0.509640697591,-0.226093367578,329900
1,0.507908698618,-0.226093367578,369000
1,-0.743677058719,-1.5543919021,232000
1,1.27107074578,1.10220516694,539900
1,-0.0199450506651,1.10220516694,299900
1,-0.593588522778,-0.226093367578,314900
1,-0.729685754521,-0.226093367578,198999
1,-0.789466781548,-0.226093367578,212000
1,-0.644465992588,-0.226093367578,242500
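
For reference, the hand-rolled normalization can be cross-checked against NumPy with the short sketch below. This is only a sanity check, and it assumes the raw multi-variable file from the zip (ex1data2.txt, with columns area, bedrooms, price) is in the current directory:

import numpy as np

# Load the raw training data: area, number of bedrooms, price (one example per row).
# The filename is an assumption based on the zip linked above.
raw = np.loadtxt("ex1data2.txt", delimiter=",")
features = raw[:, :2]

# np.std with the default ddof=0 matches the sqrt(sigma / N) formula used above.
mu = features.mean(axis=0)
sd = features.std(axis=0)
normalized = (features - mu) / sd

print(mu, sd)           # compare with mean_area / mean_bedroom and the two SDs
print(normalized[:5])   # compare with the first rows of the generated file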

I run the regression on this data to find the parameters. Here is the relevant code:

''' The plan is to rewrite and, this time, calculate the cost each iteration to make sure it is decreasing. Also make it general enough to handle multiple variables '''
from __future__ import division
import os,sys

def computecost(X,Y,theta):
    #X is the feature vector, Y is the target value
    h_theta=calculatehTheta(X,theta)
    delta = (h_theta - Y) * (h_theta - Y)
    return (1/194) * delta 



def allCost(f,no_features):
    theta=[0,0]
    sigma=0
    data = open(f,'r')
    for line in data:
        X=[]
        Y=0
        points=line.split(",")
        for i in range(no_features):
            X.append(float(points[i]))
        Y=float(points[no_features].strip("\n"))
        sigma=sigma+computecost(X,Y,theta)
    return sigma

def calculatehTheta(points,theta):
    #This takes one row of points (1, feature1, feature2, ...) and returns theta . x
    #print 'Points are',points
    sigma  = 0 
    for i in range(len(theta)):

        sigma = sigma + theta[i] * float(points[i])
    return sigma



def gradient_Descent(f,no_iters,no_features,theta):
    ''' Calculate ( h(x) - y ) * xj(i) . And then subtract it from thetaj . Continue for 1500 iterations and you will have your answer'''


    X=[]
    Y=0
    sigma=0
    alpha=0.01
    for i in range(no_iters):
        for j in range(len(theta)):
            data = open(f,'r')
            for line in data:
                points=line.split(",")
                for i in range(no_features):
                    X.append(float(points[i]))
                Y=float(points[no_features].strip("\n"))
                h_theta = calculatehTheta(points,theta)
                delta = h_theta - Y
                sigma = sigma + delta * float(points[j])
            data.close()
            theta[j] = theta[j] - (alpha/97) * sigma

            sigma = 0
    print theta

print allCost(sys.argv[1],2)
print gradient_Descent(sys.argv[1],1500,2,[0,0,0])

It outputs the following parameters:

[-3.8697149722857996e-14, 0.02030369056348706, 0.979706406501678]

All three parameters are badly wrong :( The exact same code works fine in the single-variable case.

Thanks!

1 Answer

2

The global variables and the four levels of nested loops worry me. So does the data handling that keeps re-reading and re-writing files.

Is your data really too big to fit in memory?

Why not use the csv module to handle the files?

Why not use NumPy for the numerical part?

Don't reinvent the wheel.

Assuming your data entries are laid out row-wise, you can normalize the data and do a least-squares fit in two lines:

normData = (data-data.mean(axis = 0))/data.std(axis = 0)
c = numpy.dot(numpy.linalg.pinv(normData),prices)
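
To make that concrete, here is a minimal end-to-end sketch along those lines. It is not the poster's code: it assumes the ex1data2.txt file from the zip, adds the intercept column of ones that the poster's normalized file already contains, and solves the least-squares problem with the pseudoinverse instead of gradient descent:

import numpy as np

# area, bedrooms, price; one training example per row (assumed filename from the zip)
raw = np.loadtxt("ex1data2.txt", delimiter=",")
features, prices = raw[:, :2], raw[:, 2]

# Normalize each feature column: subtract its mean, divide by its standard deviation.
normData = (features - features.mean(axis=0)) / features.std(axis=0)

# Prepend the column of ones for the intercept term.
A = np.column_stack([np.ones(len(normData)), normData])

# Least-squares fit via the pseudoinverse: theta = pinv(A) . prices
theta = np.dot(np.linalg.pinv(A), prices)
print(theta)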

In reply to a comment from the original poster:

OK, then the only advice I can give you is to break it up into smaller pieces, so that it is easier to see what is going on and easier to check that the small pieces work.

This may not be the problem, but two of your four nested loops use i as the loop index. That kind of issue is avoided by splitting the code into smaller scopes.

I don't think I have written an explicit nested loop or declared a global variable in years.
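
As an illustration of that advice only (a sketch of the structure, not a claim that it fixes the bug above): batch gradient descent can be split into small functions with no globals, a simultaneous update of all theta components, and no reused loop variables. The function names here are just for illustration:

def predict(theta, x_row):
    # h_theta(x) = theta . x for one example; x_row already starts with the 1 term.
    return sum(t * x for t, x in zip(theta, x_row))

def gradient_step(theta, X, y, alpha):
    # One batch update: every theta[j] is computed from the *old* theta.
    m = len(X)
    errors = [predict(theta, x_row) - target for x_row, target in zip(X, y)]
    return [t - (alpha / m) * sum(err * x_row[j] for err, x_row in zip(errors, X))
            for j, t in enumerate(theta)]

def gradient_descent(X, y, alpha=0.01, n_iters=1500):
    # X is a list of rows like [1, feature1, feature2], y is the list of targets.
    theta = [0.0] * len(X[0])
    for _ in range(n_iters):
        theta = gradient_step(theta, X, y, alpha)
    return theta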
