Matrix preprocessing for linear regression

Posted 2024-04-25 23:32:39


It isn't clear to me why a dataset must be standardized before using sklearn.linear_model.LinearRegression; I don't see why standardizing should be necessary to get correct results rather than just using the raw data. As a test, I prepared a dataset:

import numpy as np
import pandas as pd
size = 700
data = pd.DataFrame()
data['x_1'] = x_1
data['x_2'] = x_2
# In Python 3, map() returns an iterator, so the original map(lambda ...) would
# not populate the column; a list comprehension does the same thing correctly
data['y'] = [x_1[i] * 7.5 - 2 * x_2[i] + noise[i] for i in range(size)]

where:

(The snippet defining x_1, x_2, and noise was lost from the original post; judging from the answers below, x_1 was drawn with mean 5 and standard deviation 2, and x_2 with standard deviation 1.)

Then I tried to find the coefficients with linear regression on the standardized matrix:

from sklearn.preprocessing import scale
from sklearn.utils import shuffle
df_shuffled = shuffle(data, random_state=123)
X = scale(df_shuffled[df_shuffled.columns[:-1]])
y = df_shuffled["y"]

The result was:

from sklearn.linear_model import LinearRegression
linear_regressor = LinearRegression()
linear_regressor.fit(X, y)
(14.951827073780766, 'x_1')
(-1.9171042297858722, 'x_2')

After that, I repeated all the steps without the scale() function and got better results:

(7.5042271168341887, 'x_1')
(-1.9835960918124507, 'x_2')

Is this just an anomaly, or did I make a mistake somewhere?


2 Answers

Standardization is not actually a requirement for linear regression. Here is an example where I split the data into a train/test split and then predict on the test set.

>>> df = pd.DataFrame({'x_1': np.random.normal(0, 1, size), 'x_2': np.random.normal(2, 1, size)})
>>> noise = np.random.normal(0, 1, size)  # draw the noise once, not fresh on every call
>>> df['y'] = [df['x_1'][i] * 7.5 - 2 * df['x_2'][i] + noise[i] for i in range(size)]
>>> lr = LinearRegression()
>>> X_scaled = scale(df[['x_1', 'x_2']])
>>> X_ns = df[['x_1', 'x_2']]
>>> y = df['y']
>>> train_X_scaled = X_scaled[:-100]
>>> test_X_scaled = X_scaled[-100:]
>>> train_X_ns = X_ns[:-100]
>>> test_X_ns = X_ns[-100:]
>>> train_y = y[:-100]
>>> test_y = y[-100:]
>>> lr.fit(train_X_scaled, train_y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.coef_
array([ 7.38189303, -2.04137514])
>>> lr.predict(test_X_scaled)
array([ -5.12130597, -21.58547658, -10.59483732, -10.56241312,
-16.88790301,   0.61347437,  -7.28207791,  -9.37464865,
-5.12411501, -14.79287322,  -9.84583896,   0.61183408,
-9.00695481,  -0.42201284, -20.50254306,   0.1984764 ,
-9.57419381,   1.39035118,   9.66405865, -10.18972252,
-8.76733834,  -7.33179222, -10.53075411,   0.51671133,
 3.65140463, -16.86740729,   7.86837224,   4.61310894,
-3.80289123, -11.92948864,  -6.55643999, -10.77231532,
 1.97181141,  15.75089958,   2.71987359,  -5.49740398,
-6.59654793,  -6.39042298,  -8.86057313,  12.63031921,
-8.05054779, -11.04476828,  -3.70610232,  -4.81986166,
-3.09909457,  10.3576317 ,  -6.48789854,  -4.05243726,
-4.11076559,  -9.21957658,  -4.36368549,   2.13365208,
-19.24153319,   6.52751487,  -3.48801127,   2.01989782,
-1.00673834, -10.33590131,  -9.25592347, -16.91433355,
 3.58685085,  -6.30149903,  -2.23264539,   6.86114404,
 8.33602945, -14.25656579, -22.24380384, -14.50287259,
-6.64710009, -17.40421316, -12.7734427 ,  -3.76204612,
-0.05843445,  -5.0349674 ,  -6.86404519,  -6.8523112 ,
-14.9479788 ,   1.6120415 ,  -6.24457762,  -7.11712009,
-5.57018237,  -2.89811595,  -5.44008672,   8.19302959,
-1.78437334, -19.32108323,   1.00091276,   4.79161569,
 1.65685676,  -8.68406543,   7.27219645,  -2.90941943,
 2.4613977 ,   2.94533763,  -6.35486958,  -1.01281799,
 2.13959957,  -6.73934486,  -1.65493937,  13.2605013 ])
>>> lr.fit(train_X_ns, train_y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.coef_
array([ 7.52554825, -1.98783572])
>>> lr.predict(test_X_ns)
(output omitted: the predictions are identical, element for element, to the predictions from the scaled fit above)

The scores are the same as well (the scoring snippet was lost from the original post).

So why standardize at all? Because it doesn't hurt. In a pipeline you may add extra steps, such as clustering or PCA, which do require scaling. Just remember that if you apply scaling, you also need to apply it to the data you score on. In that case you need StandardScaler, because it has fit and transform methods. In my example I used scale because I applied it before splitting into train and test. In real life, however, your future data is unknown, so you need StandardScaler to transform it using the mu and std found from the training set.
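A minimal sketch of that last point (the variable names and pipeline layout are mine, not from the post): StandardScaler inside a Pipeline learns mu and sigma on the training data only and reuses them automatically at prediction time.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 2))
y = 7.5 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=700)

X_train, X_test = X[:-100], X[-100:]
y_train, y_test = y[:-100], y[-100:]

# The scaler's mu/sigma are fitted on X_train only;
# predict() and score() reuse them on X_test
model = make_pipeline(StandardScaler(), LinearRegression())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```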

sklearn.preprocessing.scale() transforms a variable by subtracting its mean (mu) and dividing by its standard deviation (sigma):

x_scaled = (x - mu) / sigma

In your case, the values of mu and sigma for x_1 are 5 and 2 respectively. So calling scale subtracts 5 from every x_1 value and then divides by 2.
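A quick check (my own sketch, not from the answer) that scale() really computes (x - mu) / sigma, using the sample mean and population (ddof=0) standard deviation of the column itself:

```python
import numpy as np
from sklearn.preprocessing import scale

x = np.random.normal(5, 2, 700)  # x_1 drawn with mu = 5, sigma = 2, as in the question
x_scaled = scale(x)

# scale() subtracts the sample mean and divides by the ddof=0 standard deviation
assert np.allclose(x_scaled, (x - x.mean()) / x.std())
print(x_scaled.mean(), x_scaled.std())  # approximately 0 and 1
```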

Subtracting the mean does not affect the coefficients; it only changes the intercept. Dividing by sigma, however, changes the scale of the coefficients. If the relationship between x_1 and y is:

y = 7.5 * x_1 - 2 * x_2 + noise

and we divide x_1 by 2, then the coefficient must double to preserve the same relationship.

In this example, x_2 is unaffected because its sigma is 1.
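That rescaling effect can be verified directly (a sketch on my own simulated data, not the poster's): dividing a feature by 2 exactly doubles its fitted coefficient.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.normal(5, 2, 700).reshape(-1, 1)
y = 7.5 * x.ravel() + rng.normal(0, 1, 700)

lr = LinearRegression().fit(x, y)
lr_half = LinearRegression().fit(x / 2, y)  # as if we divided x_1 by its sigma = 2

# The slope fitted on x/2 is exactly twice the slope fitted on x
print(lr.coef_[0], lr_half.coef_[0])
```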

Coefficients/intercept without scaling:

X = df_shuffled[df_shuffled.columns[:-1]]  # unscaled features
linear_regressor = LinearRegression()
linear_regressor.fit(X, y)
print(linear_regressor.coef_)
print(linear_regressor.intercept_)
#[ 7.48676034 -1.99400201]
#0.0253066229528

With scaling:

X_scaled = scale(df_shuffled[df_shuffled.columns[:-1]])
linear_regressor2 = LinearRegression()
linear_regressor2.fit(X_scaled,y)
print(linear_regressor2.coef_)
print(linear_regressor2.intercept_)
#[ 14.90368565  -1.94029573]
#33.7451724511

In the second case, you get the coefficients and intercept for the scaled versions of x_1 and x_2.

This is neither a problem nor a mistake. It just means that if you use the fitted model for prediction, you must apply the same transformation to the new data.
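The scaled-fit parameters can be mapped back to the unscaled ones using the training mu and sigma, which is why the two fits make identical predictions. A sketch of that back-transformation (variable names are mine):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(5, 2, 700), rng.normal(0, 1, 700)])
y = 7.5 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 1, 700)

scaler = StandardScaler().fit(X)
lr_raw = LinearRegression().fit(X, y)
lr_std = LinearRegression().fit(scaler.transform(X), y)

# coef_raw = coef_std / sigma
# intercept_raw = intercept_std - sum(coef_std * mu / sigma)
coef_back = lr_std.coef_ / scaler.scale_
intercept_back = lr_std.intercept_ - np.sum(lr_std.coef_ * scaler.mean_ / scaler.scale_)
print(coef_back, intercept_back)
```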
