Interpreting OLS weights after PCA (in Python)

Posted 2024-05-01 21:56:02


I want to interpret the regression weights of a model in which the input data have been preprocessed with PCA. In practice I have 100 highly correlated input dimensions, so I know PCA is useful there. For illustration, however, I will use the Iris dataset.

The following code illustrates my problem:

import numpy as np
import sklearn.datasets, sklearn.decomposition
from sklearn.linear_model import LinearRegression

# load data
X = sklearn.datasets.load_iris().data
w = np.array([0.3, 10, -0.1, -0.01])
Y = np.dot(X, w)

# set number of components to keep from PCA
n_components = 4

# reconstruct w
reg = LinearRegression().fit(X, Y)
w_hat = reg.coef_
print(w_hat)

# apply PCA
pca = sklearn.decomposition.PCA(n_components=n_components)
pca.fit(X)
X_trans = pca.transform(X)

# reconstruct w
reg_trans = LinearRegression().fit(X_trans, Y)
w_trans_hat = np.dot(reg_trans.coef_, pca.components_)
print(w_trans_hat)

Running this code, I can see that the weights are recovered well.

However, if I set the number of components to 3 (i.e. n_components = 3), the printed weights deviate substantially from the true values.

Am I misunderstanding how to transform these weights back? Or is it simply that PCA loses information when going from 4 components to 3?
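One way to gauge how much information the dropped component carries is to inspect PCA's `explained_variance_ratio_` (a quick sketch on the same Iris data, not part of the original question):

```python
import numpy as np
import sklearn.datasets, sklearn.decomposition

X = sklearn.datasets.load_iris().data
pca = sklearn.decomposition.PCA(n_components=4)
pca.fit(X)
# fraction of the total variance of X captured by each principal component
print(pca.explained_variance_ratio_)
```

For Iris the last component captures well under 1% of the variance in X. Note, though, that a low-variance direction can still carry a large weight in w, so "little variance lost" does not automatically mean "little change in the recovered weights".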


1 Answer
Answered 2024-05-01 21:56:02

I think this is working fine; the issue is that you were looking only at w_trans_hat rather than at the reconstructed Y. Note also that X should be centred first:

import numpy as np
import sklearn.datasets, sklearn.decomposition
from sklearn.linear_model import LinearRegression

# load data
X = sklearn.datasets.load_iris().data
# create fake loadings
w = np.array([0.3, 10, -0.1, -0.01])
# centre X
X = np.subtract(X, np.mean(X, 0))
# calculate Y
Y = np.dot(X, w)

# set number of components to keep from PCA
n_components = 3

# reconstruct w using linear regression
reg = LinearRegression().fit(X, Y)
w_hat = reg.coef_
print(w_hat)

# apply PCA
pca = sklearn.decomposition.PCA(n_components=n_components)
pca.fit(X)
X_trans = pca.transform(X)

# regress Y on principal components
reg_trans = LinearRegression().fit(X_trans, Y)
# reconstruct Y using regressed weights and transformed X
Y_trans = np.dot(X_trans, reg_trans.coef_)
# show MSE to original Y
print(np.mean((Y - Y_trans) ** 2))

# show w implied by reduced model in original space
w_trans_hat = np.dot(reg_trans.coef_, pca.components_)
print(w_trans_hat)
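As a sanity check (my own addition, not part of the original answer): with fewer than 4 components, `w_trans_hat` is exactly the orthogonal projection of the true w onto the subspace spanned by the kept components, because a regression on the PCA scores can only recover the part of w that lies in that subspace:

```python
import numpy as np
import sklearn.datasets, sklearn.decomposition
from sklearn.linear_model import LinearRegression

X = sklearn.datasets.load_iris().data
X = X - X.mean(axis=0)           # centre X, as in the answer
w = np.array([0.3, 10, -0.1, -0.01])
Y = X @ w

pca = sklearn.decomposition.PCA(n_components=3).fit(X)
X_trans = pca.transform(X)
reg = LinearRegression().fit(X_trans, Y)
# implied weights in the original feature space
w_trans_hat = reg.coef_ @ pca.components_

# orthogonal projection of the true w onto the span of the 3 kept components
w_proj = pca.components_.T @ (pca.components_ @ w)
print(np.allclose(w_trans_hat, w_proj))  # → True
```

So the mismatch at n_components = 3 is not a bug in the back-transformation: it is exactly the component of w that lies along the discarded principal direction.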
