Using xgboost in BaggingRegressor

Posted 2024-06-07 21:50:02


I need to run xgboost inside a BaggingRegressor. Right now I use xgboost's native API:

import xgboost

# X_train, lab_train, test_index, X_test and ttest come from the
# surrounding pipeline (not shown).
D_train = xgboost.DMatrix(X_train, lab_train)
D_val = xgboost.DMatrix(X_train[test_index], lab_train[test_index])
D_pred = xgboost.DMatrix(X_train[test_index])
D_test = xgboost.DMatrix(X_test)
D_ttest = xgboost.DMatrix(ttest)

xgb_params = {
    "objective": "reg:linear",   # deprecated alias of "reg:squarederror" in recent xgboost
    "eta": 0.01,
    "min_child_weight": 6,
    "subsample": 0.7,
    "colsample_bytree": 0.6,
    "scale_pos_weight": 0.8,
    "silent": 1,                 # replaced by "verbosity" in xgboost >= 1.0
    "max_depth": 10,
    "max_delta_step": 2,
}
watchlist = [(D_train, 'train')]

model = xgboost.train(params=xgb_params, dtrain=D_train, num_boost_round=1000,
                      evals=watchlist, verbose_eval=1, early_stopping_rounds=20)

y_pred1 = model.predict(D_ttest)

How can I use all of these same parameters with BaggingRegressor?

If I do this:

gdr = BaggingRegressor(base_estimator=xgboost.train(params=xgb_params,
                                                    dtrain=D_train,
                                                    num_boost_round=3000,
                                                    evals=watchlist,
                                                    verbose_eval=1,
                                                    early_stopping_rounds=20))

then the xgboost training runs first, and after that the code

from sklearn.metrics import mean_squared_error

gdr_model = gdr
print(gdr_model)
gdr_model.fit(X_train, lab_train)
train_pred = gdr_model.predict(X_test)

print('mse from log: ', mean_squared_error(lab_train, train_pred))

train_pred = gdr_model.predict(ttest)

makes no sense, or am I wrong? Please tell me how to fix this.


Tags: test, index, model, lab, train, params, print, xgboost
1 Answer
Answered by a forum user · Posted 2024-06-07 21:50:02

Xgboost has a scikit-learn wrapper. Try the following template:

import xgboost
from xgboost.sklearn import XGBRegressor
from sklearn.datasets import load_diabetes  # load_boston was removed in scikit-learn 1.2
from sklearn.ensemble import BaggingRegressor

X, y = load_diabetes(return_X_y=True)

reg = BaggingRegressor(estimator=XGBRegressor())  # use base_estimator= before scikit-learn 1.2

reg.fit(X, y)
