How to implement SMOTE in cross-validation and GridSearchCV

Published 2024-04-28 20:51:40


I'm relatively new to Python. Could you help me turn my SMOTE implementation into a proper pipeline? What I want is to apply over- and undersampling to the training set of each k-fold iteration, so that the model is trained on a balanced data set and evaluated on the imbalanced, held-out fold. The problem is that when I do it this way, I can't use the familiar sklearn interface for evaluation and grid search.

Is it possible to build something similar to model_selection.RandomizedSearchCV? My attempt so far:

import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, roc_auc_score
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTEENN

df = pd.read_csv("Imbalanced_data.csv")  # load the data set
X = df.iloc[:, 0:64].values
y = df.iloc[:, 64].values

n_splits = 2
n_measures = 2  # recall and ROC AUC
kf = StratifiedKFold(n_splits=n_splits)  # stratified so each fold keeps the class ratio
clf_rf = RandomForestClassifier(n_estimators=25, random_state=1)
scores = np.zeros((n_splits, n_measures))  # one row per fold, one column per measure

for i, (train_index, test_index) in enumerate(kf.split(X, y)):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    # resample only the training fold; the test fold stays imbalanced
    sm = SMOTE(sampling_strategy='auto', k_neighbors=5)
    smote_enn = SMOTEENN(smote=sm)
    X_train_res, y_train_res = smote_enn.fit_resample(X_train, y_train)
    clf_rf.fit(X_train_res, y_train_res)
    y_pred = clf_rf.predict(X_test)
    scores[i, 0] = recall_score(y_test, y_pred)
    scores[i, 1] = roc_auc_score(y_test, y_pred)
