Parallel error in GridSearchCV, while n_jobs > 1 works with other methods

Published 2024-04-28 23:26:04


I am running into the following problem with GridSearchCV: it raises a parallel error when n_jobs > 1. At the same time, n_jobs > 1 works fine for an individual model such as RandomForestClassifier.

Here is a minimal example reproducing the error:

import numpy as np
from sklearn import ensemble, model_selection

train = np.random.rand(100, 10)
targ = np.random.randint(0, 2, 100)

clf = ensemble.RandomForestClassifier(n_jobs=2)
clf.fit(train, targ)
Out[349]: RandomForestClassifier(bootstrap=True, class_weight=None,     criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=10, n_jobs=2, oob_score=False, random_state=None,
            verbose=0, warm_start=False)

This example runs fine.

However, the following does not work:

clf = ensemble.RandomForestClassifier()
param_grid = {'n_estimators': [10, 20]}
grid_s = model_selection.GridSearchCV(clf, param_grid=param_grid_gb, n_jobs=-1, verbose=1)
grid_s.fit(train, targ)

and gives the following error:

Fitting 3 folds for each of 2 candidates, totalling 6 fits

ImportErrorTraceback (most recent call last)
<ipython-input-351-b8bb45396026> in <module>()
      2 param_grid = {'n_estimators': [10,20]}
      3 grid_s= model_selection.GridSearchCV(clf, param_grid=param_grid_gb,n_jobs=-1,verbose=1)
----> 4 grid_s.fit(train, targ)

/root/anaconda3/envs/python2/lib/python2.7/site-packages/sklearn/model_selection/_search.pyc in fit(self, X, y, groups)
    943             train/test set.
    944         """
--> 945         return self._fit(X, y, groups, ParameterGrid(self.param_grid))
    946 
    947 

/root/anaconda3/envs/python2/lib/python2.7/site-packages/sklearn/model_selection/_search.pyc in _fit(self, X, y, groups, parameter_iterable)
    562                                   return_times=True, return_parameters=True,
    563                                   error_score=self.error_score)
--> 564           for parameters in parameter_iterable
    565           for train, test in cv_iter)
    566 

/root/anaconda3/envs/python2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
    726         self._aborting = False
    727         if not self._managed_backend:
--> 728             n_jobs = self._initialize_backend()
    729         else:
    730             n_jobs = self._effective_n_jobs()

/root/anaconda3/envs/python2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in _initialize_backend(self)
    538         try:
    539             return self._backend.configure(n_jobs=self.n_jobs, parallel=self,
--> 540                                            **self._backend_args)
    541         except FallbackToBackend as e:
    542             # Recursively initialize the backend in case of requested fallback.

/root/anaconda3/envs/python2/lib/python2.7/site-packages/sklearn/externals/joblib/_parallel_backends.pyc in configure(self, n_jobs, parallel, **backend_args)
    297         if already_forked:
    298             raise ImportError(
--> 299                 '[joblib] Attempting to do parallel computing '
    300                 'without protecting your import on a system that does '
    301                 'not support forking. To use parallel-computing in a '

ImportError: [joblib] Attempting to do parallel computing without protecting your import on a system that does not support forking. To use parallel-computing in a script, you must protect your main loop using "if __name__ == '__main__'". Please see the joblib documentation on Parallel for more information

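The final ImportError itself points at a fix: on a system where joblib cannot fork, the parallel code must be protected by an `if __name__ == '__main__'` guard so that worker processes can re-import the module without re-running the grid search. Below is a minimal sketch of that guard (the `main` function name is my own choice, not part of the original code); the rest reuses the data and grid from the question:

```python
# Sketch of the guard suggested by the joblib error message: wrap the
# parallel work in a function and invoke it only when the file is run
# as the main script, so spawned workers can safely import this module.
import numpy as np
from sklearn import ensemble, model_selection

def main():
    # Same toy data as in the question
    train = np.random.rand(100, 10)
    targ = np.random.randint(0, 2, 100)

    clf = ensemble.RandomForestClassifier()
    param_grid = {'n_estimators': [10, 20]}
    grid_s = model_selection.GridSearchCV(clf, param_grid=param_grid,
                                          n_jobs=-1, verbose=1)
    grid_s.fit(train, targ)
    return grid_s.best_params_

if __name__ == '__main__':
    # All parallel execution happens inside this guarded call.
    print(main())
```

Running the search from inside the guarded `main()` avoids the unprotected import that triggers the ImportError in the traceback above.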