I want to run several linear regression types (Lasso, Ridge, ElasticNet and SVR) on a dataset with 5000 rows and 6 features, using GridSearchCV for cross-validation. The code is extensive, but here are the key parts:
def splitTrainTestAdv(df):
    y = df.iloc[:, -5:]   # last 5 columns
    X = df.iloc[:, :-5]   # all but the last 5 columns
    # Scaling and sampling
    X = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.8, random_state=0)
    return X_train, X_test, y_train, y_test
def performSVR(x_train, y_train, X_test, parameter):
    C = parameter[0]
    epsilon = parameter[1]
    kernel = parameter[2]
    model = svm.SVR(C=C, epsilon=epsilon, kernel=kernel)
    model.fit(x_train, y_train)
    return model.predict(X_test)  # prediction for the test set
def performRidge(X_train, y_train, X_test, parameter):
    alpha = parameter[0]
    model = linear_model.Ridge(alpha=alpha, normalize=True)
    model.fit(X_train, y_train)
    return model.predict(X_test)  # prediction for the test set
MODELS = {
    'lasso': (
        linear_model.Lasso(),
        {'alpha': [0.95]}
    ),
    'ridge': (
        linear_model.Ridge(),
        {'alpha': [0.01]}
    ),
}
def performParameterSelection(model_name, feature, X_test, y_test, X_train, y_train):
    print("# Tuning hyper-parameters for %s" % feature)
    print()
    model, param_grid = MODELS[model_name]
    gs = GridSearchCV(model, param_grid, n_jobs=1, cv=5, verbose=1,
                      scoring='%s_weighted' % feature)
    gs.fit(X_train, y_train)
    print("Best parameters set found on development set:")
    print(gs.best_params_)
    print()
    print("Grid scores on development set:")
    print()
    for params, mean_score, scores in gs.grid_scores_:
        print("%0.3f (+/-%0.03f) for %r"
              % (mean_score, scores.std() * 2, params))
    print("Detailed classification report:")
    print()
    print("The model is trained on the full development set.")
    print("The scores are computed on the full evaluation set.")
    y_true, y_pred = y_test, gs.predict(X_test)
    print(classification_report(y_true, y_pred))
soil = pd.read_csv('C:/training.csv', index_col=0)
soil = getDummiedSoilDepth(soil)
np.random.seed(2015)
soil = shuffleData(soil)
soil = soil.drop('Depth', 1)
X_train, X_test, y_train, y_test = splitTrainTestAdv(soil)

scores = ['precision', 'recall']
for score in scores:
    for model in MODELS.keys():
        print('####################')
        print(model, score)
        print('####################')
        performParameterSelection(model, score, X_test, y_test, X_train, y_train)
You can assume that all the required imports are in place.

I am getting this error and don't know why:
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     18     print(model, score)
     19     print('####################')
---> 20     performParameterSelection(model, score, X_test, y_test, X_train, y_train)
     21
<ipython-input-27-304555776e21> in performParameterSelection(model_name, feature, X_test, y_test, X_train, y_train)
12 # cv=5 - constant; verbose - keep writing
13
---> 14 gs.fit(X_train, y_train) # Will get grid scores with outputs from ALL models described above
15
16 #pprint(sorted(gs.grid_scores_, key=lambda x: -x.mean_validation_score))
C:\Users\Tony\Anaconda\lib\site-packages\sklearn\grid_search.pyc in fit(self, X, y)
C:\Users\Tony\Anaconda\lib\site-packages\sklearn\metrics\classification.pyc in _check_targets(y_true, y_pred)
90 if (y_type not in ["binary", "multiclass", "multilabel-indicator",
91 "multilabel-sequences"]):
---> 92 raise ValueError("{0} is not supported".format(y_type))
93
94 if y_type in ["binary", "multiclass"]:
ValueError: continuous-multioutput is not supported
I am fairly new to Python and this error baffles me. Surely it can't be because I have 6 features? I tried to follow the standard built-in functions.

Please help!
First, let's reproduce the problem.

Start with the required imports, then create some synthetic data. With that in place we can both reproduce the error and see which option does not trigger it: scoring with the estimator's default runs fine, while scoring with weighted precision/recall does not.

Indeed, the error is exactly the same as yours above: "continuous-multioutput is not supported".
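The reproduction described above might look like the following sketch (the data is synthetic; `make_regression` stands in for your soil dataset, and the parameter values are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import precision_score

# Continuous multi-output targets, like the 5-column y in the question
X, y = make_regression(n_samples=200, n_features=6, n_targets=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# This runs fine: a regressor's default score is R^2, which is
# defined for continuous targets
gs = GridSearchCV(Ridge(), {'alpha': [0.01, 0.1, 1.0]}, cv=5)
gs.fit(X_train, y_train)

# This does not: precision (like recall) needs class labels,
# not continuous values
try:
    precision_score(y_test, gs.predict(X_test), average='weighted')
except ValueError as e:
    print(e)  # continuous-multioutput is not supported
```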
If you think about what recall measures, it is tied to binary or categorical data, for which we can define things like false negatives. At least when I reproduced your data I used continuous values, and recall simply is not defined for them. If you use the default score, it works, as you saw above.
So you may want to look at your predictions and understand why they are continuous (i.e. use a classifier instead of a regressor), or use a different score.
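As a sketch of the second option, pass a regression metric as the scoring string instead; `'neg_mean_squared_error'` is the spelling in current scikit-learn (older versions used `'mean_squared_error'`):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=6, random_state=0)

# MSE (negated, so that larger is better) is defined for continuous
# targets, unlike precision and recall
gs = GridSearchCV(Lasso(), {'alpha': [0.1, 0.5, 0.95]},
                  cv=5, scoring='neg_mean_squared_error')
gs.fit(X, y)
print(gs.best_params_, gs.best_score_)
```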
Also, if you run the regression with only one set (column) of y values, you still get an error. This time it says, more simply, "continuous output is not supported", i.e. the problem is using recall (or precision) on continuous data, multi-output or not.
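The single-output case can be checked directly (a sketch using `recall_score` on one made-up column of continuous values):

```python
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([0.1, 0.5, 2.3, 1.7])  # one continuous column
y_pred = np.array([0.2, 0.4, 2.1, 1.9])

try:
    recall_score(y_true, y_pred, average='weighted')
except ValueError as e:
    print(e)  # continuous is not supported
```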