from matplotlib import pyplot as plt
from sklearn import svm
def f_importances(coef, names, top=-1):
    imp = coef
    # Sort the importances (and their names) in ascending order
    imp, names = zip(*sorted(list(zip(imp, names))))
    # Show all features by default
    if top == -1:
        top = len(names)
    plt.barh(range(top), imp[::-1][0:top], align='center')
    plt.yticks(range(top), names[::-1][0:top])
    plt.show()
# whatever your features are called
features_names = ['input1', 'input2', ...]

clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)

# Specify the top n features you want to visualize.
# You can also drop the abs() call if you are interested
# in the negative contribution of features.
f_importances(abs(clf.coef_[0]), features_names, top=10)
The plot itself takes only a single line of code: fit the SVM model, then plot as follows.
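The original code for this answer did not survive here; the following is a minimal sketch of the idea, assuming the training data lives in a pandas DataFrame so that the coefficients can be paired with column names (the variable names and synthetic data are illustrative, not from the original answer):

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn import svm

# Synthetic stand-in data (illustrative only): 30 samples, 12 features.
rng = np.random.default_rng(0)
features = pd.DataFrame(rng.random((30, 12)),
                        columns=[f'input{i}' for i in range(12)])
y = np.tile([0, 1], 15)  # two balanced classes

clf = svm.SVC(kernel='linear')
clf.fit(features, y)

# The single plotting line: pair each absolute coefficient with its
# column name, keep the 10 largest, and draw a horizontal bar chart.
pd.Series(abs(clf.coef_[0]), index=features.columns).nlargest(10).plot(kind='barh')
plt.show()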
The result is a horizontal bar chart of the most contributing features of the SVM model in absolute values.
I created the solution at the top of this section, which also works with Python 3; it is based on Jakub Macina's code snippet.
Yes, the SVM classifier has the attribute coef_, but it is only available for SVMs with a linear kernel. For other kernels it is not possible, because the kernel method transforms the data into another space that is not related to the input space; see the explanation. The output of the function is a horizontal bar chart of the feature weights.
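To make the linear-kernel restriction concrete, here is a minimal sketch on synthetic data (the data and variable names are illustrative, not from the original answer): with kernel='linear' the fitted model exposes coef_, while with a non-linear kernel such as 'rbf' accessing coef_ raises an AttributeError.

import numpy as np
from sklearn import svm

X = np.random.rand(20, 4)     # 20 samples, 4 features (synthetic)
y = np.tile([0, 1], 10)       # two classes

# Linear kernel: coef_ is available, one weight per feature.
linear_clf = svm.SVC(kernel='linear').fit(X, y)
print(linear_clf.coef_.shape)  # (1, 4) for binary classification

# Non-linear kernel: accessing coef_ raises an AttributeError.
rbf_clf = svm.SVC(kernel='rbf').fit(X, y)
try:
    rbf_clf.coef_
except AttributeError as err:
    print(err)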