<p>I created a solution that also works for Python 3, based on Jakub Macina's code snippet.</p>
<pre><code>from matplotlib import pyplot as plt
from sklearn import svm

def f_importances(coef, names, top=-1):
    imp = coef
    imp, names = zip(*sorted(list(zip(imp, names))))

    # Show all features
    if top == -1:
        top = len(names)

    plt.barh(range(top), imp[::-1][0:top], align='center')
    plt.yticks(range(top), names[::-1][0:top])
    plt.show()

# whatever your features are called
features_names = ['input1', 'input2', ...]

clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)

# Specify the top n features you want to visualize.
# You can also drop the abs() call if you are interested
# in the negative contribution of features.
f_importances(abs(clf.coef_[0]), features_names, top=10)
</code></pre>
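<p>For a quick self-contained check of the coefficient extraction (without the plot), here is a minimal sketch on synthetic data; the feature names and dataset parameters are placeholders, not from the original question:</p>

```python
from sklearn import svm
from sklearn.datasets import make_classification

# Hypothetical synthetic binary classification data with 4 named features
X_train, y_train = make_classification(n_samples=100, n_features=4,
                                       n_informative=2, n_redundant=0,
                                       random_state=0)
features_names = ['input1', 'input2', 'input3', 'input4']

clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)

# For a binary linear SVM, clf.coef_[0] holds one weight per feature;
# its absolute value is the importance plotted above.
importances = abs(clf.coef_[0])
ranked = sorted(zip(importances, features_names), reverse=True)
for imp, name in ranked:
    print(name, round(imp, 3))
```

<p>Note that <code>coef_</code> is only available with <code>kernel='linear'</code>; for non-linear kernels the weights are not defined in feature space.</p>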
<p><a href="https://i.stack.imgur.com/YqpRm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YqpRm.png" alt="Feature importance"/></a></p>