AttributeError when using scikit-learn

4 votes
1 answer
12473 views
Asked 2025-04-17 17:58

I want to use scikit to find similar questions using cosine similarity. I found some example code online, linked here: Link1 Link2

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from nltk.corpus import stopwords
import numpy as np
import numpy.linalg as LA

train_set = ["The sky is blue.", "The sun is bright."]
test_set = ["The sun in the sky is bright."]
stopWords = stopwords.words('english')

vectorizer = CountVectorizer(stop_words = stopWords)
transformer = TfidfTransformer()

trainVectorizerArray = vectorizer.fit_transform(train_set).toarray()
testVectorizerArray = vectorizer.transform(test_set).toarray()
print 'Fit Vectorizer to train set', trainVectorizerArray
print 'Transform Vectorizer to test set', testVectorizerArray
cx = lambda a, b : round(np.inner(a, b)/(LA.norm(a)*LA.norm(b)), 3)

for vector in trainVectorizerArray:
    print vector
    for testV in testVectorizerArray:
        print testV
        cosine = cx(vector, testV)
        print cosine

transformer.fit(trainVectorizerArray)
print transformer.transform(trainVectorizerArray).toarray()

transformer.fit(testVectorizerArray)
tfidf = transformer.transform(testVectorizerArray)
print tfidf.todense()

But I keep getting this error:

Traceback (most recent call last):
File "C:\Users\Animesh\Desktop\NLP\ngrams2.py", line 14, in <module>
trainVectorizerArray = vectorizer.fit_transform(train_set).toarray()
File "C:\Python27\lib\site-packages\scikit_learn-0.13.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 740, in fit_transform
raise ValueError("empty vocabulary; training set may have"
ValueError: empty vocabulary; training set may have contained only stop words or min_df (resp. max_df) may be too high (resp. too low).

I even checked the code found at this link. There I got a different error: AttributeError: 'CountVectorizer' object has no attribute 'vocabulary'

How can I fix this?

I'm using Python 2.7.3 and scikit_learn 0.13.1 on Windows 7 32-bit.

1 Answer

7

I don't get the same error message, because I'm running a development version (pre-0.14) in which the feature_extraction.text module was overhauled. However, I suspect you can solve the problem with:

vectorizer = CountVectorizer(stop_words=stopWords, min_df=1)

The min_df parameter causes CountVectorizer to throw away any term that occurs in too few documents (because such terms have little predictive value). In your version it is set to 2 by default, which means every term in your two-document training set gets thrown away, leaving you with an empty vocabulary.
