How to solve the NotImplementedError from nltk.classify ClassifierI?

Posted 2024-05-23 15:34:12


I am new to programming, but I have gone over my code repeatedly and cannot find any mistake. I don't know what else to try, because no matter what I do the error keeps appearing. I am posting the full code here.

Any help would be greatly appreciated, thank you!

import nltk
import random
from nltk.corpus import movie_reviews
import pickle
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB,BernoulliNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from nltk.classify import ClassifierI
from statistics import mode 

class VoteClassifier(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers

        def classify(self, features):
            votes = []
            for c in self._classifiers:
                v = c.classify(features)
                votes.append(v)
            return mode(votes)


        def confidence(self, features):
            votes = []
            for c in self._classifiers:
                v = c.classify(features)
                votes.append(v)


            choice_votes = votes.count(mode(votes))
            conf = choice_votes / len(votes)
            return conf


documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

random.shuffle(documents)

all_words = []

for w in movie_reviews.words():
        all_words.append(w.lower())

all_words = nltk.FreqDist(all_words)

word_features = list(all_words.keys())[:3000]

def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)

    return features

featuresets = [(find_features(rev), category) for (rev, category) in documents]

training_set = featuresets[:1900]
testing_set = featuresets[1900:]

# classifier = nltk.NaiveBayesClassifier.train(training_set)
classifier_f = open("naivebayes.pickle", "rb")
classifier = pickle.load(classifier_f)
classifier_f.close()

print("Original NaiveBayes accuracy percent:",(nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(10)

MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)

BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)

LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)

SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
SGDClassifier_classifier.train(training_set)
print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)

##SVC_classifier = SklearnClassifier(SVC())
##SVC_classifier.train(training_set)
##print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)

LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)

NuSVC_classifier = SklearnClassifier(NuSVC())
NuSVC_classifier.train(training_set)
print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)


voted_classifier = VoteClassifier(classifier,
                                  NuSVC_classifier,
                                  LinearSVC_classifier,
                                  SGDClassifier_classifier,
                                  MNB_classifier,
                                  BernoulliNB_classifier,
                                  LogisticRegression_classifier)

print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100)

I also tried raising a NotImplementedError on the class at the top, but it did not change the output in Python.

Here is the error:

(The full traceback was not preserved here; it ends in a NotImplementedError raised from ClassifierI.classify() while running nltk.classify.accuracy(voted_classifier, testing_set).)

Tags: in, import, training, testing, features, words, print, percent
1 Answer
Forum user
#1 · Posted 2024-05-23 15:34:12

As mentioned in the comments, there is some unfortunate spaghetti-like code in the ClassifierI API: classify() delegates to classify_many() when the latter is overridden, and the default classify_many() in turn loops over classify(). That may not be a bad thing when you consider how tightly ClassifierI and the NaiveBayesClassifier objects are coupled.
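
To make that coupling concrete, the relevant part of ClassifierI looks roughly like the sketch below (paraphrased from memory of nltk/classify/api.py; check your installed nltk version for the exact code):

from nltk.internals import overridden

class ClassifierI:
    # Simplified sketch, not the real nltk class.
    def classify(self, featureset):
        # If a subclass overrode classify_many(), reuse it for a single featureset.
        if overridden(self.classify_many):
            return self.classify_many([featureset])[0]
        else:
            # Neither classify() nor classify_many() was overridden -> the OP's error.
            raise NotImplementedError()

    def classify_many(self, featuresets):
        # Default implementation just loops over classify().
        return [self.classify(fs) for fs in featuresets]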

But for the particular use case in the OP, that spaghetti code is unwelcome.

TL;DR

Take a look at https://www.kaggle.com/alvations/sklearn-nltk-voteclassifier

In long:

From the traceback, the error starts where nltk.classify.util.accuracy() calls ClassifierI.classify().
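
For reference, nltk.classify.util.accuracy() is roughly the following (paraphrased, not the verbatim nltk source); it goes through classify_many(), whose default implementation loops back over classify(), which is how a class that overrides neither method ends up raising NotImplementedError:

def accuracy(classifier, gold):
    # gold is a list of (featureset, label) pairs, e.g. testing_set above.
    results = classifier.classify_many([fs for (fs, label) in gold])
    correct = [label == guess for ((fs, label), guess) in zip(gold, results)]
    return sum(correct) / len(correct) if correct else 0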

ClassifierI.classify() is normally used to classify a single document: its input is one featureset dictionary of binary values.

ClassifierI.classify_many(), on the other hand, expects the featuresets of multiple documents, i.e. a list of such dictionaries.
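
For illustration, with the SklearnClassifier wrappers trained in the question (the slice of five test featuresets is arbitrary):

one_featureset = testing_set[0][0]                        # a single {word: bool} dict
many_featuresets = [fs for fs, label in testing_set[:5]]  # a list of such dicts

print(MNB_classifier.classify(one_featureset))            # a single label, e.g. 'pos'
print(MNB_classifier.classify_many(many_featuresets))     # a list of labels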

So the quick fix is to write our own accuracy() function, so that the VotedClassifier does not depend on the classify() vs classify_many() definitions in ClassifierI. That also means we do not inherit from ClassifierI at all. IMHO, if you don't need anything beyond classify(), there is no reason to carry the baggage that inheriting from ClassifierI brings:

def my_accuracy(classifier, gold):
    # gold is a list of (featureset, label) pairs, like testing_set.
    documents, labels = zip(*gold)
    predictions = classifier.classify_documents(documents)
    correct = [y == y_hat for y, y_hat in zip(labels, predictions)]
    if correct:
        return sum(correct) / len(correct)
    else:
        return 0

class VotedClassifier:
    def __init__(self, *classifiers):
        self._classifiers = classifiers

    def classify_documents(self, documents):
        # Classify a list of featuresets, one per document.
        return [self.classify_many(doc) for doc in documents]

    def classify_many(self, features):
        # Classify a single featureset by majority vote of the wrapped classifiers.
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)
        return mode(votes)

    def confidence(self, features):
        # Fraction of classifiers that agree with the winning vote.
        votes = []
        for c in self._classifiers:
            v = c.classify(features)
            votes.append(v)

        choice_votes = votes.count(mode(votes))
        conf = choice_votes / len(votes)
        return conf

Now call the new my_accuracy() with a new VotedClassifier object built from the classifiers trained above.

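The code block here did not survive in this copy of the post; a minimal reconstruction, assuming the same trained classifiers from the question are reused:

# Variable name assumed; the original snippet was not preserved.
new_voted_classifier = VotedClassifier(classifier,
                                       NuSVC_classifier,
                                       LinearSVC_classifier,
                                       SGDClassifier_classifier,
                                       MNB_classifier,
                                       BernoulliNB_classifier,
                                       LogisticRegression_classifier)

print(my_accuracy(new_voted_classifier, testing_set))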

[out]:

0.86

Note: there is some randomness involved when you shuffle the documents and then hold out one slice to test the classifiers' accuracy.

Instead of a simple random.shuffle(documents), my suggestion is to:

  • Repeat the experiment with different random seeds.
  • For each random seed, run 10-fold cross-validation (see the sketch below).
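
A minimal sketch of that protocol, assuming the documents and find_features() pieces from the question and evaluating only the NLTK NaiveBayesClassifier to keep it short (fold count and seed values are arbitrary choices):

import random
import nltk

def cross_validated_accuracy(documents, seed, n_folds=10):
    # Shuffle with a fixed seed so each run is reproducible.
    rng = random.Random(seed)
    docs = list(documents)
    rng.shuffle(docs)
    featuresets = [(find_features(words), label) for words, label in docs]
    fold_size = len(featuresets) // n_folds
    scores = []
    for i in range(n_folds):
        # Hold out one fold for testing, train on the rest.
        test = featuresets[i * fold_size:(i + 1) * fold_size]
        train = featuresets[:i * fold_size] + featuresets[(i + 1) * fold_size:]
        clf = nltk.NaiveBayesClassifier.train(train)
        scores.append(nltk.classify.accuracy(clf, test))
    return sum(scores) / len(scores)

for seed in (0, 1, 2):
    print(seed, cross_validated_accuracy(documents, seed))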
