Python decision tree classifier batch_prob_classify function

3 votes
1 answer
1189 views
Asked on 2025-04-18 00:34

I am trying to run the decision tree code provided on the NLTK website, at http://www.nltk.org/howto/classify.html

>>> train = [
...      (dict(a=1,b=1,c=1), 'y'),
...      (dict(a=1,b=1,c=1), 'x'),
...      (dict(a=1,b=1,c=0), 'y'),
...      (dict(a=0,b=1,c=1), 'x'),
...      (dict(a=0,b=1,c=1), 'y'),
...      (dict(a=0,b=0,c=1), 'y'),
...      (dict(a=0,b=1,c=0), 'x'),
...      (dict(a=0,b=0,c=0), 'x'),
...      (dict(a=0,b=1,c=1), 'y'),
...      ]
>>>
>>>
>>> test = [
...      (dict(a=1,b=0,c=1)), # unseen
...      (dict(a=1,b=0,c=0)), # unseen
...      (dict(a=0,b=1,c=1)), # seen 3 times, labels=y,y,x
...      (dict(a=0,b=1,c=0)), # seen 1 time, label=x
...      ]
>>>
>>>
>>> import nltk
>>> classifier = nltk.classify.DecisionTreeClassifier.train(train, entropy_cutoff=0, support_cutoff=0)
>>> sorted(classifier.labels())
['x', 'y']
>>> print(classifier)
c=0? .................................................. x
  a=0? ................................................ x
  a=1? ................................................ y
c=1? .................................................. y

>>> classifier.batch_classify(test)
['y', 'y', 'y', 'x']
>>> for pdist in classifier.batch_prob_classify(test):
...      print('%.4f %.4f' % (pdist.prob('x'), pdist.prob('y')))
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "//anaconda/lib/python2.7/site-packages/nltk/classify/api.py", line 87, in batch_prob_classify
    return [self.prob_classify(fs) for fs in featuresets]
  File "//anaconda/lib/python2.7/site-packages/nltk/classify/api.py", line 67, in prob_classify
    raise NotImplementedError()
NotImplementedError
>>>

The problem is with the batch_prob_classify function. Can anyone suggest how I can fix this, and how to get the probability distribution values?

1 Answer

1

The DecisionTreeClassifier does not implement prob_classify at all: as the traceback shows, the call falls through to the base ClassifierI API, which raises NotImplementedError, and batch_prob_classify is just prob_classify applied to each feature set. The NaiveBayesClassifier, on the other hand, is a probabilistic classifier; it is trained with the ELEProbDist estimator (which inherits from LidstoneProbDist), and its prob_classify returns a distribution that does provide a prob method.

So, unless you want to create your own subclass of DecisionTreeClassifier and add a prob_classify method, you are probably better off using NaiveBayesClassifier.
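If you do want to stick with the decision tree, here is a minimal sketch of that route. The class name ProbabilisticTreeWrapper is made up for illustration; a thin wrapper is used instead of a subclass so the stock train method can be reused unchanged, and it assumes the same NLTK version as in the question (where ClassifierI still provides batch_prob_classify). It only fakes a distribution by putting all probability mass on the predicted label, since the stock tree does not keep per-leaf label counts:

import nltk
from nltk.classify import ClassifierI
from nltk.probability import DictionaryProbDist

class ProbabilisticTreeWrapper(ClassifierI):
    """Hypothetical wrapper: delegates classification to a trained
    DecisionTreeClassifier and fakes prob_classify with a degenerate
    distribution over the predicted label."""

    def __init__(self, tree):
        self._tree = tree

    def labels(self):
        return self._tree.labels()

    def classify(self, featureset):
        return self._tree.classify(featureset)

    def prob_classify(self, featureset):
        # All probability mass on the tree's prediction; a real
        # implementation would need the label counts at each leaf,
        # which DecisionTreeClassifier does not store.
        return DictionaryProbDist({self.classify(featureset): 1.0})

tree = nltk.classify.DecisionTreeClassifier.train(
    train, entropy_cutoff=0, support_cutoff=0)
classifier = ProbabilisticTreeWrapper(tree)
for pdist in classifier.batch_prob_classify(test):  # inherited from ClassifierI
    print('%.4f %.4f' % (pdist.prob('x'), pdist.prob('y')))

With NaiveBayesClassifier, by contrast, you get usable probability estimates out of the box: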

>>> classifier = nltk.classify.NaiveBayesClassifier.train(train)  # note the use of NaiveBayesClassifier here
>>> for pdist in classifier.batch_prob_classify(test):
...      print('%.4f %.4f' % (pdist.prob('x'), pdist.prob('y')))


0.3104 0.6896
0.5746 0.4254
0.3685 0.6315
0.6365 0.3635

As @Mike mentioned, you are getting the expected results. You may have been confused because there is a very similar example earlier on that page.
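As a side note, each returned distribution also has a max() method that gives the most likely label, and newer NLTK releases (3.x) renamed the batch_* methods to *_many, so batch_prob_classify becomes prob_classify_many there. Under those assumptions the loop would look like this:

# Newer NLTK (3.x): batch_prob_classify was renamed to prob_classify_many
classifier = nltk.classify.NaiveBayesClassifier.train(train)
for pdist in classifier.prob_classify_many(test):
    # pdist.max() is the most likely label for this feature set
    print('%s  %.4f %.4f' % (pdist.max(), pdist.prob('x'), pdist.prob('y')))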
