Is it not possible to tag Spanish text with Unicode characters using NLTK?

2 votes
1 answer
1498 views
Asked on 2025-04-18 04:37

I'm trying to parse some Spanish sentences that contain non-ASCII characters (mostly accented characters inside words, e.g. película (film), atención (attention), etc.).

I read the lines from a file encoded in utf-8. Here is part of my script:

# -*- coding: utf-8 -*-

import codecs
import nltk
import sys
from nltk.corpus import cess_esp as cess
from nltk import UnigramTagger as ut
from nltk import BigramTagger as bt

f = codecs.open('spanish_sentences', encoding='utf-8')
results_file = codecs.open('tagging_results', encoding='utf-8', mode='w+')

for line in iter(f):

    output_line =  "Current line contents before tagging->" + str(line.decode('utf-8', 'replace'))
    print output_line
    results_file.write(output_line.encode('utf8'))

    output_line = "Unigram tagger->"
    print output_line
    results_file.write(output_line)

    s = line.decode('utf-8', 'replace')
    output_line = tagger.uni.tag(s.split())
    print output_line
    results_file.write(str(output_line).encode('utf8'))

f.close()
results_file.close()

On this line:

output_line = tagger.uni.tag(s.split())

I get this error:

/usr/local/lib/python2.7/dist-packages/nltk-2.0.4-py2.7.egg/nltk/tag/sequential.py:138: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
return self._context_to_tag.get(context)
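
From what I can tell, Python 2 emits this warning whenever a unicode string is compared with a byte string that it cannot decode as ASCII; here is a minimal standalone reproduction (just an illustration, not part of my script):

# -*- coding: utf-8 -*-
# Standalone Python 2 illustration: comparing a unicode word with a
# non-ASCII byte string emits the same UnicodeWarning, and the
# comparison is treated as unequal.
word_unicode = u'atención'
word_bytes = u'atención'.encode('latin-1')   # 'atenci\xf3n'
print word_unicode == word_bytes             # False, plus the UnicodeWarning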

This is the output for a simple sentence:

Current line contents before tagging->tengo una queja y cada que hablo a atención me dejan en la linea media hora y cortan la llamada!!

Unigram tagger->
[(u'tengo', 'vmip1s0'), (u'una', 'di0fs0'), (u'queja', 'ncfs000'), (u'y', 'cc'), (u'cada', 'di0cs0'), (u'que', 'pr0cn000'), (u'hablo', 'vmip1s0'), (u'a', 'sps00'), (u'atenci\xf3n', None), (u'me', 'pp1cs000'), (u'dejan', 'vmip3p0'), (u'en', 'sps00'), (u'la', 'da0fs0'), (u'linea', None), (u'media', 'dn0fs0'), (u'hora', 'ncfs000'), (u'y', 'cc'), (u'cortan', None), (u'la', 'da0fs0'), (u'llamada!!', None)]

If I understood that chapter correctly... the process should be right... I decode the line from utf-8 to Unicode, tag it, and then encode it back from Unicode to utf-8... I don't understand what is causing this error.

What do you think I'm doing wrong?

Thanks,
Alejandro

EDIT: found the problem... basically, the Spanish cess_esp corpus is encoded in Latin-2. The code below lets you train the tagger correctly.

tagged_sents = (
    [(word.decode('latin2'), tag) for (word, tag) in sent]
    for sent in cess.tagged_sents()
)
tagger = ut(tagged_sents)  # training a tagger on the decoded sentences
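
With the corpus decoded this way, accented tokens should now match entries in the tagger's context table. As a quick sanity check (a sketch only; the exact tag depends on the corpus, and a word that never occurs in the training data would still come back as None):

print tagger.tag([u'atención'])   # e.g. [(u'atención', 'ncfs000')] instead of (u'atención', None)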

A better approach is to ask the corpus for its encoding through the CorpusReader class, so you don't need to know it in advance.
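
A minimal sketch of that idea, assuming the reader declares an encoding for its files (CorpusReader.encoding(fileid) returns the declared encoding for a given file, or None if the reader doesn't know it):

# Sketch: ask the corpus reader for its declared encoding instead of
# hard-coding it; fall back to 'latin2' when nothing is declared.
from nltk.corpus import cess_esp as cess
from nltk import UnigramTagger as ut

enc = cess.encoding(cess.fileids()[0]) or 'latin2'

tagged_sents = (
    [(word.decode(enc), tag) if isinstance(word, str) else (word, tag)
     for (word, tag) in sent]
    for sent in cess.tagged_sents()
)
tagger = ut(tagged_sents)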

1 Answer

1

It's probably a problem with your tagger object, or with how you're reading the file. I rewrote part of your code and it runs without errors:

# -*- coding: utf-8 -*-

import urllib2, codecs

from nltk.corpus import cess_esp as cess
from nltk import word_tokenize
from nltk import UnigramTagger as ut
from nltk import BigramTagger as bt

# Train a unigram tagger on the tagged CESS-ESP sentences.
tagger = ut(cess.tagged_sents())

url = 'https://db.tt/42Lt5M5K'
# Fetch the test sentences and decode them from utf-8.
fin = urllib2.urlopen(url).read().strip().decode('utf8')
# Write the results as utf-8 so the accented characters survive.
fout = codecs.open('tagger.out', 'w', 'utf8')
for line in fin.split('\n'):
    print>>fout, "Current line contents before tagging->", line
    print>>fout, "Unigram tagger->",
    print>>fout, tagger.tag(word_tokenize(line))
    print>>fout, ""

[Output]:

http://pastebin.com/n0NK574a
