Comparing stemming and lemmatization

1 vote
1 answer
40 views
Asked 2025-04-12 21:24

Based on some research, I found the following comparison worth making:

Comparative analysis

Generally speaking, lemmatization should return more correct results than stemming, right? Not only correct, but also simplified. I ran an experiment on this sentence:

sentence ="having playing  in today gaming ended with greating victorious"

But when I ran the stemming and lemmatization code, I got the following results:

['have', 'play', 'in', 'today', 'game', 'end', 'with', 'great', 'victori']
['having', 'playing', 'in', 'today', 'gaming', 'ended', 'with', 'greating', 'victorious']

The first result is stemming, and overall it looks fine except for "victori" (it should be "victory", right?). The second result is lemmatization (every word is correct, but kept in its original form). So which option is better in this case: the short but mostly incorrect version, or the longer but correct one? (There is also a quick check of the lemmatizer's default behavior after the code below.)

        import nltk
        from nltk.tokenize import word_tokenize, sent_tokenize
        from nltk.corpus import stopwords
        from sklearn.feature_extraction.text import CountVectorizer
        from nltk.stem import PorterStemmer, WordNetLemmatizer

        # Resources needed by word_tokenize() and WordNetLemmatizer
        nltk.download('punkt')
        nltk.download('wordnet')
        nltk.download('stopwords')

        mylematizer = WordNetLemmatizer()
        mystemmer = PorterStemmer()

        sentence = "having playing  in today gaming ended with greating victorious"
        words = word_tokenize(sentence)
        # print(words)

        stemmed = [mystemmer.stem(w) for w in words]
        lematized = [mylematizer.lemmatize(w) for w in words]
        print(stemmed)
        print(lematized)

        # mycounter = CountVectorizer()
        # mysentence = "i love ibsu. because ibsu is great university"
        # # print(word_tokenize(mysentence))
        # # print(sent_tokenize(mysentence))
        # individual_words = word_tokenize(mysentence)
        # stops = list(stopwords.words('english'))
        # words = [w for w in individual_words if w not in stops and w.isalnum()]
        # reduced = [mystemmer.stem(w) for w in words]
        # new_sentence = ' '.join(words)
        # frequencies = mycounter.fit_transform([new_sentence])
        # print(frequencies.toarray())
        # print(mycounter.vocabulary_)
        # print(mycounter.get_feature_names_out())
        # print(new_sentence)
        # print(words)
        # # print(list(stopwords.words('english')))
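
A note on the lemmatizer's defaults: NLTK's WordNetLemmatizer.lemmatize() assumes pos='n' (noun) when no part of speech is given, which is why most of the words above come back unchanged. A minimal sketch of the difference a POS hint makes:

        from nltk.stem import WordNetLemmatizer

        wnl = WordNetLemmatizer()
        # Without a POS hint, lemmatize() treats every token as a noun.
        print(wnl.lemmatize("playing"))           # playing
        print(wnl.lemmatize("playing", pos="v"))  # play
        print(wnl.lemmatize("ended", pos="v"))    # end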

1 Answer


Here is an example of how the lemmatizer handles the part of speech of each word in your string:

import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
from nltk.corpus import wordnet
from nltk.stem.wordnet import WordNetLemmatizer
from nltk import word_tokenize, pos_tag
from collections import defaultdict

# Map the first letter of each Penn Treebank tag to a WordNet POS;
# anything unrecognized falls back to noun.
tag_map = defaultdict(lambda: wordnet.NOUN)
tag_map['J'] = wordnet.ADJ
tag_map['V'] = wordnet.VERB
tag_map['R'] = wordnet.ADV

sentence = "having playing in today gaming ended with greating victorious"
tokens = word_tokenize(sentence)
wnl = WordNetLemmatizer()
for token, tag in pos_tag(tokens):
    print('found tag', tag[0])
    lemma = wnl.lemmatize(token, tag_map[tag[0]])
    print(token, "lemmatized to", lemma)

Output:

found tag V
having lemmatized to have
found tag N
playing lemmatized to playing
found tag I
in lemmatized to in
found tag N
today lemmatized to today
found tag N
gaming lemmatized to gaming
found tag V
ended lemmatized to end
found tag I
with lemmatized to with
found tag V
greating lemmatized to greating
found tag J
victorious lemmatized to victorious

Lemmatization reduces words to their base form. It is similar to stemming, but lemmatization takes a word's context into account and ties words with related meanings together. The linguistic term for this is "morphology": how the words of a language relate to one another. If you look at the output above, the -ing verbs are parsed as nouns. Those words are verb forms, but they can also be used as nouns: "I like swimming." In that sentence the verb is "like" and the noun is "swimming". That is how the tags above are being interpreted. Honestly, your sentence above isn't really a sentence at all. I won't say one output is right and the other wrong, but lemmatization shows its strength when parts of speech are used correctly in a sentence, especially with independent or dependent clauses.
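
To make the noun-versus-verb distinction concrete, here is a minimal sketch (reusing the tag_map idea from the code above, on a made-up grammatical sentence; the exact tags depend on the tagger):

import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
from nltk.corpus import wordnet
from nltk.stem.wordnet import WordNetLemmatizer
from nltk import word_tokenize, pos_tag
from collections import defaultdict

wnl = WordNetLemmatizer()

# The same surface form lemmatizes differently depending on POS:
print(wnl.lemmatize("swimming", wordnet.NOUN))  # swimming (the activity)
print(wnl.lemmatize("swimming", wordnet.VERB))  # swim

# On a well-formed sentence, the POS tagger gives the lemmatizer
# the context it needs.
tag_map = defaultdict(lambda: wordnet.NOUN)
tag_map['J'] = wordnet.ADJ
tag_map['V'] = wordnet.VERB
tag_map['R'] = wordnet.ADV

sentence = "She was playing games and ended the day victorious"
tokens = word_tokenize(sentence)
print([wnl.lemmatize(t, tag_map[tag[0]]) for t, tag in pos_tag(tokens)])
# Expected (roughly): ['She', 'be', 'play', 'game', 'and', 'end', 'the', 'day', 'victorious']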
