Extracting all nouns from a text file using nltk

Posted 2024-05-15 23:52:45


Is there a more efficient way to do this? My code reads a text file and extracts all the nouns.

import nltk

File = open(fileName)                  # open the file (fileName defined elsewhere)
lines = File.read()                    # read the whole file
sentences = nltk.sent_tokenize(lines)  # split into sentences
nouns = []                             # empty list to hold all nouns

for sentence in sentences:
    for word, pos in nltk.pos_tag(nltk.word_tokenize(sentence)):
        if pos in ('NN', 'NNP', 'NNS', 'NNPS'):
            nouns.append(word)

How can I reduce the time complexity of this code? Is there a way to avoid the nested for loops?

Thanks in advance!
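For reference, the nested loops can at least be flattened into a single comprehension, and the chained `pos ==` checks replaced with a set membership test. A runnable sketch follows; `tag()` is a hypothetical stub standing in for `nltk.pos_tag(nltk.word_tokenize(sentence))` so the example runs without NLTK data:

```python
# Sketch: collapse the nested loops into one comprehension.
# tag() is a hypothetical stub standing in for
# nltk.pos_tag(nltk.word_tokenize(sentence)).
def tag(sentence):
    verbs = {"met", "ate"}
    return [(w, "VB" if w in verbs
             else "NNP" if w[0].isupper()
             else "NN")
            for w in sentence.split()]

sentences = ["Alice met Bob", "they ate pie"]
noun_tags = {"NN", "NNP", "NNS", "NNPS"}

nouns = [word
         for sentence in sentences
         for word, pos in tag(sentence)
         if pos in noun_tags]

print(nouns)  # ['Alice', 'Bob', 'they', 'pie']
```

This does the same amount of tagging work as the original, so the asymptotic complexity is unchanged; it is mainly tidier and avoids the repeated `or` comparisons.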


3 Answers
import nltk

lines = 'lines is some string of words'
# function to test if something is a noun
is_noun = lambda pos: pos[:2] == 'NN'
# do the nlp stuff
tokenized = nltk.word_tokenize(lines)
nouns = [word for (word, pos) in nltk.pos_tag(tokenized) if is_noun(pos)] 

print(nouns)
>>> ['lines', 'string', 'words']

Helpful hint: a list comprehension is usually a faster way to build a list than adding elements with the .insert() or .append() method inside a "for" loop.
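To illustrate that hint (a self-contained sketch using stand-in tagged data rather than real NLTK output), timeit can compare the two styles; both produce the same list:

```python
import timeit

# Stand-in (word, pos) pairs like those returned by a POS tagger.
tagged = [("lines", "NNS"), ("is", "VBZ"), ("some", "DT"),
          ("string", "NN"), ("of", "IN"), ("words", "NNS")] * 1000

def with_append():
    nouns = []
    for word, pos in tagged:
        if pos[:2] == "NN":
            nouns.append(word)
    return nouns

def with_comprehension():
    return [word for word, pos in tagged if pos[:2] == "NN"]

assert with_append() == with_comprehension()  # same result either way
print("append:       ", timeit.timeit(with_append, number=200))
print("comprehension:", timeit.timeit(with_comprehension, number=200))
```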

If you are open to options other than NLTK, check out TextBlob. It extracts all nouns and noun phrases easily:

>>> from textblob import TextBlob
>>> txt = """Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the inter
actions between computers and human (natural) languages."""
>>> blob = TextBlob(txt)
>>> print(blob.noun_phrases)
[u'natural language processing', 'nlp', u'computer science', u'artificial intelligence', u'computational linguistics']

You can get good results using nltk, TextBlob, SpaCy, or any other library. These libraries will all do the job, but with different efficiency.

import nltk
from textblob import TextBlob
import spacy
nlp = spacy.load('en')              # on newer spaCy versions: spacy.load('en_core_web_sm')
nlp1 = spacy.load('en_core_web_lg')

txt = """Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages."""

On my HP laptop (Windows 10, i5 with 2 cores / 4 logical processors, 8 GB RAM), running in a Jupyter notebook, I made some comparisons, with the following results.

For TextBlob:

%%time
print([w for (w, pos) in TextBlob(txt).pos_tags if pos[0] == 'N'])

The output is

>>> ['language', 'processing', 'NLP', 'field', 'computer', 'science', 'intelligence', 'linguistics', 'inter', 'actions', 'computers', 'languages']
    Wall time: 8.01 ms #average over 20 iterations

For nltk:

%%time
print([word for (word, pos) in nltk.pos_tag(nltk.word_tokenize(txt)) if pos[0] == 'N'])

The output is

>>> ['language', 'processing', 'NLP', 'field', 'computer', 'science', 'intelligence', 'linguistics', 'inter', 'actions', 'computers', 'languages']
    Wall time: 7.09 ms #average over 20 iterations

For spacy:

%%time
print([ent.text for ent in nlp(txt) if ent.pos_ == 'NOUN'])

The output is

>>> ['language', 'processing', 'field', 'computer', 'science', 'intelligence', 'linguistics', 'inter', 'actions', 'computers', 'languages']
    Wall time: 30.19 ms #average over 20 iterations

It seems nltk and TextBlob are reasonably fast, and this is to be expected, since nothing else is stored about the input text txt. Spacy is way slower. One more thing: SpaCy misses the noun "NLP", while nltk and TextBlob get it. I would shoot for nltk or TextBlob unless there is something else I wish to extract from the input txt.
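The "average over 20 iterations" figures can be reproduced with a small helper (a sketch: time.perf_counter plus a stub workload standing in for the real tagger calls):

```python
import time

def average_ms(fn, iterations=20):
    """Average wall-clock time of fn() over `iterations` runs, in milliseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations * 1000.0

# Stub workload standing in for e.g. nltk.pos_tag(nltk.word_tokenize(txt)).
def workload():
    return sum(i * i for i in range(10_000))

print(f"Wall time: {average_ms(workload):.2f} ms  # average over 20 iterations")
```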


A quick intro to spacy here.
Check out some basics about TextBlob here.
See how nltk works here.
