Appending corpus frequencies to POS-tagged tokens

Published 2024-06-16 10:17:57


I'm working with tweet data that was POS-tagged with the NLTK POS tagger. My tokens look like:

[['wasabi', 'NN'], 
['juice', 'NN']]

I also have the American National Corpus frequencies: a list of words, POS tags, and their frequencies. I want to look up each word and POS tag from the tokens and, if found, append the ANC frequency to the token.

The excellent advice from SO helped, but I found that several tokens got no frequency appended (probably because the NLTK tagger is quite inaccurate; for example, it calls "silent" a noun rather than an adjective). When I tried to append only the exact-match frequency, I kept getting a KeyError, because NLTK tags "jill" as NN, not NNP.

In the end, I decided that if the word is found at all, I'd just use its first listed frequency. The problem now is that I get all of the word's frequencies rather than just the first. I only want the first, so the output would be:

[['wasabi', 'NN', '5'], 
['juice', 'NN', '369']]
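For the "first listed frequency" fallback, `next(iter(inner_dict.values()))` grabs the first value without building a full list. A minimal sketch with a made-up `freqs` mapping (hypothetical data, not the real ANC counts):

```python
# Hypothetical ANC-style frequency table: word -> {pos: count}
freqs = {'wasabi': {'NN': '5'}, 'juice': {'NN': '369', 'VB': '12'}}

tokens = [['wasabi', 'NN'], ['jill', 'NNP'], ['juice', 'VBZ']]
for token in tokens:
    word, pos = token
    if word not in freqs:
        token.append(0)                                  # unknown word
    elif pos in freqs[word]:
        token.append(freqs[word][pos])                   # exact (word, pos) match
    else:
        token.append(next(iter(freqs[word].values())))   # first listed frequency

print(tokens)
# [['wasabi', 'NN', '5'], ['jill', 'NNP', 0], ['juice', 'VBZ', '369']]
```

Since Python 3.7, dicts preserve insertion order, so this reliably returns the first frequency as it appeared in the file.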

Code:

import csv

with open('ANC-all-count.txt', 'r', errors='ignore') as f:
    reader = csv.reader(f, delimiter='\t')
    freqs = {}
    for word, pos, count in reader:
        freqs.setdefault(word, {})[pos] = count

for i, (word, pos) in enumerate(tokens):
    if word not in freqs:
        tokens[i].append(0)            # word not in the ANC list at all
    elif pos in freqs[word]:
        tokens[i].append(freqs[word][pos])
    else:
        # POS mismatch: fall back to the first frequency listed for the word
        tokens[i].append(next(iter(freqs[word].values())))

1 Answer

TL;DR

>>> from itertools import chain
>>> from collections import Counter

>>> from nltk.corpus import brown
>>> from nltk import pos_tag, word_tokenize

# Access the first hundred tokenized sentences from the Brown corpus
# and POS tag them.
>>> tagged_sents = [pos_tag(tokenized_sent) for tokenized_sent in brown.sents()[:100]]

# Sanity check that the tagged_sents are what we want.
>>> list(chain(*tagged_sents))[:10]
[('The', 'DT'), ('Fulton', 'NNP'), ('County', 'NNP'), ('Grand', 'NNP'), ('Jury', 'NNP'), ('said', 'VBD'), ('Friday', 'NNP'), ('an', 'DT'), ('investigation', 'NN'), ('of', 'IN')]

# Use a collections.Counter to get the counts.
>>> freq = Counter(chain(*tagged_sents))

# Top 20 most common words.
>>> dict(freq.most_common(20))
{('the', 'DT'): 128, ('.', '.'): 89, (',', ','): 88, ('of', 'IN'): 67, ('to', 'TO'): 55, ('a', 'DT'): 50, ('and', 'CC'): 40, ('in', 'IN'): 39, ('``', '``'): 35, ("''", "''"): 34, ('The', 'DT'): 28, ('said', 'VBD'): 24, ('that', 'IN'): 24, ('for', 'IN'): 22, ('be', 'VB'): 21, ('was', 'VBD'): 18, ('jury', 'NN'): 17, ('Fulton', 'NNP'): 14, ('election', 'NN'): 14, ('will', 'MD'): 14}

# All the words from most to least common.
>>> dict(freq.most_common())


# Write word, POS tag and count to a file, tab-separated.
>>> with open('freq-counts', 'w') as fout:
...     for (word, pos), count in freq.most_common(20):
...         print('\t'.join([word, pos, str(count)]), file=fout)
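To tie this back to the question's token lists: a `Counter` is a dict subclass, so indexing it with a `(word, pos)` pair never raises `KeyError`; missing keys simply return 0. A sketch with a hand-built counter (illustrative counts, not the Brown numbers above):

```python
from collections import Counter

# Hand-built counts standing in for the Brown-derived `freq` above
freq = Counter({('wasabi', 'NN'): 5, ('juice', 'NN'): 369})

tokens = [['wasabi', 'NN'], ['jill', 'NNP']]
for token in tokens:
    word, pos = token
    # Counter returns 0 for missing keys instead of raising KeyError
    token.append(freq[(word, pos)])

print(tokens)
# [['wasabi', 'NN', 5], ['jill', 'NNP', 0]]
```

This sidesteps the KeyError from the question entirely, at the cost of treating mistagged words as unseen.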
