Counting word frequencies with a dictionary

0 votes
3 answers
5091 views
Asked 2025-04-18 05:28

My problem is that I don't know how to use a dictionary to report how many words there are of each length, keyed by that length. For example, consider the following text:

   "This is the sample text to get an idea!. "

The desired output would then be:

3 2
2 3
0 5

because the sample text contains 3 words of length 2 ("is", "to", "an"), 2 words of length 3 ("the", "get"), and 0 words of length 5.

So far I have managed to display how often each word occurs:

def word_frequency(filename):
    # Read the whole file, lowercase it, and split it into words on whitespace.
    text = open(filename, "r").read().lower().split()
    # Count how often each word occurs and pair each word with its count.
    word_freq = [text.count(p) for p in text]
    dictionary = dict(zip(text, word_freq))
    return dictionary

print(word_frequency("text.txt"))

This displays the dictionary in the following format:

{'all': 3, 'show': 1, 'welcomed': 1, 'not': 2, 'availability': 1, 'television,': 1, '28': 1, 'to': 11, 'has': 2, 'ehealth,': 1, 'do': 1, 'get': 1, 'they': 1, 'milestone': 1, 'kroes,': 1, 'now': 3, 'bringing': 2, 'eu.': 1, 'like': 1, 'states.': 1, 'them.': 1, 'european': 2, 'essential': 1, 'available': 4, 'because': 2, 'people': 3, 'generation': 1, 'economic': 1, '99.4%': 1, 'are': 3, 'eu': 1, 'achievement,': 1, 'said': 3, 'for': 3, 'broadband': 7, 'networks': 2, 'access': 2, 'internet': 1, 'across': 2, 'europe': 1, 'subscriptions': 1, 'million': 1, 'target.': 1, '2020,': 1, 'news': 1, 'neelie': 1, 'by': 1, 'improve': 1, 'fixed': 2, 'of': 8, '100%': 1, '30': 1, 'affordable': 1, 'union,': 2, 'countries.': 1, 'products': 1, 'or': 3, 'speeds': 1, 'cars."': 1, 'via': 1, 'reached': 1, 'cloud': 1, 'from': 1, 'needed': 1, '50%': 1, 'been': 1, 'next': 2, 'households': 3, 'commission': 5, 'live': 1, 'basic': 1, 'was': 1, 'said:': 1, 'more': 1, 'higher.': 1, '30mbps': 2, 'that': 4, 'but': 2, 'aware': 1, '50mbps': 1, 'line': 1, 'statement,': 1, 'with': 2, 'population': 1, "europe's": 1, 'target': 1, 'these': 1, 'reliable': 1, 'work': 1, '96%': 1, 'can': 1, 'ms': 1, 'many': 1, 'further.': 1, 'and': 6, 'computing': 1, 'is': 4, 'it': 2, 'according': 1, 'have': 2, 'in': 5, 'claimed': 1, 'their': 1, 'respective': 1, 'kroes': 1, 'areas.': 1, 'responsible': 1, 'isolated': 1, 'member': 1, '100mbps': 1, 'digital': 2, 'figures': 1, 'out': 1, 'higher': 1, 'development': 1, 'satellite': 4, 'who': 1, 'connected': 2, 'coverage': 2, 'services': 2, 'president': 1, 'a': 1, 'vice': 1, 'mobile': 2, "commission's": 1, 'points': 1, '"access': 1, 'rural': 1, 'the': 16, 'agenda,': 1, 'having': 1}

3 Answers

Answer 1 (score: 1)

If you want to count how many words of each length occur in a piece of text, i.e. the distribution from word length (size) to number of occurrences (frequency), you can extract the words with a regular expression:

#!/usr/bin/env python3
import re
from collections import Counter

text = "This is the sample text to get an idea!. "
# Extract the words (ignoring punctuation) and normalize the case.
words = re.findall(r'\w+', text.casefold())
# Count how many words there are of each length, most common first.
frequencies = Counter(map(len, words)).most_common()
print("\n".join(["%d word(s) of length %d" % (n, length)
                 for length, n in frequencies]))

Output

3 word(s) of length 2
3 word(s) of length 4
2 word(s) of length 3
1 word(s) of length 6

Note: it ignores punctuation such as !., unlike the .split()-based approach, which leaves the punctuation attached to the words (you can see this in your dictionary output, e.g. 'television,' and 'eu.').
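For example, a quick illustrative check of the difference on the sample sentence:

import re

text = "This is the sample text to get an idea!. "
# .split() keeps the punctuation attached to the last word:
print(text.casefold().split())
# ['this', 'is', 'the', 'sample', 'text', 'to', 'get', 'an', 'idea!.']
# re.findall(r'\w+', ...) extracts only the word characters:
print(re.findall(r'\w+', text.casefold()))
# ['this', 'is', 'the', 'sample', 'text', 'to', 'get', 'an', 'idea']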

If you want to read the words from a file, you can read it line by line and extract the words the same way as in the first code example:

import re
from collections import Counter
from itertools import chain

with open(filename) as file:
    # `words` is a lazy iterator, so it must be consumed while the file is still open.
    words = chain.from_iterable(re.findall(r'\w+', line.casefold())
                                for line in file)
    # use words here.. (the same as above)
    frequencies = Counter(map(len, words)).most_common()

print("\n".join(["%d word(s) of length %d" % (n, length) 
                 for length, n in frequencies]))

In practice, if you want the distribution of word lengths, you can count them in a plain list, ignoring words that exceed some length limit:

def count_lengths(words, maxlen=100):
    # frequencies[i] holds the number of words of length i.
    frequencies = [0] * (maxlen + 1)
    for length in map(len, words):
        if length <= maxlen:  # skip words longer than the limit
            frequencies[length] += 1
    return frequencies

Example

import re

text = "This is the sample text to get an idea!. "
words = re.findall(r'\w+', text.casefold())
frequencies = count_lengths(words)
print("\n".join(["%d word(s) of length %d" % (n, length) 
                 for length, n in enumerate(frequencies) if n > 0]))

Output

3 word(s) of length 2
2 word(s) of length 3
3 word(s) of length 4
1 word(s) of length 6
Answer 2 (score: 2)

Use collections.Counter:

import collections

sentence = "This is the sample text to get an idea"

Count = collections.Counter([len(a) for a in sentence.split()])

print(Count)
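Run on the sample sentence above, this prints a Counter mapping each word length to its count, along the lines of Counter({4: 3, 2: 3, 3: 2, 6: 1}) (the ordering of lengths with equal counts may vary). To get the "count length" pairs from the question, you can iterate over it, for example:

for length, n in sorted(Count.items()):
    print(n, length)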
Answer 3 (score: 2)
def freqCounter(infilepath):
    answer = {}
    with open(infilepath) as infile:
        for line in infile:  # iterate over the open file, not the path string
            for word in line.strip().split():
                l = len(word)
                if l not in answer:
                    answer[l] = 0
                answer[l] += 1
    return answer

Or:

import collections

def freqCounter(infilepath):
    with open(infilepath) as infile:
        return collections.Counter(len(word) for line in infile for word in line.strip().split())
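A short usage sketch (assuming the text is stored in text.txt, the file name used in the question) that prints the result in the "count length" format asked for:

counts = freqCounter("text.txt")
for length in sorted(counts):
    print(counts[length], length)
# A length that never occurs, e.g. 5, can be looked up with counts.get(5, 0).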
