Computing tf-idf over a corpus

0 votes
1 answer
1532 views
Asked 2025-04-17 22:31

I copied a piece of source code for building a system that can run tf-idf. The code is as follows:

    # module imports
    from __future__ import division, unicode_literals
    import math
    import string
    import re
    import os

    # newer TextBlob releases use the "textblob" package, not "text.blob"
    from textblob import TextBlob as tb

    def tf(word, blob):
        # term frequency: occurrences of the word divided by total word count
        return blob.words.count(word) / len(blob.words)

    def n_containing(word, bloblist):
        # number of documents whose token list contains the word
        return sum(1 for blob in bloblist if word in blob.words)

    def idf(word, bloblist):
        # inverse document frequency, smoothed with +1 to avoid division by zero
        return math.log(len(bloblist) / (1 + n_containing(word, bloblist)))

    def tfidf(word, blob, bloblist):
        return tf(word, blob) * idf(word, bloblist)

    # replace every punctuation character with a space before tokenizing
    regex = re.compile('[%s]' % re.escape(string.punctuation))

    with open('D:/article/sport/a.txt', 'r') as f:
        var = regex.sub(' ', f.read()).lower()
    document1 = tb(var)

    with open('D:/article/food/b.txt', 'r') as f:
        var = regex.sub(' ', f.read()).lower()
    document2 = tb(var)

    bloblist = [document1, document2]
    for i, blob in enumerate(bloblist):
        print("Top words in document {}".format(i + 1))
        scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
        sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
        for word, score in sorted_words[:50]:
            print("Word: {}, TF-IDF: {}".format(word, round(score, 5)))

However, I've run into a problem: I want to put all the files in the sport folder into one corpus and the articles in the food folder into another, so that the system can give results for each corpus separately. At the moment I can only compare individual files, but I want to compare at the corpus level. Sorry for the question, and any help is much appreciated.
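One way to do this, as a minimal sketch: merge every `.txt` file in a folder into a single string, so each folder becomes one "document" in `bloblist` and tf-idf then compares corpus against corpus. The helper name `folder_to_text` is mine, and the folder paths in the comments are the ones from the question, so adjust as needed:

```python
import os
import re
import string

# strip punctuation, as in the question's code
regex = re.compile('[%s]' % re.escape(string.punctuation))

def folder_to_text(folder):
    """Concatenate the cleaned, lower-cased text of every .txt file in `folder`."""
    texts = []
    for name in sorted(os.listdir(folder)):
        if name.endswith('.txt'):
            with open(os.path.join(folder, name), 'r') as f:
                texts.append(regex.sub(' ', f.read()).lower())
    return ' '.join(texts)

# With the question's folders, each folder then becomes one entry in bloblist:
#   document1 = tb(folder_to_text('D:/article/sport'))
#   document2 = tb(folder_to_text('D:/article/food'))
#   bloblist = [document1, document2]
```

Because `idf` only sees two "documents" in this setup, the scores will reflect which words distinguish the sport corpus from the food corpus, which seems to be what you're after.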

Thanks

1 Answer

0

My understanding is that you want to count the word frequencies in two files and store the results in separate files so they can be compared. You can do that from the terminal. Below is a simple piece of code that counts word frequencies.

import string
import operator

keywords = []

def removePunctuation(sentence):
    """Lower-case the text and drop every punctuation character."""
    sentence = sentence.lower()
    new_sentence = ""
    for char in sentence:
        if char not in string.punctuation:
            new_sentence = new_sentence + char
    return new_sentence

def wordFrequences(sentence):
    """Print the words of `sentence` sorted by descending frequency."""
    wordFreq = {}
    for word in sentence.split():
        wordFreq[word] = wordFreq.get(word, 0) + 1
    sorted_x = sorted(wordFreq.items(), key=operator.itemgetter(1), reverse=True)
    print(sorted_x)
    for key, value in sorted_x:
        keywords.append(key)
    print(keywords)

with open('D:/article/sport/a.txt', 'r') as f:
    sentence = f.read()
# sentence = "The first test of the function some some some some"
new_sentence = removePunctuation(sentence)
wordFrequences(new_sentence)

You need to run this code twice, changing the path to your text file each time. When running it from a console, use a command like this:

python abovecode.py > destinationfile.txt

So in your case:

python abovecode.py > sportfolder/file1.txt
python abovecode.py > foodfolder/file2.txt

Important: if you want the words together with their frequencies, remove this part:

print(keywords)

Important: if you only need the words arranged by frequency, remove:

print(sorted_x)
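As a side note, the same frequency counting can be done with the standard library's `collections.Counter`. This is just an alternative sketch, not part of the code above; the sample text is taken from the commented-out test line in the answer:

```python
import string
from collections import Counter

text = "The first test of the function some some some some"
# lower-case and strip punctuation, mirroring removePunctuation()
cleaned = text.lower().translate(str.maketrans('', '', string.punctuation))
# Counter tallies each word; most_common() returns (word, count) pairs
# sorted by descending frequency
freq = Counter(cleaned.split())
print(freq.most_common(3))
```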
