Python Gensim: how do I calculate document similarity using an LDA model?
I have a trained LDA model, and I want to calculate the similarity score between two documents from the corpus I trained it on. I've gone through many Gensim tutorials and functions, but I still can't figure it out. Can anyone give me a hint? Thanks!
3 Answers
6
The answers provided are good, but they aren't very beginner-friendly. I want to start from training the LDA model and then compute cosine similarity.
The training part:
docs = ["latent Dirichlet allocation (LDA) is a generative statistical model",
"each document is a mixture of a small number of topics",
"each document may be viewed as a mixture of various topics"]
# Convert document to tokens
docs = [doc.split() for doc in docs]
# Build a mapping from tokens to ids
from gensim.corpora import Dictionary
dictionary = Dictionary(docs)
# Representing the corpus as a bag of words
corpus = [dictionary.doc2bow(doc) for doc in docs]
# Training the model
from gensim.models import LdaModel
model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10)
There are generally two ways to extract the probability assigned to each topic for a document. I provide both of them here:
# Preprocess the test documents the same way as the training documents
test_doc = ["LDA is an example of a topic model",
"topic modelling refers to the task of identifying topics"]
test_doc = [doc.split() for doc in test_doc]
test_corpus = [dictionary.doc2bow(doc) for doc in test_doc]
# Method 1
from gensim.matutils import cossim
doc1 = model.get_document_topics(test_corpus[0], minimum_probability=0)
doc2 = model.get_document_topics(test_corpus[1], minimum_probability=0)
print(cossim(doc1, doc2))
# Method 2
doc1 = model[test_corpus[0]]
doc2 = model[test_corpus[1]]
print(cossim(doc1, doc2))
Output:
#Method 1
0.8279631530869963
#Method 2
0.828066885140262
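For intuition, what cossim computes on these sparse (topic_id, probability) lists can be sketched in plain Python (sparse_cossim is an illustrative name, not a gensim function):

```python
import math

# Cosine similarity over sparse (id, value) vectors, the same format
# that get_document_topics and model[bow] return.
def sparse_cossim(vec1, vec2):
    d1, d2 = dict(vec1), dict(vec2)
    dot = sum(v * d2.get(k, 0.0) for k, v in d1.items())
    norm1 = math.sqrt(sum(v * v for v in d1.values()))
    norm2 = math.sqrt(sum(v * v for v in d2.values()))
    return dot / (norm1 * norm2)

# Identical topic vectors give similarity 1.0
print(sparse_cossim([(0, 0.6), (1, 0.4)], [(0, 0.6), (1, 0.4)]))  # 1.0
```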
As you can see, the two methods are essentially equivalent; the difference is that the probabilities returned by the second method sometimes do not sum to 1 (see the discussion here). For a large corpus, you can get the probability vector for every document by passing the whole corpus at once:
#Method 1
probability_vector = model.get_document_topics(test_corpus, minimum_probability=0)
#Method 2
probability_vector = model[test_corpus]
Note: the probabilities assigned to the topics of a document may sum to slightly more than 1, or in some cases slightly less than 1. That is due to floating-point rounding errors.
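If you want the probabilities to sum to exactly 1 before comparing, you can renormalize the vector. A minimal sketch (normalize_topic_vector is an illustrative helper, not part of gensim), assuming the (topic_id, probability) list format that get_document_topics returns:

```python
# Renormalize a sparse topic vector so its probabilities sum to exactly 1.
def normalize_topic_vector(vec):
    total = sum(prob for _, prob in vec)
    return [(topic_id, prob / total) for topic_id, prob in vec]

# A vector whose probabilities sum to slightly more than 1
vec = [(0, 0.500001), (1, 0.300001), (2, 0.200001)]
normed = normalize_topic_vector(vec)
print(sum(prob for _, prob in normed))  # 1.0 up to float precision
```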
26
Not sure whether this helps, but I managed to get successful results on document matching and similarity using the actual document as a query.
from gensim import corpora, models, similarities

dictionary = corpora.Dictionary.load('dictionary.dict')
corpus = corpora.MmCorpus("corpus.mm")
lda = models.LdaModel.load("model.lda")  # result from running online LDA (training)
index = similarities.MatrixSimilarity(lda[corpus])
index.save("simIndex.index")
docname = "docs/the_doc.txt"
with open(docname, 'r') as f:
    doc = f.read()
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lda = lda[vec_bow]
sims = index[vec_lda]
sims = sorted(enumerate(sims), key=lambda item: -item[1])
print(sims)
The similarity scores between the query document and every document in the corpus will be the second element of each pair in the result.
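Concretely, with an illustrative sims value (indices and scores made up for the example), each entry is a (document_index, similarity_score) pair, sorted by descending score:

```python
# A sample result for illustration
sims = [(2, 0.93), (0, 0.41), (5, 0.12), (1, 0.07)]

# Best match: first element is the document index, second is the score
best_index, best_score = sims[0]
print(best_index, best_score)  # 2 0.93

# Keep only the three closest documents
top3 = sims[:3]
```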
36
It depends on which similarity metric you want to use. Cosine similarity over the topic distributions is a common choice:
sim = gensim.matutils.cossim(vec_lda1, vec_lda2)
Hellinger distance is a good fit for measuring similarity between probability distributions (such as LDA topics):
import numpy as np
dense1 = gensim.matutils.sparse2full(lda_vec1, lda.num_topics)
dense2 = gensim.matutils.sparse2full(lda_vec2, lda.num_topics)
sim = np.sqrt(0.5 * ((np.sqrt(dense1) - np.sqrt(dense2))**2).sum())
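A self-contained sketch of the same Hellinger formula on dense distributions (gensim also ships gensim.matutils.hellinger, which computes this for you):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Identical distributions are at distance 0; disjoint ones at distance 1
a = [0.5, 0.5, 0.0]
b = [0.0, 0.0, 1.0]
print(hellinger(a, a))  # 0.0
print(hellinger(a, b))  # 1.0
```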