Implementing a topic model in Python (numpy)

Published 2024-04-29 21:21:59


Recently I implemented Gibbs sampling for the LDA topic model in Python using numpy, adapting some code from a website. In each Gibbs sampling iteration, we remove the current word, sample a new topic for it from the conditional posterior distribution inferred by the LDA model, and update the word-topic counts, as follows:

import numpy

# docs, z_m_n, n_m_z, n_z_t, n_z, n_m, alpha, beta, V, K
# are assumed to be initialised beforehand
for m, doc in enumerate(docs): #m: doc id
  for n, t in enumerate(doc): #n: index of word inside document, t: id of the word in the vocabulary
    # discount counts for word t with associated topic z
    z = z_m_n[m][n]
    n_m_z[m][z] -= 1
    n_z_t[z, t] -= 1
    n_z[z] -= 1
    n_m[m] -= 1

    # sample a new topic from the full conditional (multinomial)
    p_z_left = (n_z_t[:, t] + beta) / (n_z + V * beta)
    p_z_right = (n_m_z[m] + alpha) / (n_m[m] + alpha * K)
    p_z = p_z_left * p_z_right
    p_z /= numpy.sum(p_z)
    new_z = numpy.random.multinomial(1, p_z).argmax()

    # set z as the new topic and increment counts
    z_m_n[m][n] = new_z
    n_m_z[m][new_z] += 1
    n_z_t[new_z, t] += 1
    n_z[new_z] += 1
    n_m[m] += 1

In the code above, the new (single) z is sampled with numpy's multinomial function.
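For readers unfamiliar with this idiom, here is a minimal, self-contained illustration of the one-draw multinomial trick (the probabilities are made-up stand-ins, not values from the model):

```python
import numpy

# one draw from a multinomial with probabilities p yields a one-hot
# vector; argmax converts it to the index of the sampled category
p = numpy.array([0.2, 0.5, 0.3])
one_hot = numpy.random.multinomial(1, p)  # one-hot vector of length 3
z = one_hot.argmax()                      # index of the chosen topic

assert one_hot.sum() == 1
assert 0 <= z < len(p)
```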

Now I want to implement the joint sentiment-topic model from this paper. For this I need several additional structures to keep track of the required counts.

Here is the problem: in this Gibbs sampler, for each word observed in a document, a new topic and a sentiment label are drawn jointly from the conditional posterior (Equation 5 on page 4 of the paper). How can I sample these two values at once in Python?

Thanks in advance...


1 answer
User
#1 · Posted on 2024-04-29 21:21:59

Try this. Sampling from the joint distribution over topics and sentiment labels simply means that the whole T x S matrix of probabilities should sum to 1.
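The core idea, stripped of the model's counts, is: flatten the T x S matrix, draw one flat index, and map it back to a (topic, sentiment) pair. A minimal sketch (T, S, and the probability matrix here are stand-ins):

```python
import numpy

T, S = 3, 2                  # number of topics and sentiment labels
p = numpy.random.rand(T, S)  # stand-in joint weights
p /= p.sum()                 # normalise so the whole T x S matrix sums to 1

# flatten, draw one sample, and map the flat index back to (topic, sentiment);
# the flat index of row j, column k in a row-major T x S matrix is j * S + k
flat = numpy.random.multinomial(1, p.ravel()).argmax()
j, k = numpy.unravel_index(flat, (T, S))  # same as j, k = divmod(flat, S)

assert 0 <= j < T and 0 <= k < S
```

`numpy.unravel_index` saves you from writing the `//` and `%` arithmetic by hand and stays correct if T and S differ.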

import numpy

docs = [[0, 1], [0, 0], [1, 0, 1]]
D = len(docs)
z_d_n = [[0 for _ in range(len(d))] for d in docs]  # topic of each word
l_d_n = [[0 for _ in range(len(d))] for d in docs]  # sentiment of each word

V = 2  # vocabulary size
T = 2  # number of topics
S = 2  # number of sentiment labels
n_m_j_k = numpy.zeros((V, T, S))  # word / topic / sentiment counts
n_j_k_d = numpy.zeros((T, S, D))  # topic / sentiment / document counts
n_j_k = numpy.zeros((T, S))
n_k_d = numpy.zeros((S, D))
n_d = numpy.zeros(D)

beta = .1
alpha = .1
gamma = .1

# initialise the counts from the current assignments
for d, doc in enumerate(docs): #d: doc id
    for n, m in enumerate(doc): #n: index of the word inside the document, m: id of the word in the vocabulary
        # j is the topic
        j = z_d_n[d][n]
        # k is the sentiment
        k = l_d_n[d][n]
        n_m_j_k[m][j][k] += 1
        n_j_k_d[j][k][d] += 1
        n_j_k[j][k] += 1
        n_k_d[k][d] += 1
        n_d[d] += 1

# one Gibbs sweep
for d, doc in enumerate(docs): #d: doc id
    for n, m in enumerate(doc): #n: index of the word inside the document, m: id of the word in the vocabulary
        # discount counts for the current topic j and sentiment k
        j = z_d_n[d][n]
        k = l_d_n[d][n]
        n_m_j_k[m][j][k] -= 1
        n_j_k_d[j][k][d] -= 1
        n_j_k[j][k] -= 1
        n_k_d[k][d] -= 1
        n_d[d] -= 1

        # sample a new topic and sentiment label jointly;
        # each factor below is a T x S array
        p_left = (n_m_j_k[m] + beta) / (n_j_k + V * beta)
        p_mid = (n_j_k_d[:, :, d] + alpha) / numpy.tile(n_k_d[:, d] + T * alpha, (T, 1))
        p_right = numpy.tile(n_k_d[:, d] + gamma, (T, 1)) / numpy.tile(n_d[d] + S * gamma, (T, S))
        p = p_left * p_mid * p_right
        p /= numpy.sum(p)
        new_jk = numpy.random.multinomial(1, p.reshape(T * S)).argmax()
        # the flat index of row j, column k is j * S + k
        j = new_jk // S
        k = new_jk % S

        z_d_n[d][n] = j
        l_d_n[d][n] = k
        n_m_j_k[m][j][k] += 1
        n_j_k_d[j][k][d] += 1
        n_j_k[j][k] += 1
        n_k_d[k][d] += 1
        n_d[d] += 1