Python vectorization: how to improve the efficiency of a 4-level nested loop

Posted 2024-04-26 15:11:38


I am trying to implement LDA with Gibbs sampling. In the step that updates each topic's word proportions I have a 4-level nested loop that runs very slowly, and I am not sure how to make the code more efficient. My current code is below:

N_W is the vocabulary size, N_D is the number of documents, Z[i,j] is the topic assignment (one of K possible topics), X[i,j] is the count of the j-th word in the i-th document, and Beta has shape [K, N_W], with Beta[k,:] being the word distribution of topic k.

The update is as follows:

for k in range(K): # iteratively for each topic update
    n_k = np.zeros(N_W) # vocab size

    for w in range(N_W):
        for i in range(N_D):
            for j in range(N_W): 
                # counting number of times a word is assigned to a topic
                n_k[w] += (X[i,j] == w) and (Z[i,j] == k) 

    # update
    Beta[k,:] = np.random.dirichlet(gamma + n_k)

2 Answers

I ran some tests with the following matrices:

import numpy as np

K = 90
N_D = 11
N_W = 12

Z = np.random.randint(0, K, size=(N_D, N_W))
X = np.random.randint(0, N_W, size=(N_D, N_W))

gamma = 1

The original code:

%%timeit
Beta = np.zeros((K, N_W))
for k in range(K): # iteratively for each topic update
    n_k = np.zeros(N_W) # vocab size

    for w in range(N_W):
        for i in range(N_D):
            for j in range(N_W): 
                # counting number of times a word is assigned to a topic
                n_k[w] += (X[i,j] == w) and (Z[i,j] == k) 

    # update
    Beta[k,:] = np.random.dirichlet(gamma + n_k)

865 ms ± 8.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Then vectorizing only the two innermost loops:

%%timeit
Beta = np.zeros((K, N_W))

for k in range(K): # iteratively for each topic update
    n_k = np.zeros(N_W) # vocab size

    for w in range(N_W):
        n_k[w] = np.sum((X == w) & (Z == k))


    # update
    Beta[k,:] = np.random.dirichlet(gamma + n_k)

21.6 ms ± 542 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Finally, with some creative use of broadcasting and by hoisting the common terms out of the loop:

%%timeit
Beta = np.zeros((K, N_W))

w = np.arange(N_W)
X_eq_w = np.equal.outer(X, w)

for k in range(K): # iteratively for each topic update
    n_k = np.sum(X_eq_w & (Z == k)[:, :, None], axis=(0, 1))


    # update
    Beta[k,:] = np.random.dirichlet(gamma + n_k)

4.6 ms ± 92.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

The trade-off here is between speed and memory. For the shapes I used this is not memory-intensive, but the intermediate three-dimensional array built in the last solution could get very large.
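As a rough illustration of that memory cost, here is a back-of-the-envelope calculation with hypothetical, larger shapes (the numbers below are made up, not from the benchmark above); the boolean intermediate np.equal.outer(X, w) & (Z == k)[:, :, None] has shape (N_D, N_W, N_W) with one byte per element:

import numpy as np

# Hypothetical corpus sizes, chosen only to show how the intermediate grows
N_D = 2000
N_W = 50000

# Boolean intermediate of shape (N_D, N_W, N_W), one byte per element
intermediate_bytes = N_D * N_W * N_W
print(f"{intermediate_bytes / 1e9:.0f} GB")  # 5000 GB, clearly infeasible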

You can get rid of the two innermost for loops by using logical functions:

for k in range(K): # iteratively for each topic update
    n_k = np.zeros(N_W) # vocab size
    for w in range(N_W):
         a = np.logical_not(X-w) # all X[i,j] == w become True, the others False
         b = np.logical_not(Z-k) # all Z[i,j] == k become True, the others False
         c = np.logical_and(a,b) # True wherever X[i,j] == w and Z[i,j] == k, else False
         n_k[w] = np.sum(c) # sum all True values
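The np.logical_not(X - w) trick works because X - w is zero exactly where X == w, and logical_not of an integer array is True only at its zeros. A quick sanity check of that equivalence (with arbitrary test data, not the arrays above):

import numpy as np

X = np.random.randint(0, 12, size=(11, 12))  # arbitrary test data
w = 3

# logical_not(X - w) is True exactly where X - w == 0, i.e. where X == w
assert np.array_equal(np.logical_not(X - w), X == w)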

Or even as a one-liner:

n_k = np.array([[np.sum(np.logical_and(np.logical_not(X[:N_D,:N_W]-w), np.logical_not(Z[:N_D,:N_W]-k))) for w in range(N_W)] for k in range(K)])

Each row of n_k can then be used for the Beta calculation. This version also uses N_D and N_W as explicit limits, in case they are not equal to the sizes of X and Z.
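For completeness, a minimal sketch of that last step, drawing one Dirichlet sample per row of n_k just as the per-topic update in the question does (the shapes and gamma mirror the benchmark setup above, and the n_k here is a stand-in for the counts computed by the one-liner):

import numpy as np

K, N_W = 90, 12
gamma = 1
n_k = np.random.randint(0, 5, size=(K, N_W))  # stand-in for the counts above

# One Dirichlet draw per topic, as in Beta[k,:] = np.random.dirichlet(gamma + n_k)
Beta = np.array([np.random.dirichlet(gamma + row) for row in n_k])
print(Beta.shape)  # (90, 12); each row sums to 1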
