Vectorized calculation of an interdependency matrix


I'm tracking multiple discrete time series at multiple temporal resolutions, resulting in an S×R×B matrix where S is the number of time series, R is the number of resolutions, and B is the buffer, i.e. how many values each series remembers. Each series is discrete and uses a limited range of natural numbers to represent its values. I'll call these "symbols" here.
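
For concreteness, here is a minimal sketch of that layout (the shapes and symbol values below are made up for illustration):

import numpy as np

S, R, B = 2, 2, 3  # series, resolutions, buffer length
# data[s, r, :] is the circular buffer of series s at resolution r;
# its entries are small natural numbers ("symbols")
data = np.random.randint(0, 4, size=(S, R, B))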

For each series, I want to compute how often, across all measurements, the symbol of any previous measurement directly precedes the symbol of the current measurement. I've solved this with for loops as shown below, but for obvious reasons I'd like to vectorize it.

I'm not sure whether the way I've structured the data is efficient, so I'm open to suggestions there as well. The ratios matrix in particular could, I think, be done differently.

Thanks in advance!

import itertools

import numpy as np


def supports_loop(data, num_series, resolutions, buffer_size, vocab_size):
    # For small test matrices we can calculate the complete matrix without problems
    indices = []
    indices.append(range(num_series))
    indices.append(range(vocab_size))
    indices.append(range(num_series))
    indices.append(range(vocab_size))
    indices.append(range(resolutions))

    # This is huge! :/
    # dimensions:
    #   series and value for which we calculate,
    #   series and value which precedes that measurement,
    #   resolution
    ratios = np.full((num_series, vocab_size, num_series, vocab_size, resolutions), 0.0)

    for idx in itertools.product(*indices):
        s0, v0 = idx[0], idx[1]  # the series and symbol for which we calculate
        s1, v1 = idx[2], idx[3]  # the series and symbol which should precede the one we're calculating for
        res = idx[4]

        # Find the positions where s0==v0
        found0 = np.where(data[s0, res, :] == v0)[0]
        if found0.size == 0:
            continue
        #print('found {}={} at {}'.format(s0, v0, found0))

        # Check how often s1==v1 right before s0==v0
        candidates = (s1, res, (found0 - 1 + buffer_size) % buffer_size)
        found01 = np.count_nonzero(data[candidates] == v1)
        if found01 == 0:
            continue

        print('found {}={} following {}={} at {}'.format(s0, v0, s1, v1, found01))
        # total01 = number of positions where both s0 and s1 are defined (i.e. >= 0)
        total01 = np.count_nonzero((data[s0, res, :] >= 0) & (data[s1, res, :] >= 0))
        ratio = (float(found01) / total01) if total01 > 0 else 0.0
        ratios[idx] = ratio

    return ratios
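
For scale: the loop body runs num_series² × vocab_size² × resolutions times, so the realistic dataset below (S=100, R=5, 50 symbols) amounts to 100²·50²·5 = 125 million iterations, each scanning full buffers, which is why the baseline takes minutes there.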


def stackoverflow_example(fnc):
    data = np.array([
        [[0, 0, 1],  # series 0, resolution 0
         [1, 3, 2]], # series 0, resolution 1

        [[2, 1, 2],  # series 1, resolution 0
         [3, 3, 3]], # series 1, resolution 1
    ])

    num_series = data.shape[0]
    resolutions = data.shape[1]
    buffer_size = data.shape[2]
    vocab_size = np.max(data)+1

    ratios = fnc(data, num_series, resolutions, buffer_size, vocab_size)
    coordinates = np.argwhere(ratios > 0.0)
    nz_values = ratios[ratios > 0.0]
    print(np.hstack((coordinates, nz_values[:,None])))
    print('0/0 precedes 0/0 in 1 out of 3 cases: {}'.format(np.isclose(ratios[0,0,0,0,0], 1.0/3.0)))
    print('1/2 precedes 0/0 in 2 out of 3 cases: {}'.format(np.isclose(ratios[0,0,1,2,0], 2.0/3.0)))
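
Every answer below is run through this harness; for the baseline that is simply:

stackoverflow_example(supports_loop)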

Expected output: 21 pairs, each row listing the 5 coordinates followed by the ratio.

In the example above, the 0 in series 0 follows the 2 in series 1 in two out of three cases (because the buffers are circular), so the ratio at [0, 0, 1, 2, 0] would be ~0.6666. Likewise, in series 0 the value 0 follows itself in one out of three cases, so the ratio at [0, 0, 0, 0, 0] would be ~0.3333. There are a few others that are > 0.0 as well.
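
As a quick hand-check of the first claim, here is the baseline's circular-index logic applied directly to the two relevant buffers from the example data:

import numpy as np

buf0 = np.array([0, 0, 1])  # series 0, resolution 0
buf1 = np.array([2, 1, 2])  # series 1, resolution 0
B = buf0.size

found0 = np.where(buf0 == 0)[0]    # symbol 0 occurs at positions [0, 1]
prev_pos = (found0 - 1 + B) % B    # circular predecessors: positions [2, 0]
ratio = np.count_nonzero(buf1[prev_pos] == 2) / B
print(ratio)                       # 0.666..., i.e. ratios[0, 0, 1, 2, 0]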


I tested every answer on two datasets: a tiny one (shown above) and a much more realistic one (100 series, 5 resolutions, 10 values per series, 50 symbols).

Results

Answer        Time (tiny)     Time (huge)     All pairs found (tiny=21)
-----------------------------------------------------------------------
Baseline      ~1ms            ~675s (!)       Yes
Saedeas       ~0.13ms         ~1.4ms          No (!)
Saedeas2      ~0.20ms         ~4.0ms          Yes, +cross resolutions
Elliot_1      ~0.70ms         ~100s (!)       Yes
Elliot_2      ~1ms            ~21s (!)        Yes
Kuppern_1     ~0.39ms         ~2.4s (!)       Yes
Kuppern_2     ~0.18ms         ~28ms           Yes
Kuppern_3     ~0.19ms         ~24ms           Yes
David         ~0.21ms         ~27ms           Yes

Saedeas's second approach is the clear winner! Thank you all very much :)


3 Answers

If I've understood your problem correctly, I think this bit of code will get you the symbol pairs you're looking for in a relatively quick, vectorized fashion.

import numpy as np
import time
from collections import Counter

series = 2
resolutions = 2
buffer_len = 3
symbols = range(3)

#mat = np.random.choice(symbols, size=(series, resolutions, buffer_len)).astype('uint8')

mat = np.array([
        [[0, 0, 1],  # series 0, resolution 0
         [1, 3, 2]],  # series 0, resolution 1
        [[2, 1, 2],  # series 1, resolution 0
         [3, 3, 3]],  # series 1, resolution 1
    ])

start = time.time()
index_mat = np.indices(mat.shape)

right_shift_indices = np.roll(index_mat, -1, axis=3)
mat_shifted = mat[right_shift_indices[0], right_shift_indices[1], right_shift_indices[2]]

# These construct all the pairs directly
first_series = np.repeat(range(series), series*resolutions*buffer_len)
second_series = np.tile(np.repeat(range(series), resolutions*buffer_len), series)
res_loop = np.tile(np.repeat(range(resolutions), buffer_len), series*series)
mat_unroll = np.repeat(mat, series, axis=0)
shift_unroll = np.tile(mat_shifted, series)

# Constructs the pairs
pairs = list(zip(np.ravel(first_series),
                 np.ravel(second_series),
                 np.ravel(res_loop),
                 np.ravel(mat_unroll),
                 np.ravel(shift_unroll)))

pair_time = time.time() - start
results = Counter(pairs)
end = time.time() - start

print("Mat: {}").format(mat)
print("Pairs: {}").format(results)
print("Number of Pairs: {}".format(len(pairs)))
print("Pair time is: {}".format(pair_time))
print("Count time is: {}".format(end-pair_time))
print("Total time is: {}".format(end))

The basic idea is to circularly shift each buffer by the appropriate amount depending on which time series it belongs to (I think this is what your current code does as well). Then all the symbol pairs can be generated by simply zipping lists that are offset by 1 along the series axis.
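
The pairing step in isolation, on a single made-up buffer:

import numpy as np

buf = np.array([0, 0, 1])   # one circular buffer
succ = np.roll(buf, -1)     # the value one step later (wraps around)
print(list(zip(buf.tolist(), succ.tolist())))  # [(0, 0), (0, 1), (1, 0)]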


Edit: A real final attempt. Fully vectorized.

First off, you're hurting yourself a bit by not explicitly nesting the for loops. You end up repeating a lot of effort and not saving any memory in return. When the loops are nested, you can move some computations from one level to another and work out which inner loops can be vectorized.

def supports_5_loop(data, num_series, resolutions, buffer_size, vocab_size):
    ratios = np.full((num_series, vocab_size, num_series, vocab_size, resolutions), 0.0)
    for res in range(resolutions):
        for s0 in range(num_series):
            # Find the positions where s0==v0
            for v0 in np.unique(data[s0, res]):
                # only need to find indices once for each series and value
                found0 = np.where(data[s0, res, :] == v0)[0]
                for s1 in range(num_series):
                    # Check how often s1==v1 right before s0==v0
                    candidates = (s1, res, (found0 - 1 + buffer_size) % buffer_size)
                    total01 = np.logical_or(data[s0, res, :] >= 0, data[s1, res, :] >= 0).sum()
                    # can skip inner loops if there are no candidates
                    if total01 == 0:
                        continue
                    for v1 in range(vocab_size):
                        found01 = np.count_nonzero(data[candidates] == v1)
                        if found01 == 0:
                            continue

                        ratio = (float(found01) / total01)
                        ratios[(s0, v0, s1, v1, res)] = ratio

    return ratios

You'll see in the timings that most of the speedup comes from not repeating work.

Once the nested structure is in place, you can start looking at vectorization and other optimizations.

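A sketch of what supports_4_loop (referenced in the timings below) might look like — this is a reconstruction, not the answer's original code: the innermost v1 loop of supports_5_loop is replaced by a single np.bincount over the predecessor positions, which assumes all buffered values are non-negative, as in the example data:

def supports_4_loop(data, num_series, resolutions, buffer_size, vocab_size):
    ratios = np.full((num_series, vocab_size, num_series, vocab_size, resolutions), 0.0)
    for res in range(resolutions):
        for s0 in range(num_series):
            for v0 in np.unique(data[s0, res]):
                found0 = np.where(data[s0, res, :] == v0)[0]
                prev_idx = (found0 - 1 + buffer_size) % buffer_size
                for s1 in range(num_series):
                    total01 = np.logical_or(data[s0, res, :] >= 0,
                                            data[s1, res, :] >= 0).sum()
                    if total01 == 0:
                        continue
                    # vectorized innermost loop: count every preceding symbol at once
                    counts = np.bincount(data[s1, res, prev_idx], minlength=vocab_size)
                    ratios[s0, v0, s1, :, res] = counts / float(total01)
    return ratios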

Unfortunately, I was only able to vectorize the innermost loop, and that only gained an extra 10% speedup. Outside of the innermost loop you can't guarantee that all the vectors are the same size, so you can't build an array.

In [121]: (np.all(supports_loop(data, num_series, resolutions, buffer_size, vocab_size) == supports_5_loop(data, num_series, resolutions, buffer_size, vocab_size)))
Out[121]: True

In [122]: (np.all(supports_loop(data, num_series, resolutions, buffer_size, vocab_size) == supports_4_loop(data, num_series, resolutions, buffer_size, vocab_size)))
Out[122]: True

In [123]: %timeit(supports_loop(data, num_series, resolutions, buffer_size, vocab_size))
2.29 ms ± 73.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [124]: %timeit(supports_5_loop(data, num_series, resolutions, buffer_size, vocab_size))
949 µs ± 5.37 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [125]: %timeit(supports_4_loop(data, num_series, resolutions, buffer_size, vocab_size))
843 µs ± 3.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

A trick for vectorizing this is to create an array comb[i] = buffer1[i] + buffer2[i-1]*voc_size for each pair of series. Each combination then gets a unique value in the array, and the combination can be recovered with v1[i] = comb[i] % voc_size, v2[i] = comb[i] // voc_size. As long as the number of series is not very high (< 10000, I think), there is no point in vectorizing this any further.
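
To make the encoding concrete, a minimal round-trip check with made-up values:

import numpy as np

voc_size = 5
v1 = np.array([0, 3, 4])   # current symbols
v2 = np.array([2, 2, 1])   # preceding symbols
comb = v1 + v2 * voc_size  # one unique code per (current, previous) pair
assert np.array_equal(comb % voc_size, v1)   # decodes the current symbol
assert np.array_equal(comb // voc_size, v2)  # decodes the preceding symbol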

def support_vectorized(data, num_series, resolutions, buffer_size, vocab_size):
    ratios = np.zeros((num_series, vocab_size, num_series, vocab_size, resolutions))
    prev = np.roll(data, 1, axis=2)  # Get previous values
    prev *= vocab_size  # To separate prev from data
    for i, series in enumerate(data):
        for j, prev_series in enumerate(prev):
            comb = series + prev_series  # unique code per (current, previous) pair
            for k, buffer in enumerate(comb):
                idx, counts = np.unique(buffer, return_counts=True)
                v = idx % vocab_size    # decode the current symbol
                v2 = idx // vocab_size  # decode the preceding symbol
                ratios[i, v, j, v2, k] = counts / buffer_size
    return ratios

However, it is possible to do a full vectorization if S or R is large, but this uses a lot of memory:

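A sketch of what supports_full_vectorized (timed below) might look like — again a reconstruction rather than the answer's original code: the unique-counting logic of row_unique2 (defined just below) applied in one shot to the full (S, S, R, B) array of combinations:

def supports_full_vectorized(data, num_series, resolutions, buffer_size, vocab_size):
    prev = np.roll(data, 1, axis=2) * vocab_size
    # every series combined with every shifted series: shape (S, S, R, B)
    comb = data[:, None, :, :] + prev[None, :, :, :]
    comb.sort(axis=-1)
    changes = np.concatenate((
        np.ones(comb.shape[:-1] + (1,), dtype=bool),
        comb[..., 1:] != comb[..., :-1]), axis=-1)
    vals = comb[changes]
    idxs = np.nonzero(changes)
    tmp = np.hstack((idxs[-1], 0))
    counts = np.where(tmp[1:], np.diff(tmp), comb.shape[-1] - tmp[:-1])
    ratios = np.zeros((num_series, vocab_size, num_series, vocab_size, resolutions))
    ratios[idxs[0], vals % vocab_size, idxs[1], vals // vocab_size, idxs[2]] = counts / buffer_size
    return ratios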

However, for S=100 this is slower than the previous solution. A compromise is to keep a for loop over the series to reduce memory usage:

def row_unique2(comb):
    # Equivalent to calling np.unique(..., return_counts=True) on every row
    # (last axis) at once: sort each row, flag positions where the value
    # changes, then turn the gaps between flags into run lengths.
    comb.sort(axis=-1)
    changes = np.concatenate((
        np.ones((comb.shape[0], comb.shape[1], 1), dtype="bool"),
        comb[:, :, 1:] != comb[:, :, :-1]), axis=-1)
    vals = comb[changes]        # the unique values, row by row
    idxs = np.nonzero(changes)  # which (series, resolution) row each value belongs to
    tmp = np.hstack((idxs[-1], 0))
    # run length = distance to the next flag, or to the end of the row
    counts = np.where(tmp[1:], np.diff(tmp), comb.shape[-1] - tmp[:-1])
    return idxs, vals, counts
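
A quick sanity check (on made-up data) that row_unique2 agrees with np.unique row by row:

import numpy as np

comb = np.random.randint(0, 6, size=(2, 3, 5))
idxs, vals, counts = row_unique2(comb.copy())  # note: row_unique2 sorts its argument in place

u, c = np.unique(comb[0, 0], return_counts=True)
row = (idxs[0] == 0) & (idxs[1] == 0)
assert np.array_equal(vals[row], u) and np.array_equal(counts[row], c)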


def supports_half_vectorized(data, num_series, resolutions, buffer_size, vocab_size):
    prev = np.roll(data, 1, axis=2)*vocab_size
    ratios = np.zeros((num_series, vocab_size, num_series, vocab_size, resolutions))
    for i, series in enumerate(data):
        comb = series + prev
        idxs, vals, counts = row_unique2(comb)
        ratios[i, vals % vocab_size, idxs[0], vals // vocab_size, idxs[1]] = counts/buffer_size
    return ratios

The runtimes of the different solutions show that supports_half_vectorized is the fastest:

In [41]: S, R, B, voc_size = (100, 5, 1000, 29)

In [42]: data = np.random.randint(voc_size, size=S*R*B).reshape((S, R, B))

In [43]: %timeit support_vectorized(data, S, R, B, voc_size)
1 loop, best of 3: 4.84 s per loop

In [44]: %timeit supports_full_vectorized(data, S, R, B, voc_size)
1 loop, best of 3: 5.3 s per loop

In [45]: %timeit supports_half_vectorized(data, S, R, B, voc_size)
1 loop, best of 3: 4.36 s per loop

In [46]: %timeit supports_4_loop(data, S, R, B, voc_size)
1 loop, best of 3: 36.7 s per loop
