Why does adding one column of padding to a shared memory array make the kernel 40% faster?

Why does this matrix transpose program run faster when the shared memory array gets one extra column?

I found the program here: PyCuda/Examples/MatrixTranspose.

Source code

import pycuda.driver
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule
import numpy

block_size = 16

def _get_transpose_kernel(offset):
    mod = SourceModule("""
    #define BLOCK_SIZE %(block_size)d
    #define A_BLOCK_STRIDE (BLOCK_SIZE * a_width)
    #define A_T_BLOCK_STRIDE (BLOCK_SIZE * a_height)

    __global__ void transpose(float *A_t, float *A, int a_width, int a_height)
    {
        // Base indices in A and A_t
        int base_idx_a   = blockIdx.x * BLOCK_SIZE + blockIdx.y * A_BLOCK_STRIDE;
        int base_idx_a_t = blockIdx.y * BLOCK_SIZE + blockIdx.x * A_T_BLOCK_STRIDE;

        // Global indices in A and A_t
        int glob_idx_a   = base_idx_a + threadIdx.x + a_width * threadIdx.y;
        int glob_idx_a_t = base_idx_a_t + threadIdx.x + a_height * threadIdx.y;

        /** why does the +1 offset make the kernel faster? **/
        __shared__ float A_shared[BLOCK_SIZE][BLOCK_SIZE+%(offset)d]; 

        // Load a BLOCK_SIZE x BLOCK_SIZE tile of A into shared memory (untransposed)
        A_shared[threadIdx.y][threadIdx.x] = A[glob_idx_a];

        __syncthreads();

        // Write the tile to A_t, transposing it by swapping threadIdx.x and
        // threadIdx.y on the shared memory read; with a row stride of
        // BLOCK_SIZE this read is the source of the bank conflicts
        A_t[glob_idx_a_t] = A_shared[threadIdx.x][threadIdx.y];
    }
    """% {"block_size": block_size, "offset": offset})

    kernel = mod.get_function("transpose")
    kernel.prepare("PPii")
    return kernel


def transpose(tgt, src, offset):
    krnl = _get_transpose_kernel(offset)
    w, h = src.shape
    assert tgt.shape == (h, w)
    assert w % block_size == 0
    assert h % block_size == 0
    krnl.prepared_call((w // block_size, h // block_size),
                       (block_size, block_size, 1),
                       tgt.gpudata, src.gpudata, w, h)


def run_benchmark():
    from pycuda.curandom import rand
    print(pycuda.autoinit.device.name())
    print("time\tGB/s\tsize\toffset")
    for offset in [0, 1]:
        for size in [2048, 2112]:
            source = rand((size, size), dtype=numpy.float32)
            target = gpuarray.empty((size, size), dtype=source.dtype)

            start = pycuda.driver.Event()
            stop = pycuda.driver.Event()

            # Warm up so compilation and first-launch costs are excluded
            warmup = 2
            for i in range(warmup):
                transpose(target, source, offset)

            pycuda.driver.Context.synchronize()
            start.record()

            count = 10
            for i in range(count):
                transpose(target, source, offset)

            stop.record()
            stop.synchronize()

            # time_since() returns milliseconds; each transpose reads and
            # writes the full matrix, hence the factor of 2
            elapsed_seconds = stop.time_since(start) * 1e-3
            mem_bw = source.nbytes * 2 * count / elapsed_seconds / 1024 / 1024 / 1024

            print("%6.4fs\t%6.4f\t%i\t%i" % (elapsed_seconds, mem_bw, size, offset))


run_benchmark()

Output

Quadro FX 580
time    GB/s    size    offset  
0.0802s 3.8949  2048    0
0.0829s 4.0105  2112    0
0.0651s 4.7984  2048    1
0.0595s 5.5816  2112    1

1 Answer

The cause is shared memory bank conflicts. The CUDA hardware you are using divides shared memory into 16 banks, with consecutive 32-bit words assigned to consecutive banks. When two threads of the same half-warp try to access different words in the same bank at the same time, the accesses conflict and are serialized, and that is exactly what happens here: with a row stride of 16, all 16 threads reading a column of the tile hit the same bank. Increasing the stride of the shared memory array by one guarantees that the same column index in consecutive rows of the shared array falls into different banks, which eliminates most of the possible conflicts.
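
As a concrete illustration of the arithmetic, here is a minimal sketch in plain Python (no GPU required). The 16-bank, one-32-bit-word-per-bank layout matches the hardware described above; banks_for_column is a hypothetical helper for this illustration, not a PyCUDA API:

NUM_BANKS = 16     # banks on this generation of hardware
BLOCK_SIZE = 16

def banks_for_column(stride, col=0):
    # Thread i of a half-warp reads A_shared[i][col]; the linear word
    # index is i * stride + col, and consecutive 32-bit words map to
    # consecutive banks.
    return [(i * stride + col) % NUM_BANKS for i in range(BLOCK_SIZE)]

print(banks_for_column(BLOCK_SIZE))      # stride 16: all 16 threads -> one bank (16-way conflict)
print(banks_for_column(BLOCK_SIZE + 1))  # stride 17: 16 distinct banks (conflict-free)

With the padded stride of 17, the column-wise reads that produce the transposed output become conflict-free, which is the difference the benchmark measures.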

This phenomenon (along with a global memory phenomenon called "partition camping") is discussed in detail in the paper "Optimizing Matrix Transpose in CUDA", which ships alongside the SDK matrix transpose example and is well worth reading.
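
As a rough illustration of why the benchmark also compares sizes 2048 and 2112, here is a back-of-the-envelope sketch of partition camping, assuming the 256-byte-wide, 8-partition global memory layout that the paper describes for 200-series-era hardware (the exact counts vary by GPU):

PARTITION_BYTES = 256
NUM_PARTITIONS = 8       # assumption from the paper; varies by GPU

def partition(byte_address):
    # Consecutive 256-byte chunks of global memory map to consecutive partitions.
    return (byte_address // PARTITION_BYTES) % NUM_PARTITIONS

# Column writes in a 2048-wide float matrix are 8192 bytes apart, a multiple
# of 256 * 8, so successive rows all land in the same partition.
print({partition(r * 2048 * 4) for r in range(16)})   # {0}

# A 2112-wide matrix breaks that alignment and spreads rows across partitions.
print({partition(r * 2112 * 4) for r in range(16)})   # all 8 partitions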
