How can I efficiently process a numpy array in blocks, like Matlab's blkproc (blockproc) function?
I am looking for a good approach to efficiently divide an image into small regions, process each region separately, and then re-assemble the results into one processed image. Matlab has a tool for this called blkproc (replaced by blockproc in newer versions of Matlab).
In an ideal world, the function or class would also handle overlap between the divisions of the input matrix. In the Matlab help, blkproc is defined as:
B = blkproc(A,[m n],[mborder nborder],fun,...)
- A is your input matrix,
- [m n] is the block size,
- [mborder nborder] is the size of the border region (optional),
- fun is the function to apply to each block.
I have cobbled together an approach of my own, but it feels clumsy and I bet there is a much better way. At the risk of embarrassing myself, here is my code:
import numpy as np

def segmented_process(M, blk_size=(16, 16), overlap=(0, 0), fun=None):
    rows = []
    for i in range(0, M.shape[0], blk_size[0]):
        cols = []
        for j in range(0, M.shape[1], blk_size[1]):
            cols.append(fun(M[i:i + blk_size[0], j:j + blk_size[1]]))
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)

R = np.random.rand(128, 128)
passthrough = lambda x: x
Rprime = segmented_process(R, blk_size=(16, 16),
                           overlap=(0, 0),
                           fun=passthrough)
np.all(R == Rprime)
6 Answers
I took both of the submitted approaches as well as my original one and compared the results. As @eat points out, the outcome depends on the nature of your input data. Surprisingly, concatenation beats view processing in a few instances; each method has a sweet spot. Here is my benchmark code:
import numpy as np
from itertools import product
from numpy.lib.stride_tricks import as_strided

def segment_and_concatenate(M, fun=None, blk_size=(16, 16), overlap=(0, 0)):
    # Truncate M to a multiple of blk_size.
    M = M[:M.shape[0] - M.shape[0] % blk_size[0],
          :M.shape[1] - M.shape[1] % blk_size[1]]
    rows = []
    for i in range(0, M.shape[0], blk_size[0]):
        cols = []
        for j in range(0, M.shape[1], blk_size[1]):
            max_ndx = (min(i + blk_size[0], M.shape[0]),
                       min(j + blk_size[1], M.shape[1]))
            cols.append(fun(M[i:max_ndx[0], j:max_ndx[1]]))
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)

def block_view(A, block=(3, 3)):
    """Provide a 2D block view of a 2D array. No error checking is made.
    Therefore meaningful (as implemented) only for blocks strictly
    compatible with the shape of A."""
    # The shape and strides computations may seem strange at first,
    # unless one recognizes the 'tuple additions' involved ;-)
    shape = (A.shape[0] // block[0], A.shape[1] // block[1]) + block
    strides = (block[0] * A.strides[0], block[1] * A.strides[1]) + A.strides
    return as_strided(A, shape=shape, strides=strides)

def segmented_stride(M, fun, blk_size=(3, 3), overlap=(0, 0)):
    output = np.zeros(M.shape)
    B = block_view(M, block=blk_size)
    O = block_view(output, block=blk_size)
    # Iterate over rows of blocks; an elementwise fun is applied to a whole
    # row of blocks at a time and written back through the output view.
    for b, o in zip(B, O):
        o[:, :] = fun(b)
    return output

def view_process(M, fun=None, blk_size=(16, 16), overlap=None):
    output = np.zeros(M.shape)
    dz = np.asarray(blk_size)
    # Iterate only over the part of M that is a multiple of blk_size.
    shape = M.shape - np.mod(np.asarray(M.shape), blk_size)
    for indices in product(*[range(0, stop, step)
                             for stop, step in zip(shape, blk_size)]):
        output[indices[0]:indices[0] + dz[0],
               indices[1]:indices[1] + dz[1]] = fun(M[indices[0]:indices[0] + dz[0],
                                                      indices[1]:indices[1] + dz[1]])
    return output

if __name__ == "__main__":
    from timeit import timeit
    R = np.random.rand(128, 128)
    squareit = lambda x: x ** 2
    kn = np.array(list(product((8, 16, 64, 128),
                               (128, 512, 2048, 4096))))
    methods = ("segment_and_concatenate",
               "view_process",
               "segmented_stride")
    t = np.zeros((kn.shape[0], len(methods)))
    for i, (k, N) in enumerate(kn):
        for j, method in enumerate(methods):
            # The setup assumes this file is saved as segmented_processing.py.
            t[i, j] = timeit("""Rprime = %s(R, blk_size=(%d,%d),
                                            overlap=(0,0),
                                            fun=squareit)""" % (method, k, k),
                             setup="""
from segmented_processing import %s
import numpy as np
R = np.random.rand(%d,%d)
squareit = lambda x: x**2""" % (method, N, N),
                             number=5)
        print("k =", k, "N =", N)
        print("  Speed up (view vs. concat, stride vs. concat): %0.4f, %0.4f" % (
            t[i][0] / t[i][1],
            t[i][0] / t[i][2]))
Here are the results:
Note that for small block sizes the segmented-stride method is 3-4x faster than the other two. Only for large block sizes (128 x 128) on very large matrices (2048 x 2048 and above) does the view-processing approach win, and then only barely. Based on this bake-off, it looks like @eat gets the check mark! Thanks to both of you for good examples.
Process the data through slices, i.e. views into the array; joining the data back together by concatenation is very expensive. Here is a fragment illustrating the idea, with a self-contained sketch following it:
for x in range(0, 160, 16):
    for y in range(0, 160, 16):
        view = A[x:x + 16, y:y + 16]  # a view into A, not a copy
        view[:, :] = fun(view)        # the result is written back into A in place
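A minimal self-contained sketch of the same idea, assuming an elementwise fun; the function name process_in_place, the example array, and the block function are illustrative choices, not part of the original answer. Edge tiles smaller than the block size are handled by the slicing itself:
import numpy as np

def process_in_place(A, fun, bs=(16, 16)):
    # Walk the array in bs-sized tiles; slicing truncates the edge tiles.
    for x in range(0, A.shape[0], bs[0]):
        for y in range(0, A.shape[1], bs[1]):
            view = A[x:x + bs[0], y:y + bs[1]]  # a view, not a copy
            view[:, :] = fun(view)              # write the result back in place
    return A

A = np.random.rand(200, 150)
process_in_place(A, lambda block: block ** 2)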
Here are some examples of a different (loop-free) way to process the blocks:
import numpy as np
from numpy.lib.stride_tricks import as_strided as ast

A = np.arange(36).reshape(6, 6)
print(A)
# [[ 0  1  2  3  4  5]
#  [ 6  7  8  9 10 11]
#  ...
#  [30 31 32 33 34 35]]

# 2x2 block view; deriving the strides from A.strides keeps the view
# correct regardless of the array's itemsize
B = ast(A, shape=(3, 3, 2, 2),
        strides=(2 * A.strides[0], 2 * A.strides[1]) + A.strides)
print(B[1, 1])
# [[14 15]
#  [20 21]]

# For preserving the original shape: swap the two columns of every
# 2x2 block, writing the result back into A through the view
B[:, :] = np.dot(B[:, :], np.array([[0, 1], [1, 0]]))
print(A)
# [[ 1  0  3  2  5  4]
#  [ 7  6  9  8 11 10]
#  ...
#  [31 30 33 32 35 34]]
print(B[1, 1])
# [[15 14]
#  [21 20]]

# For reducing shape, processing in 3D is enough
C = B.reshape(3, 3, -1)
print(C.sum(-1))
# [[ 14  22  30]
#  [ 62  70  78]
#  [110 118 126]]
So simply replicating the matlab functionality in numpy is not always the best way to proceed; sometimes a bit of "out of the box" thinking is needed.
Caveat:
In general, implementations based on stride tricks may (but do not necessarily) suffer some performance penalties, so be prepared to measure your performance either way. In any case it is wise to first check whether the needed functionality (or something similar enough to adapt easily) has already been implemented in numpy or scipy.
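As one example of that advice (this aside is not part of the original answer): recent numpy releases (1.20 and later) ship numpy.lib.stride_tricks.sliding_window_view, which produces the same kind of non-overlapping block view without computing any strides by hand:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # numpy >= 1.20

A = np.arange(144).reshape(12, 12)
# All overlapping 3x3 windows, then keep every 3rd window along each axis,
# which yields the non-overlapping 3x3 blocks (shape (4, 4, 3, 3)).
blocks = sliding_window_view(A, (3, 3))[::3, ::3]
print(blocks[0, 0])
# [[ 0  1  2]
#  [12 13 14]
#  [24 25 26]]
Note that sliding_window_view returns a read-only view by default, so it is suited to computing per-block results rather than writing back in place.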
Update:
Please note that there is no real magic involved here with strides, so I'll just provide a simple function to get a block_view of any suitable 2D numpy array. So here we go:
from numpy.lib.stride_tricks import as_strided as ast

def block_view(A, block=(3, 3)):
    """Provide a 2D block view of a 2D array. No error checking is made.
    Therefore meaningful (as implemented) only for blocks strictly
    compatible with the shape of A."""
    # The shape and strides computations may seem strange at first,
    # unless one recognizes the 'tuple additions' involved ;-)
    shape = (A.shape[0] // block[0], A.shape[1] // block[1]) + block
    strides = (block[0] * A.strides[0], block[1] * A.strides[1]) + A.strides
    return ast(A, shape=shape, strides=strides)

if __name__ == '__main__':
    from numpy import arange
    A = arange(144).reshape(12, 12)
    print(block_view(A)[0, 0])
    # [[ 0  1  2]
    #  [12 13 14]
    #  [24 25 26]]
    print(block_view(A, (2, 6))[0, 0])
    # [[ 0  1  2  3  4  5]
    #  [12 13 14 15 16 17]]
    print(block_view(A, (3, 12))[0, 0])
    # [[ 0  1  2  3  4  5  6  7  8  9 10 11]
    #  [12 13 14 15 16 17 18 19 20 21 22 23]
    #  [24 25 26 27 28 29 30 31 32 33 34 35]]
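As a small follow-up sketch (not from the original answer, and assuming the block_view function above is in scope): for block functions that keep the block shape, results can be written straight through the view, which gives a simple blkproc-like helper. The name apply_blockwise and the demeaning example are illustrative assumptions only.
import numpy as np

def apply_blockwise(A, fun, block=(16, 16)):
    # Apply fun to each block of A in place via the block view.
    # Assumes A's shape is an exact multiple of `block` and that fun
    # returns an array with the same shape as its input block.
    B = block_view(A, block=block)
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            B[i, j] = fun(B[i, j])   # writes straight through to A
    return A

R = np.random.rand(128, 128)
apply_blockwise(R, lambda blk: blk - blk.mean(), block=(16, 16))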