Speeding up interpolation in Python
I'm processing images in Python, mostly with numpy and scipy. In short, I have an image, and I want to apply many local contractions to it. My prototype code works, and the final image looks great. However, processing time has become a serious bottleneck in our application. Can you help me speed up my image-processing code?
I've tried to boil the code down to the "cartoon" version below. Profiling shows that most of my time is spent on interpolation. Is there an obvious way to speed up execution?
import cProfile, pstats
import numpy
from scipy.ndimage import interpolation

def get_centered_subimage(center_point, window_size, image):
    x, y = numpy.round(center_point).astype(int)
    xSl = slice(max(x - window_size - 1, 0), x + window_size + 2)
    ySl = slice(max(y - window_size - 1, 0), y + window_size + 2)
    subimage = image[xSl, ySl].copy()  # copy so the shift can't write into 'image'
    interpolation.shift(
        subimage, shift=(x, y) - center_point, output=subimage)
    return subimage[1:-1, 1:-1]

"""In real life, this is experimental data"""
im = numpy.zeros((1000, 1000), dtype=float)
"""In real life, this mask is a non-zero pattern"""
window_radius = 10
mask = numpy.zeros((2*window_radius+1, 2*window_radius+1), dtype=float)
"""The x, y coordinates in the output image"""
new_grid_x = numpy.linspace(0, im.shape[0]-1, 2*im.shape[0])
new_grid_y = numpy.linspace(0, im.shape[1]-1, 2*im.shape[1])
"""The grid we'll end up interpolating onto"""
grid_step_x = new_grid_x[1] - new_grid_x[0]
grid_step_y = new_grid_y[1] - new_grid_y[0]
subgrid_radius = numpy.floor(
    (-1 + window_radius * 0.5 / grid_step_x,
     -1 + window_radius * 0.5 / grid_step_y)).astype(int)
subgrid = (
    window_radius + 2 * grid_step_x * numpy.arange(
        -subgrid_radius[0], subgrid_radius[0] + 1),
    window_radius + 2 * grid_step_y * numpy.arange(
        -subgrid_radius[1], subgrid_radius[1] + 1))
subgrid_points = ((2*subgrid_radius[0] + 1) *
                  (2*subgrid_radius[1] + 1))
"""The coordinates of the set of spots we want to contract. In real
life, this set is non-random:"""
numpy.random.seed(0)
num_points = 10000
center_points = numpy.random.random(2*num_points).reshape(num_points, 2)
center_points[:, 0] *= im.shape[0]
center_points[:, 1] *= im.shape[1]
"""The output image"""
final_image = numpy.zeros(
    (new_grid_x.shape[0], new_grid_y.shape[0]), dtype=float)

def profile_me():
    for m, cp in enumerate(center_points):
        """Take an image centered on each illumination point"""
        spot_image = get_centered_subimage(
            center_point=cp, window_size=window_radius, image=im)
        if spot_image.shape != (2*window_radius+1, 2*window_radius+1):
            continue  # Skip to the next spot
        """Mask the image"""
        masked_image = mask * spot_image
        """Resample the image"""
        nearest_grid_index = numpy.round(
            (cp - (new_grid_x[0], new_grid_y[0])) /
            (grid_step_x, grid_step_y)).astype(int)
        nearest_grid_point = (
            (new_grid_x[0], new_grid_y[0]) +
            (grid_step_x, grid_step_y) * nearest_grid_index)
        new_coordinates = numpy.meshgrid(
            subgrid[0] + 2 * (nearest_grid_point[0] - cp[0]),
            subgrid[1] + 2 * (nearest_grid_point[1] - cp[1]))
        resampled_image = interpolation.map_coordinates(
            masked_image,
            (new_coordinates[0].reshape(subgrid_points),
             new_coordinates[1].reshape(subgrid_points))
            ).reshape(2*subgrid_radius[1]+1,
                      2*subgrid_radius[0]+1).T
        """Add the recentered image back to the scan grid"""
        final_image[
            nearest_grid_index[0]-subgrid_radius[0]:
            nearest_grid_index[0]+subgrid_radius[0]+1,
            nearest_grid_index[1]-subgrid_radius[1]:
            nearest_grid_index[1]+subgrid_radius[1]+1,
            ] += resampled_image

cProfile.run('profile_me()', 'profile_results')
p = pstats.Stats('profile_results')
p.strip_dirs().sort_stats('cumulative').print_stats(10)
Vague explanation of what the code does:
We start with a pixelated 2-D image and a set of (x, y) points that generally don't fall on integer grid positions. For each (x, y) point, I want to multiply the image by a small mask centered precisely on that point. Next, we contract or expand the masked region by some amount, and finally add the processed subimage to a final image, which may not have the same pixel size as the original image. (Not my clearest explanation; never mind.)
1 Answer
I'm pretty sure that, as you say, most of the computation time goes into interpolation.map_coordinates(…), which is called once per iteration over center_points, for 10,000 calls in total. Generally, when working with numpy or scipy, you want repeated operations over large arrays to go through their native functions — that is, through a C loop over homogeneous data — rather than handling elements one at a time in Python.
One strategy that could speed up the interpolation, at the cost of extra memory, is:
- First, fetch all the subimages (called masked_image here) into one 3-D array, of size window_radius x window_radius x center_points.size.
- Make a ufunc (read up on those, they're useful) that does the per-subimage work, using numpy.frompyfunc; it should return another 3-D array, of size subgrid_radius[0] x subgrid_radius[1] x center_points.size. In short, this creates a vectorized version of a Python function that can be applied element-wise over arrays.
- Finally, build the final image by summing over the third dimension.
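The batching idea behind steps 1 and 3 can be sketched on toy data. Here a single batched map_coordinates call stands in for the per-subimage step (the extra leading coordinate selects which subimage to sample, so one native call resamples every spot at once); the integer centers, sizes, and the contracted sampling grid are all hypothetical toy values, not the asker's real geometry:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.random((100, 100))
radius = 2
side = 2 * radius + 1
# Toy integer centers, chosen well inside the image bounds.
centers = np.array([[10, 10], [40, 55], [70, 20]])

# Step 1: gather every subimage into one 3-D stack, shape (n_points, side, side).
stack = np.stack([image[x - radius:x + radius + 1,
                        y - radius:y + radius + 1]
                  for x, y in centers])

# Step 2: one batched map_coordinates call instead of one call per spot.
# The first coordinate is always an exact integer, so with order=1 it simply
# selects a layer of the stack; the other two do the 2-D resampling.
sub = np.linspace(1.0, side - 2.0, side)      # a mildly contracted sampling grid
cx, cy = np.meshgrid(sub, sub, indexing='ij')
idx = np.arange(len(centers))[:, None, None] * np.ones_like(cx)
coords = (idx,
          np.broadcast_to(cx, idx.shape),
          np.broadcast_to(cy, idx.shape))
resampled = ndimage.map_coordinates(stack, coords, order=1)

# Step 3: combine the per-spot results by summing over the stack axis.
combined = resampled.sum(axis=0)
print(combined.shape)  # (5, 5)
```

In the real code each resampled spot would be added at its own offset in final_image rather than summed in place, but the point stands: the 10,000 Python-level map_coordinates calls collapse into one native call over a stacked array.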
Hope this helps get you closer to your goals!