I am trying to track points on a retinal image while the retina moves. Currently I am using OpenCV's template matching: I identify the region around each point, then locate that region in the next image to find where the point has moved.
I have two retinal images:
In the first image, I arbitrarily chose 3 points that I want to detect:
I then track the points in subsequent images with the following code:
import cv2
import matplotlib.pyplot as plt

s = 50
folder = 'P2/'
num_images = 2
# desired tracking points (x, y), shifted by s//2 so each point
# ends up at the centre of its template patch
x1, y1 = 157 - s//2, 130 - s//2
x2, y2 = 182 - s//2, 59 - s//2
x3, y3 = 221 - s//2, 125 - s//2
# each template is an s x s (50x50) square with (x, y) as its top-left corner
template = cv2.imread(folder + '1.png', 0)
templates = [template[y1:y1+s, x1:x1+s], template[y3:y3+s, x3:x3+s], template[y2:y2+s, x2:x2+s]]
w, h = templates[0].shape[::-1]
# all 6 comparison methods OpenCV offers
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']

for i in range(1, num_images + 1):
    img = cv2.imread(folder + str(i) + '.png', 0)
    meth = 'cv2.TM_CCOEFF_NORMED'  # works the best in my tests
    method = eval(meth)
    x_pts = []
    y_pts = []
    for tem in templates:
        # apply template matching
        res = cv2.matchTemplate(img, tem, method)
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
        # if the method is TM_SQDIFF or TM_SQDIFF_NORMED, take the minimum
        if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
            top_left = min_loc
        else:
            top_left = max_loc
        bottom_right = (top_left[0] + w, top_left[1] + h)
        #cv2.rectangle(img, top_left, bottom_right, 255, 2)
        x = top_left[0] + s//2  # recover the patch centre
        y = top_left[1] + s//2
        x_pts += [x]
        y_pts += [y]
        #cv2.rectangle(img, (x, y), (x+1, y+1), (0, 255, 0), 2)  # new (x, y) point
        print(top_left)
    plt.plot(x_pts, y_pts, 'ro-')
    plt.imshow(img, cmap='gray')
    plt.title(calculate(x_pts, y_pts)), plt.xticks([]), plt.yticks([])  # calculate() is a helper defined elsewhere
    plt.suptitle(meth)
    plt.show()
This tracking method is accurate when the region around the point is unique. I would like to know whether there is a better way to track points in the image, for example when the point I am tracking lies in a blurry region that is hard to distinguish from its surroundings.