Dlib has a very convenient, fast and efficient object-detection routine, and I wanted to make a cool face-tracking example similar to the one here.
OpenCV is widely supported and has a fairly fast video-capture module (about a fifth of a second per snapshot, versus a second or more when calling some program that wakes the webcam and grabs a picture). I added this capture step to the face-detector Python example from Dlib.
If you display and process the OpenCV VideoCapture output directly, it looks strange, because OpenCV apparently stores images in BGR rather than RGB order. After adjusting for that it works, but slowly:
from __future__ import division
import sys

import dlib
from skimage import io

detector = dlib.get_frontal_face_detector()
win = dlib.image_window()

if len(sys.argv[1:]) == 0:
    from cv2 import VideoCapture
    from time import time

    cam = VideoCapture(0)  # set the port of the camera as before

    while True:
        start = time()
        retval, image = cam.read()  # returns a True boolean and the image if all goes right
        for row in image:
            for px in row:
                # rgb expected... but the array is bgr?
                r = px[2]
                px[2] = px[0]
                px[0] = r
        # import matplotlib.pyplot as plt
        # plt.imshow(image)
        # plt.show()
        print("readimage: " + str(time() - start))

        start = time()
        dets = detector(image, 1)
        print("your faces: %d" % len(dets))
        for i, d in enumerate(dets):
            print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
                i, d.left(), d.top(), d.right(), d.bottom()))
            print("from left: {}".format(((d.left() + d.right()) / 2) / len(image[0])))
            print("from top: {}".format(((d.top() + d.bottom()) / 2) / len(image)))
        print("process: " + str(time() - start))

        start = time()
        win.clear_overlay()
        win.set_image(image)
        win.add_overlay(dets)
        print("show: " + str(time() - start))
        # dlib.hit_enter_to_continue()

for f in sys.argv[1:]:
    print("Processing file: {}".format(f))
    img = io.imread(f)
    # The 1 in the second argument indicates that we should upsample the image
    # 1 time. This will make everything bigger and allow us to detect more
    # faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))
    for i, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            i, d.left(), d.top(), d.right(), d.bottom()))

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    dlib.hit_enter_to_continue()

# Finally, if you really want to you can ask the detector to tell you the score
# for each detection. The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched. This can be
# used to broadly identify faces in different orientations.
if len(sys.argv[1:]) > 0:
    img = io.imread(sys.argv[1])
    dets, scores, idx = detector.run(img, 1)
    for i, d in enumerate(dets):
        print("Detection {}, score: {}, face_type:{}".format(
            d, scores[i], idx[i]))
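As an aside on the colour fix: the per-pixel swap in the loop above can be done in a single vectorized NumPy step, which is typically orders of magnitude faster in Python. A minimal sketch, using a random array as a stand-in for a webcam frame:

```python
import numpy as np

# Stand-in for a webcam frame: OpenCV returns an H x W x 3 uint8 array in BGR order.
frame_bgr = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Reversing the last axis swaps the B and R channels in one vectorized step,
# instead of looping over every pixel in Python.
frame_rgb = frame_bgr[:, :, ::-1]

# Same result as the per-pixel loop: channels 0 and 2 are exchanged.
assert np.array_equal(frame_rgb[..., 0], frame_bgr[..., 2])
assert np.array_equal(frame_rgb[..., 1], frame_bgr[..., 1])
assert np.array_equal(frame_rgb[..., 2], frame_bgr[..., 0])
```

Note that the slice returns a view, not a copy; if the consumer needs a contiguous array, wrap it in np.ascontiguousarray(frame_rgb).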
From the timing output of this program, it seems that processing and grabbing a picture each take about a fifth of a second, so you would think it should show one or two updates per second. Yet if you raise your hand, it only appears in the webcam view about five seconds later!
Is there some kind of internal cache preventing it from getting the latest webcam image? Can I adjust or multithread the webcam input process to fix the lag? This is on an Intel i5 with 16 GB of RAM.
Update
According to this, it suggests that read grabs the video frame by frame. That would explain why it keeps grabbing the next frame and the next, until it finally catches up with all the frames that were grabbed while it was processing. I wonder if there is an option to set the frame rate, or to make it drop frames and just take the current picture of the face in the webcam when read is called? http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera
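One common workaround for this kind of frame buffering (a sketch of the general pattern, not something from the linked docs) is a reader thread that continuously calls read() and keeps only the newest frame, so the slow processing loop always sees the latest image and buffered frames are dropped. The FakeCamera class below is a hypothetical stand-in for cv2.VideoCapture; with OpenCV you would pass the real camera object instead:

```python
import threading
import time

class FakeCamera:
    """Hypothetical stand-in for cv2.VideoCapture: read() returns a frame counter."""
    def __init__(self):
        self._n = 0
    def read(self):
        time.sleep(0.01)  # pretend grabbing a frame takes ~10 ms
        self._n += 1
        return True, self._n

class LatestFrameReader:
    """Grabs frames in a background thread, keeping only the newest one."""
    def __init__(self, cam):
        self._cam = cam
        self._lock = threading.Lock()
        self._frame = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self._cam.read()
            if ok:
                with self._lock:
                    self._frame = frame  # older frames are simply dropped

    def read(self):
        """Return the most recent frame, skipping everything buffered in between."""
        with self._lock:
            return self._frame

    def stop(self):
        self._running = False
        self._thread.join()

reader = LatestFrameReader(FakeCamera())
time.sleep(0.2)              # simulate slow detection: many frames arrive meanwhile
first = reader.read()
time.sleep(0.2)              # another slow processing step
second = reader.read()
reader.stop()
assert second > first        # we jumped ahead instead of replaying stale frames
```

In the face-detection loop above, replacing cam.read() with reader.read() means each detector call processes a current frame rather than one queued several seconds earlier.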
I feel your pain. I actually worked with that webcam script recently (many iterations; heavily edited). I think I got it working pretty well. So you can see what I did, I created a GitHub gist with the details (code; an HTML readme; sample output):
https://gist.github.com/victoriastuart/8092a3dd7e97ab57ede7614251bf5cbd
Maybe the problem is that there is a threshold. As stated here,

should be changed to

^{pr2}$

to get around the threshold. That works for me, but at the same time there is another problem: the frames are processed too quickly.
If you want to display a frame read with OpenCV, you can do it through the cv2.imshow() function without changing the colour order. On the other hand, if you still want to show the picture with matplotlib, then you cannot avoid converting the colour order first. That is the only thing I can help you with for now =)