How to overlay images and text/shapes with OpenCV?


I am having a problem with transparent overlays in OpenCV. This is my code so far:

import cv2

cap = cv2.VideoCapture('Sample_Vid.mp4')
stat_overlay = cv2.imread('overlay.png')
fps = 21

if cap.isOpened():
    while cap.isOpened():
        ret, frame = cap.read()
        overlay = frame.copy()
        output = frame.copy()

        cv2.rectangle(overlay, (0, 0), (730, 50), (0, 0, 0), -1)
        cv2.putText(overlay, str(fps), (1230, 20), cv2.FONT_HERSHEY_DUPLEX, 0.5, (255, 255, 255), 1)
        cv2.addWeighted(overlay, 1.0, output, 0, 0, output)

        cv2.imshow('frame', output)

So I have a frame with a rectangle drawn on it that displays the FPS. Now I want to overlay my stat_overlay image first, and then draw the text and shapes on top of it, because those are dynamic. In every explanation I have read, I am told to do it with cv2.addWeighted(stat_overlay, 1.0, output, 0, 0, output), but I already have a similar call in use for the dynamic overlay, and when I insert a second one above it, it does not work. Is there any way to solve this?

Thanks in advance for your answers.


1 Answer

The command you are using, cv2.addWeighted(overlay, 1.0, output, 0, 0, output), passes alpha = 1.0 and beta = 0, so there is no transparency at all.
You are basically copying the overlay image onto the output image.

From the cv2.addWeighted documentation:

cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]])

src1 – first input array.
alpha – weight of the first array elements.
src2 – second input array of the same size and channel number as src1.
beta – weight of the second array elements.
gamma – scalar added to each sum.
dst – output array that has the same size and number of channels as the input arrays.
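For comparison, here is a minimal sketch (with alpha = 0.6 and beta = 0.4 chosen arbitrarily, and a plain gray image standing in for a video frame) of how addWeighted produces a genuinely semi-transparent rectangle when alpha is smaller than 1.0:

import cv2
import numpy as np

frame = np.full((480, 640, 3), 200, np.uint8)   # stand-in for a video frame

overlay = frame.copy()
output = frame.copy()
cv2.rectangle(overlay, (0, 0), (230, 50), (0, 0, 0), -1)

# 60% overlay + 40% original frame -> semi-transparent black rectangle
cv2.addWeighted(overlay, 0.6, output, 0.4, 0, output)

cv2.imshow('blended', output)
cv2.waitKey(0)
cv2.destroyAllWindows()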

You can also overlay the text using the following code:

output = frame.copy()
cv2.rectangle(output, (0, 0), (730, 50), (0, 0, 0), -1)
cv2.putText(output, str(fps), (1230, 20), cv2.FONT_HERSHEY_DUPLEX, 0.5, (255, 255, 255), 1)  # fps must be converted to a string

For overlaying stat_overlay, you can use a solution similar to the Alpha blending code sample.

I don't know whether 'overlay.png' is in RGB or RGBA format.
If the image has an alpha channel, you can use it as the transparency plane.
If the image is RGB, you can create the required alpha plane yourself (see the sketch below).
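A minimal sketch of building such an alpha plane, under the assumption that pure black pixels in 'overlay.png' should be treated as fully transparent and everything else as fully opaque:

import cv2

stat_overlay = cv2.imread('overlay.png')                        # BGR image without an alpha channel

# Assumption: black pixels are background, any non-black pixel belongs to the overlay
gray = cv2.cvtColor(stat_overlay, cv2.COLOR_BGR2GRAY)
_, stat_alpha = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)  # 255 wherever the overlay has content

# Expand to 3 channels and normalize to [0, 1] so it can be used for alpha blending
stat_alpha = cv2.cvtColor(stat_alpha, cv2.COLOR_GRAY2BGR).astype(float) / 255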

If 'overlay.png' is a small image (like a logo), you may not need any of this; you can simply "place" the small image on top of the output image, as sketched below.
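A minimal sketch of that approach, assuming 'overlay.png' is a small, fully opaque BGR logo and the placement position (10, 10) is arbitrary:

import cv2

cap = cv2.VideoCapture('Sample_Vid.mp4')
logo = cv2.imread('overlay.png')      # small BGR logo (assumed fully opaque)
h, w = logo.shape[:2]
x, y = 10, 10                         # top-left corner of the region that receives the logo

ret, frame = cap.read()
if ret:
    output = frame.copy()
    output[y:y+h, x:x+w] = logo       # copy the logo pixels into the region of interest
    cv2.imshow('frame', output)
    cv2.waitKey(0)
cv2.destroyAllWindows()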


I created a self-contained code sample based on the alpha blending example.
To make the code self-contained, it uses:

  • ffmpeg-python to generate a synthetic video (for testing)
  • a drawn red circle in place of 'overlay.png'

Here is the code:

import ffmpeg
import cv2
import numpy as np

in_filename = 'Sample_Vid.mp4'  # Input file name for testing (a synthetic test video is generated below)

## Build synthetic video, for testing:
################################################
# Equivalent command line: ffmpeg -y -f lavfi -i testsrc=size=640x480:rate=1 -c:v libx264 -crf 23 -t 5 Sample_Vid.mp4

width, height = 640, 480

(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), f='lavfi')
    .output(in_filename, vcodec='libx264', crf=23, t=5)
    .overwrite_output()
    .run()
)
################################################


cap = cv2.VideoCapture('Sample_Vid.mp4')
#stat_overlay = cv2.imread('overlay.png')

# Create an image with a red circle, instead of reading a file.
# The image is created as RGBA (the 4th plane is the transparency).
stat_overlay = np.zeros((height, width, 4), np.uint8)
cv2.circle(stat_overlay, (320, 240), 80, (0, 0, 255, 255), thickness=20) # Draw red circle (with alpha = 255) 

# https://www.learnopencv.com/alpha-blending-using-opencv-cpp-python/
stat_alpha = stat_overlay[:, :, 3] # Take the 4th plane as the alpha channel
stat_alpha = cv2.cvtColor(stat_alpha, cv2.COLOR_GRAY2BGR) # Duplicate the alpha channel 3 times (to match the output dimensions)

# https://www.learnopencv.com/alpha-blending-using-opencv-cpp-python/
# Normalize the alpha mask to keep intensity between 0 and 1
stat_alpha = stat_alpha.astype(float) / 255

stat_overlay = stat_overlay[:, :, 0:3] # Keep only the BGR color channels

fps = 21


if cap.isOpened():
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:            
            output = frame.copy()

            # https://www.learnopencv.com/alpha-blending-using-opencv-cpp-python/
            # Alpha blending:
            foreground = stat_overlay.astype(float)
            background = output.astype(float)

            # Multiply the foreground with the alpha matte
            foreground = cv2.multiply(stat_alpha, foreground)

            # Multiply the background with ( 1 - alpha )
            background = cv2.multiply(1.0 - stat_alpha, background)

            # Add the masked foreground and background.
            output = cv2.add(foreground, background).astype(np.uint8)

            cv2.rectangle(output, (0, 0), (230, 50), (0, 0, 0), -1)
            cv2.putText(output, str(fps), (123, 20), cv2.FONT_HERSHEY_DUPLEX, 0.5, (255, 255, 255), 1)

            cv2.imshow('frame', output)
            cv2.waitKey(1000)

        else:
            break

cv2.destroyAllWindows()

Result (last frame):
(screenshot: the synthetic test frame with the red circle overlay and the FPS text in the top-left corner)
