Sending a captured image from a Python server to a JavaScript client

1 vote
1 answer
7027 views
Asked 2025-04-17 23:15

I am trying to build a server on a Raspberry Pi that streams live image data to a browser. The server side is written in Python with Tornado, and the client side in HTML and JavaScript; the two communicate over a WebSocket. (I am still new to JavaScript.)

Here is the code:

Server side:

import tornado.web
import tornado.httpserver
from tornado.ioloop import IOLoop
from tornado.websocket import WebSocketHandler
import cv2.cv as cv  # legacy OpenCV 'cv' bindings (OpenCV 2.x)

class WSHandler(WebSocketHandler):
    def initialize(self, camera):
        self.camera = camera
        cv.SetCaptureProperty(self.camera.capture, cv.CV_CAP_PROP_FRAME_WIDTH, 480)
        cv.SetCaptureProperty(self.camera.capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 360)

    def open(self):
        print("connection opened")
        while True:
            self.loop()

    def loop(self):
        img = self.camera.takeImage()
        self.write_message(img, binary=True)

class Camera():
    def __init__(self):
        self.capture = cv.CaptureFromCAM(0)

    def takeImage(self):
        img = cv.QueryFrame(self.capture)
        img = cv.EncodeImage(".jpg", img).tostring()
        return img

def main():
    camera = Camera()
    app = tornado.web.Application([
        (r"/camera", WSHandler, dict(camera=camera)),
    ])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(8080)
    IOLoop.instance().start()

if __name__ == "__main__":
    main()

Client side:

JavaScript (client.js):

var canvas = document.getElementById("liveCanvas");
var context = canvas.getContext("2d");

var ws = new WebSocket("ws://localhost:8080/camera");
ws.onopen = function(){
        console.log("connection was established");
};
ws.onmessage = function(evt){   
    context.drawImage(evt.data,0,0);
};

HTML (index.html):

<html>
 <head>
  <title>livecamera</title>
 </head>
 <body>
  <canvas id="liveCanvas" width="480" height="360"></canvas>
  <script type="text/javascript" src="./client.js"></script>
 </body>
</html>

When I open 'index.html' while the server is running, I get the following error:

Uncaught TypeError: Failed to execute 'drawImage' on 'CanvasRenderingContext2D': No function was found that matched the signature provided. 

I suspect the error occurs because the server is sending the data in the wrong format.

My question is: what data format should be used? How should the server send the data, and how should the client receive it?

1 Answer

2

I found a similar problem between C++ and JavaScript; see this link: Display image from Blob using JavaScript and WebSocket.

The server side stays the same as before.

On the client side, 'ws.binaryType' needs to be set to 'arraybuffer' so that the binary message arrives as an ArrayBuffer rather than a Blob. The received bytes then have to be base64-encoded, using the 'encode' function from the linked answer.

Here is the code:

JavaScript (client.js):

var img = document.getElementById("liveImg");
var arrayBuffer;

var ws = new WebSocket("ws://localhost:8080/camera");
ws.binaryType = 'arraybuffer';

ws.onopen = function(){
    console.log("connection was established");
};
ws.onmessage = function(evt){
    arrayBuffer = evt.data;
    img.src = "data:image/jpeg;base64," + encode(new Uint8Array(arrayBuffer));
};

function encode (input) {
    var keyStr = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
    var output = "";
    var chr1, chr2, chr3, enc1, enc2, enc3, enc4;
    var i = 0;

    while (i < input.length) {
        chr1 = input[i++];
        chr2 = i < input.length ? input[i++] : Number.NaN; // Not sure if the index
        chr3 = i < input.length ? input[i++] : Number.NaN; // checks are needed here

        enc1 = chr1 >> 2;
        enc2 = ((chr1 & 3) << 4) | (chr2 >> 4);
        enc3 = ((chr2 & 15) << 2) | (chr3 >> 6);
        enc4 = chr3 & 63;

        if (isNaN(chr2)) {
            enc3 = enc4 = 64;
        } else if (isNaN(chr3)) {
            enc4 = 64;
        }
        output += keyStr.charAt(enc1) + keyStr.charAt(enc2) +
                  keyStr.charAt(enc3) + keyStr.charAt(enc4);
    }
    return output;
}
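The manual 'encode' routine above is plain standard base64 written out by hand. As a sanity check, the same 3-bytes-to-4-characters packing can be reproduced in Python and compared against the stdlib `base64` module (this is a sketch for verification, not part of the original answer; the `encode` name mirrors the JS function):

```python
import base64

# Same alphabet the JS function uses; index 64 is the '=' padding character.
KEY = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="

def encode(data: bytes) -> str:
    """Pack each 3-byte group into four 6-bit indices, padding with '='."""
    out = []
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3]
        b = chunk + b"\x00" * (3 - len(chunk))  # zero-fill a short tail
        n = (b[0] << 16) | (b[1] << 8) | b[2]
        chars = [KEY[(n >> 18) & 63], KEY[(n >> 12) & 63],
                 KEY[(n >> 6) & 63], KEY[n & 63]]
        if len(chunk) < 3:
            chars[3] = "="  # only 1 or 2 input bytes in this group
        if len(chunk) < 2:
            chars[2] = "="  # only 1 input byte in this group
        out.append("".join(chars))
    return "".join(out)

# Agrees with the standard library for any input length:
for sample in (b"", b"A", b"AB", b"ABC", bytes(range(256))):
    assert encode(sample) == base64.b64encode(sample).decode("ascii")
```

If this equivalence holds, the JS function can be trusted to produce a valid `data:image/jpeg;base64,` payload for the browser.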

HTML (index.html):

I replaced the canvas tag with an img tag.

<html>
 <head>
  <title>livecamera</title>
 </head>
 <body>
  <img id="liveImg" width="480" height="360">
  <script type="text/javascript" src="./client.js"></script>
 </body>
</html>
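A variant worth noting: instead of sending raw binary frames and base64-encoding them in JavaScript, the server could base64-encode each JPEG itself and send a text frame that the client assigns to `img.src` directly. A minimal sketch of such a server-side helper (`frame_to_data_url` is a hypothetical name, not from the original code):

```python
import base64

def frame_to_data_url(jpeg_bytes: bytes) -> str:
    """Wrap raw JPEG bytes in a data: URL the browser can use as img.src."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return "data:image/jpeg;base64," + b64

# Server side, instead of self.write_message(img, binary=True):
#     self.write_message(frame_to_data_url(img))
# Client side, ws.onmessage then reduces to:
#     img.src = evt.data;
```

This trades a little extra bandwidth (base64 inflates the payload by about a third) for a much simpler client, and removes the need for the manual 'encode' function entirely.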
