ConnectionResetError: An existing connection was forcibly closed by the remote host

Posted 2024-05-13 10:22:19


I'm writing a script to download a set of files. I had this working fine; now I'm trying to add a dynamically printed download progress indicator.

For small downloads of around 5 MB (these are .mp4 files, by the way), the progress indicator works well and the file is closed successfully, producing a complete, playable .mp4. For larger files, 250 MB and up, it fails and I get the following error:

(screenshot of the traceback, ending in: ConnectionResetError: An existing connection was forcibly closed by the remote host)

Here is my code:

import urllib.request
import shutil
import os
import sys
import io

script_dir = os.path.dirname('C:/Users/Kenny/Desktop/')
rel_path = 'stupid_folder/video.mp4'
abs_file_path = os.path.join(script_dir, rel_path)
url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
# Download the file from `url` and save it locally under `abs_file_path`:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    resp = urllib.request.urlopen(url)
    length = resp.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up

    # print(length, blocksize)

    buf = io.BytesIO()
    size = 0
    while True:
        buf1 = resp.read(blocksize)
        if not buf1:
            break
        buf.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')#print('\rDownloading: {:.1f}%'.format(size/length*100), end='')
    print()

    shutil.copyfileobj(response, out_file)

This works well for small files, but for large files I get the error. However, if I comment out the progress-indicator code, the larger files download without error:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    # eventID = 123456
    # 
    # resp = urllib.request.urlopen(url)
    # length = resp.getheader('content-length')
    # if length:
    #     length = int(length)
    #     blocksize = max(4096, length//100)
    # else:
    #     blocksize = 1000000 # just made something up
    # 
    # # print(length, blocksize)
    # 
    # buf = io.BytesIO()
    # size = 0
    # while True:
    #     buf1 = resp.read(blocksize)
    #     if not buf1:
    #         break
    #     buf.write(buf1)
    #     size += len(buf1)
    #     if length:
    #         print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')#print('\rDownloading: {:.1f}%'.format(size/length*100), end='')
    # print()

    shutil.copyfileobj(response, out_file)

Does anyone have any ideas? This is the last piece of my project, and I would really like to be able to see the progress. Again, this is Python 3.5. Thanks for any help!


Tags: file, path, import, url, size, if, request, urllib
1 Answer

#1 · Posted 2024-05-13 10:22:19

You are opening your URL twice, once as response and once as resp. The progress-bar loop consumes the data from resp, so by the time copyfileobj copies from response the data is empty (that may not be exactly accurate, since it works for small files, but you are downloading everything twice here, and that is the likely root of the problem: while the progress loop drains resp, the response connection sits idle, and on a large file the remote host eventually resets that idle connection, which matches the error you see).
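Boiled down, the question's code is doing the following (a condensed sketch; the output file name is shortened here for illustration):

import shutil
import urllib.request

url = 'https://archive.org/download/SF145/SF145_512kb.mp4'

with urllib.request.urlopen(url) as response, open('video.mp4', 'wb') as out_file:
    resp = urllib.request.urlopen(url)  # second, independent connection to the same URL
    while resp.read(4096):              # the progress loop drains this second connection...
        pass
    # ...while `response` sits idle the whole time; on a large file the
    # remote host has often reset it by the time copyfileobj reads it:
    shutil.copyfileobj(response, out_file)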

To get both the progress bar and a valid file, do the following:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    length = response.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)   # aim for roughly 100 progress updates
    else:
        blocksize = 1000000  # no Content-Length header; just made something up

    size = 0
    while True:
        buf1 = response.read(blocksize)
        if not buf1:                         # empty read means the download is done
            break
        out_file.write(buf1)                 # write each block straight to disk
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    print()

The code was simplified:

  • Only one urlopen, kept as response
  • No BytesIO buffer; each block is written directly to out_file
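As an alternative, the standard library can drive the progress callback for you: urllib.request.urlretrieve takes a reporthook that is called once per block with (block_num, block_size, total_size). A minimal sketch of the same idea (eventID and the output file name are carried over from the question for illustration):

import urllib.request

url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
eventID = 123456

def reporthook(block_num, block_size, total_size):
    # total_size is -1 when the server sends no Content-Length header
    if total_size > 0:
        percent = min(block_num * block_size / total_size * 100, 100)
        print('\r[{:.1f}%] Downloading: {}'.format(percent, eventID), end='')

urllib.request.urlretrieve(url, 'video.mp4', reporthook)
print()

Note that the documentation describes urlretrieve as a legacy interface that may become deprecated, so the explicit read/write loop above remains the more future-proof approach.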
