Python: limiting the number of threads

1 vote
2 answers
593 views
Asked 2025-04-16 00:33

As you surely know, multithreading lets me download files from the internet faster. But if I send a lot of requests to the same website, I risk getting blacklisted.

So could you help me implement something along the lines of: "I have a list of URLs. Download these files, but if 10 downloads are already in progress, wait until a slot frees up."

Any help would be appreciated. Thanks.

binoua

Here's the code I'm using (it doesn't work).

import threading
import urllib2
import Queue

# minimal placeholder for the project-specific exception this code raises
class OstDownloadException(Exception):
    pass

class PDBDownloader(threading.Thread):

    prefix = 'http://www.rcsb.org/pdb/files/'

    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.pdbid = None
        self.urlstr = ''
        self.content = ''

    def run(self):
        while True:
            self.pdbid = self.queue.get()
            self.urlstr = self.prefix + self.pdbid + '.pdb'
            print 'downloading', self.pdbid
            self.download()

            filename = '%s.pdb' % (self.pdbid,)
            f = open(filename, 'wt')
            f.write(self.content)
            f.close()

            self.queue.task_done()

    def download(self):
        try:
            f = urllib2.urlopen(self.urlstr)
        except urllib2.HTTPError, e:
            msg = 'HTTPError while downloading file %s at %s. '\
                    'Details: %s.' % (self.pdbid, self.urlstr, str(e))
            raise OstDownloadException(msg)
        except urllib2.URLError, e:
            msg = 'URLError while downloading file %s at %s. '\
                    'RCSB server unavailable.' % (self.pdbid, self.urlstr)
            raise OstDownloadException(msg)
        except Exception, e:
            raise OstDownloadException(str(e))
        else:
            self.content = f.read()

if __name__ == '__main__':

    pdblist = ['1BTA', '3EAM', '1EGJ', '2BV9', '2X6A']

    queue = Queue.Queue()

    # start one worker thread per PDB ID (cap this number to limit concurrency)
    for i in xrange(len(pdblist)):
        pdb = PDBDownloader(queue)
        pdb.setDaemon(True)
        pdb.start()

    while pdblist:
        pdbid = pdblist.pop()
        queue.put(pdbid)

    queue.join()

2 Answers

0

Use a thread pool with a shared list of URLs. Each thread pops a URL from the list and downloads it, until no URLs are left. pop() on a list is atomic in CPython, so multiple threads can call it concurrently without stepping on each other.

while True:
    try:
        url = url_list.pop()
        # download URL here
    except IndexError:
        break
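A fuller sketch of that idea, assuming ten worker threads and a hypothetical download_url() helper that fetches and saves a single URL:

import threading

def worker(url_list):
    # keep pulling URLs until the shared list is exhausted
    while True:
        try:
            url = url_list.pop()     # atomic in CPython
        except IndexError:
            break
        download_url(url)            # hypothetical helper: fetch and save one URL

def download_all(urls, num_threads=10):
    # at most num_threads downloads run at any given time
    threads = [threading.Thread(target=worker, args=(urls,))
               for i in xrange(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()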
4

Using threads does not let you "download files from the internet faster". You have one network card and one internet connection, so that claim is simply not true.

Threads are used for waiting, and you can't wait faster.

You can go just as fast, or even faster, with a single thread, as long as you don't wait for one file's response before starting to download the next. In other words, use asynchronous, non-blocking network programming.

Here's a complete script that uses twisted.internet.task.coiterate to run several downloads at once, without any threads, while respecting the pool size (I use 2 simultaneous downloads for the demo, but you can change the number):

from twisted.internet import defer, task, reactor
from twisted.web import client
from twisted.python import log

@defer.inlineCallbacks
def deferMap(job, dataSource, size=1):
    successes = []
    failures = []

    def _cbGather(result, dataUnit, succeeded):
        """This will be called when any download finishes"""
        if succeeded:
            # you could save the file to disk here
            successes.append((dataUnit, result))
        else:
            failures.append((dataUnit, result))

    # @apply calls work() once, so all the coiterate tasks below share this
    # single generator and never start the same download twice
    @apply
    def work():
        for dataUnit in dataSource:
            d = job(dataUnit).addCallbacks(_cbGather, _cbGather,
                callbackArgs=(dataUnit, True),  errbackArgs=(dataUnit, False))
            yield d

    yield defer.DeferredList([task.coiterate(work) for i in xrange(size)])
    defer.returnValue((successes, failures))

def printResults(result):
    successes, failures = result
    print "*** Got %d pages total:" % (len(successes),)
    for url, page in successes:
        print '  * %s -> %d bytes' % (url, len(page))
    if failures:
        print "*** %d pages failed download:" % (len(failures),)
        for url, failure in failures:
            print '  * %s -> %s' % (url, failure.getErrorMessage())

if __name__ == '__main__':
    import sys
    log.startLogging(sys.stdout)
    urls = ['http://twistedmatrix.com',
            'XXX',
            'http://debian.org',
            'http://python.org',
            'http://python.org/foo',
            'https://launchpad.net',
            'noway.com',
            'somedata',
        ]
    pool = deferMap(client.getPage, urls, size=2) # download 2 at once
    pool.addCallback(printResults)
    pool.addErrback(log.err).addCallback(lambda ign: reactor.stop())
    reactor.run()

Note that I deliberately included a few broken links, so we can see some failures in the results:

...
2010-06-29 08:18:04-0300 [-] *** Got 4 pages total:
2010-06-29 08:18:04-0300 [-]   * http://twistedmatrix.com -> 16992 bytes
2010-06-29 08:18:04-0300 [-]   * http://python.org -> 17207 bytes
2010-06-29 08:18:04-0300 [-]   * http://debian.org -> 13820 bytes
2010-06-29 08:18:04-0300 [-]   * https://launchpad.net -> 18511 bytes
2010-06-29 08:18:04-0300 [-] *** 4 pages failed download:
2010-06-29 08:18:04-0300 [-]   * XXX -> Connection was refused by other side: 111: Connection refused.
2010-06-29 08:18:04-0300 [-]   * http://python.org/foo -> 404 Not Found
2010-06-29 08:18:04-0300 [-]   * noway.com -> Connection was refused by other side: 111: Connection refused.
2010-06-29 08:18:04-0300 [-]   * somedata -> Connection was refused by other side: 111: Connection refused.
...
