How do I use multiprocessing to loop over a large list of URLs?

Posted 2024-04-25 18:22:21


Problem: check a list of 1,000+ URLs and get each URL's return code (status code).

I have a script that does this, but it is slow.

I think there has to be a better, more Pythonic (prettier) way to do this, where I spawn 10 to 20 threads to check the URLs and collect the responses, e.g.:

200 -> www.yahoo.com
404 -> www.badurl.com
...

Input file: Url10.txt

…

import requests

with open("url10.txt") as f:
    urls = f.read().splitlines()

print(urls)
for url in urls:
    url = 'http://' + url  # Add http:// to each url (there has to be a better way to do this)
    try:
        resp = requests.get(url, timeout=1)
        print(len(resp.content), '->', resp.status_code, '->', resp.url)
    except Exception as e:
        print("Error", url)

The challenge: speed this up with multiprocessing.


Multiprocessing

But this doesn't work. I get the following error (note: I'm not sure I've implemented it correctly):

AttributeError: Can't get attribute 'checkurl' on <module '__main__' (built-in)>

import requests
from multiprocessing import Pool

with open("url10.txt") as f:
    urls = f.read().splitlines()

def checkurlconnection(url):

    for url in urls:
        url =  'http://'+url
        try:
            resp = requests.get(url, timeout=1)
            print(len(resp.content), '->', resp.status_code, '->', resp.url)
        except Exception as e:
            print("Error", url)

if __name__ == "__main__":
    p = Pool(processes=4)
    result = p.map(checkurlconnection, urls)

2 answers

In the checkurlconnection function the parameter must be urls, not url. Otherwise, inside the for loop, urls points at the global variable, which is not what you want.

import requests
from multiprocessing import Pool

with open("url10.txt") as f:
    urls = f.read().splitlines()

def checkurlconnection(urls):
    for url in urls:
        url =  'http://'+url
        try:
            resp = requests.get(url, timeout=1)
            print(len(resp.content), '->', resp.status_code, '->', resp.url)
        except Exception as e:
            print("Error", url)

if __name__ == "__main__":
    p = Pool(processes=4)
    result = p.map(checkurlconnection, urls)
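
Note that Pool.map calls the mapped function once for every element of urls, so each call receives a single URL string. Here is a sketch of a per-URL version (check_one is a hypothetical helper name, assuming one request per call is what you want):

import requests
from multiprocessing import Pool

def check_one(url):
    # Pool.map calls this once per list element, so url is a single string here
    url = 'http://' + url
    try:
        resp = requests.get(url, timeout=1)
        return resp.status_code, url
    except requests.RequestException:
        return None, url

if __name__ == "__main__":
    with open("url10.txt") as f:
        urls = f.read().splitlines()
    with Pool(processes=4) as p:
        results = p.map(check_one, urls)
    for status, url in results:
        print(status, '->', url)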

In this case your task is I/O-bound, not CPU-bound: the websites take far longer to reply than your CPU takes to loop through the script once (excluding the TCP request). That means you will not get any speedup by running this task in parallel processes (which is what multiprocessing does). What you want is multithreading. The way to get it is with the sparsely documented, and perhaps poorly named, multiprocessing.dummy:

import requests
from multiprocessing.dummy import Pool as ThreadPool 

urls = ['https://www.python.org',
        'https://www.python.org/about/']

def get_status(url):
    r = requests.get(url)
    return r.status_code

if __name__ == "__main__":
    pool = ThreadPool(4)                   # Make the Pool of worker threads
    results = pool.map(get_status, urls)   # Open the urls in their own threads
    pool.close()                           # Close the pool and wait for the work to finish
    pool.join()

See here for worked examples of multiprocessing vs. multithreading in Python.
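
To apply that to the original task, here is a sketch (assuming url10.txt holds bare hostnames, one per line) that uses the same thread-pool approach but collects (status, url) pairs instead of printing inside the workers:

import requests
from multiprocessing.dummy import Pool as ThreadPool

def get_status(url):
    # Return a (status, url) pair for one URL; failures are reported as "Error"
    try:
        resp = requests.get(url, timeout=1)
        return resp.status_code, resp.url
    except requests.RequestException:
        return "Error", url

if __name__ == "__main__":
    with open("url10.txt") as f:
        urls = ['http://' + line.strip() for line in f if line.strip()]
    pool = ThreadPool(20)                  # 10-20 worker threads, as the question suggests
    results = pool.map(get_status, urls)
    pool.close()
    pool.join()
    for status, url in results:
        print(status, '->', url)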
