Scraping web pages with Selenium takes too long

Posted 2024-04-27 05:14:14


I want to scrape a website and its subpages, but it takes far too long. How can I optimize the requests, or is there an alternative solution?

Below is the code I'm using. Just loading the Google homepage takes 10 seconds, so if I feed it 280 links it clearly won't scale.

from selenium import webdriver
import time

# Prepare the options for the Chrome driver: run without a visible window.
options = webdriver.ChromeOptions()
options.add_argument('--headless')

# Start the Chrome browser.
browser = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver", chrome_options=options)

# Time a single page load.
start = time.time()
browser.get('http://www.google.com/xhtml')
print(time.time() - start)
browser.quit()


3 Answers

Use the Python requests and BeautifulSoup modules.

import requests
from bs4 import BeautifulSoup

# Letter A lives at the base URL; B-Z each have their own page.
url = "https://tajinequiparle.com/dictionnaire-francais-arabe-marocain/"
url1 = "https://tajinequiparle.com/dictionnaire-francais-arabe-marocain/{}/"

req = requests.get(url, verify=False)
soup = BeautifulSoup(req.text, 'html.parser')
print("Letters : A")
print([item['href'] for item in soup.select('.columns-list a[href]')])

letters = ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
           'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']

for letter in letters:
    req = requests.get(url1.format(letter), verify=False)
    soup = BeautifulSoup(req.text, 'html.parser')
    print('Letters : ' + letter)
    print([item['href'] for item in soup.select('.columns-list a[href]')])
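One follow-up note on this answer: all 26 requests hit the same host, so reusing a single requests.Session keeps the underlying connection alive instead of reconnecting per letter, which is usually noticeably faster. A minimal sketch under that assumption (the print_links helper is mine, not from the answer):

import string

import requests
from bs4 import BeautifulSoup
from urllib3.exceptions import InsecureRequestWarning

# verify=False disables certificate checking, so silence the per-request warning.
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

base_url = "https://tajinequiparle.com/dictionnaire-francais-arabe-marocain/"
letter_url = "https://tajinequiparle.com/dictionnaire-francais-arabe-marocain/{}/"

def print_links(session, url, letter):
    # Fetch one letter page and print the links from its columns list.
    req = session.get(url)
    soup = BeautifulSoup(req.text, "html.parser")
    print("Letters :", letter)
    print([item["href"] for item in soup.select(".columns-list a[href]")])

# A Session reuses the TCP/TLS connection across requests to the same host.
with requests.Session() as session:
    session.verify = False
    print_links(session, base_url, "A")        # letter A lives at the base URL
    for letter in string.ascii_uppercase[1:]:  # 'B' through 'Z'
        print_links(session, letter_url.format(letter), letter)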

Try using urllib like this:

import time
import urllib.request

start = time.time()
page = urllib.request.urlopen("https://google.com/xhtml")
print(time.time() - start)

That said, the timing also depends on the quality of your connection.
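Because connection quality fluctuates, a single measurement can mislead; a minimal sketch that averages over a few runs (the URL and run count are arbitrary choices, not from the answer):

import time
import urllib.request

URL = "https://google.com/xhtml"
RUNS = 5  # arbitrary; more runs smooth out network jitter

total = 0.0
for _ in range(RUNS):
    start = time.time()
    with urllib.request.urlopen(URL) as page:
        page.read()  # actually download the body, not just the headers
    total += time.time() - start

print(f"average over {RUNS} runs: {total / RUNS:.3f}s")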

You can use this script to speed things up; a multi-threaded crawler outperforms any sequential one:

https://edmundmartin.com/multi-threaded-crawler-in-python/

After that, you have to change the code:

# Modified run_scraper() from the crawler class in the linked article;
# it assumes `import pandas as pd` and `from queue import Empty` at the top.
def run_scraper(self):
    with open("francais-arabe-marocain.csv", 'a') as file:
        file.write("url\n")  # CSV header
        for i in range(50000):
            try:
                target_url = self.to_crawl.get(timeout=600)
                if target_url not in self.scraped_pages and "francais-arabe-marocain" in target_url:
                    self.scraped_pages.add(target_url)
                    job = self.pool.submit(self.scrape_page, target_url)
                    job.add_done_callback(self.post_scrape_callback)
                    # Append each matching URL to the CSV as we go.
                    df = pd.DataFrame([{'url': target_url}])
                    df.to_csv(file, index=False, header=False)
                    print(target_url)
            except Empty:
                # Queue drained: no new URL within the timeout, stop crawling.
                return
            except Exception as e:
                print(e)
                continue

This saves a URL to the CSV file whenever it contains "francais-arabe-marocain". [CSV file screenshot]

After that, you can read the CSV line by line in a loop and scrape each URL the same way, as sketched below.
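A minimal, self-contained sketch of that last step, assuming the CSV produced above (francais-arabe-marocain.csv with a single url column) and using concurrent.futures.ThreadPoolExecutor rather than the crawler class from the linked article (the fetch helper and worker count are illustrative):

import csv
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

def fetch(url):
    # One worker per URL; returns the URL and the size of the fetched page.
    resp = requests.get(url, timeout=30)
    return url, len(resp.text)

# Read the URLs collected by the crawler.
with open("francais-arabe-marocain.csv", newline="") as f:
    urls = [row["url"] for row in csv.DictReader(f)]

# A small pool of threads overlaps the network waits of many requests.
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(fetch, url) for url in urls]
    for future in as_completed(futures):
        try:
            url, size = future.result()
            print(url, size)
        except Exception as e:
            print("failed:", e)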
