Scrapy pagination timing


So I have set up a spider, very similar to the example in the Scrapy documentation.

I want the spider to scrape all of the quotes on a page before moving on to the next page. I also want it to parse only one quote per second, so if there are 20 quotes on a page, it should take 20 seconds to scrape them and then 1 more second to move to the next page.

So far, my current implementation iterates through each page before actually getting the quote information.

import scrapy

class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author+a::attr(href)').extract():
            yield scrapy.Request(response.urljoin(href),
                                 callback=self.parse_author)

        # follow pagination links
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }
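
For reference, assuming this spider lives in a standard Scrapy project (created with scrapy startproject), it would be run with something like:

scrapy crawl author -o authors.json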

Here is my basic settings.py file:

ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 2
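
On the one-request-per-second goal: with CONCURRENT_REQUESTS = 1, the crawl rate is governed by DOWNLOAD_DELAY, so the settings above give roughly one request every 2 seconds on average. A minimal sketch that would approximate exactly one request per second (note that RANDOMIZE_DOWNLOAD_DELAY defaults to True, which varies the wait between 0.5x and 1.5x of DOWNLOAD_DELAY):

ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 1
RANDOMIZE_DOWNLOAD_DELAY = False  # wait exactly 1 second between requests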

1 Answer

You can control when the scrapy.Requests are scheduled by controlling when you yield them.

For example, you can create the next-page Request, but only yield it once all of the author Requests have finished and yielded their items.

Example:

import scrapy

# Tracks which author pages have been processed (False = still pending)
pending_authors = {}

class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Process pagination links
        next_page = response.css('li.next a::attr(href)').extract_first()
        next_page_request = None
        if next_page is not None:
            next_page = response.urljoin(next_page)
            # Create the Request object, but do not yield it yet
            next_page_request = scrapy.Request(next_page, callback=self.parse)

        # Request scraping of the authors, passing along a reference
        # to the Request for the next page
        for href in response.css('.author+a::attr(href)').extract():
            url = response.urljoin(href)
            pending_authors[url] = False  # mark this author as 'not processed'
            # dont_filter=True: the same author can appear on several pages,
            # and a dupe-filtered request would never be marked as processed
            yield scrapy.Request(url, callback=self.parse_author,
                                 dont_filter=True,
                                 meta={'next_page_request': next_page_request})

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

        # Mark this author as 'processed'
        pending_authors[response.url] = True

        # Once every pending author has been processed, follow the
        # pagination link that was created in parse()
        if all(pending_authors.values()):
            next_page_request = response.meta['next_page_request']
            if next_page_request is not None:
                yield next_page_request
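
A variant of the same idea, sketched below, replaces the module-level dict with a simple counter attribute on the spider (pending_count is just an illustrative name, not part of the Scrapy API). It relies on CONCURRENT_REQUESTS = 1 from the settings above, so that only one listing page's author requests are ever in flight at a time; dont_filter=True is needed here for the same reason as above, since a dupe-filtered author request would otherwise never decrement the counter:

import scrapy

class AuthorCounterSpider(scrapy.Spider):
    name = 'author_counter'

    start_urls = ['http://quotes.toscrape.com/']
    pending_count = 0  # illustrative: authors left on the current listing page

    def parse(self, response):
        next_page = response.css('li.next a::attr(href)').extract_first()
        next_page_request = None
        if next_page is not None:
            next_page_request = scrapy.Request(response.urljoin(next_page),
                                               callback=self.parse)

        author_links = response.css('.author+a::attr(href)').extract()
        self.pending_count = len(author_links)
        for href in author_links:
            yield scrapy.Request(response.urljoin(href),
                                 callback=self.parse_author,
                                 dont_filter=True,
                                 meta={'next_page_request': next_page_request})

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

        # One fewer author outstanding; follow pagination once all are done
        self.pending_count -= 1
        if self.pending_count == 0 and response.meta['next_page_request'] is not None:
            yield response.meta['next_page_request']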
