Scrapy: stop the previous parse function based on a condition

Published 2024-03-28 09:36:52


I have a very specific situation with a scraper I am developing. The first function, parse_posts_pages, iterates over all the pages of a given forum thread and, for each page, calls a second function, parse_posts.

def parse_posts_pages(self, response):
    thread_id = response.meta['thread_id']
    thread_link = response.meta['thread_link']
    thread_name = response.meta['thread_name']
    pages = 1  # fallback when the page-stats element is missing
    if len(response.xpath('//*[@id="postpagestats_above"]/text()').re(r'(\d+)')) == 3:
        posts_per_page = int(response.xpath('//*[@id="postpagestats_above"]/text()').re(r'(\d+)')[1])
        total_posts = int(response.xpath('//*[@id="postpagestats_above"]/text()').re(r'(\d+)')[2])
        if posts_per_page > 0:
            post_mod = total_posts % posts_per_page
            pages = total_posts // posts_per_page  # integer division; range() needs an int
            if post_mod > 0:
                pages += 1
        else:
            pages = 1

    for page in range(pages, 0, -1):
        cur_page = '' if page == 1 else '/page' + str(page)
        post_page_link = thread_link + cur_page
        # yield (not return) so every page gets its own request
        yield scrapy.Request(post_page_link, self.parse_posts,
                             meta={'thread_id': thread_id, 'thread_name': thread_name})


def parse_posts(self, response):
    global maxPostIDByThread, executeFullSpider
    thread_id = response.meta['thread_id']
    thread_name = response.meta['thread_name']
    for post in response.xpath('//*[@id="posts"]/li'):
        post_id = post.xpath('@id').re(r'(\d.*)')[0]
        if not executeFullSpider and post_id in maxPostIDByThread:
            break  # <- I need this break to also cancel the for loop in parse_posts_pages
        ...

There is an if condition in the second function. When it evaluates to true, I need to break out of both the current for loop and the for loop in parse_posts_pages, since there is no need to keep paginating.

Is there any way to stop the for loop in the first function from the second function?
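As a side note, the modulo-and-increment page count in parse_posts_pages is just a ceiling division; reusing total_posts and posts_per_page from the code above, a more compact equivalent (a sketch, not part of the original question) would be:

import math

# Same result as the post_mod/pages logic above: round the page count up.
pages = math.ceil(total_posts / posts_per_page) if posts_per_page > 0 else 1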


2 answers

Just raise CloseSpider, as described in the FAQ:

How can I instruct a spider to stop itself?

Raise the CloseSpider exception from a callback.

from scrapy.exceptions import CloseSpider

def parse_page(self, response):
    if b'Bandwidth exceeded' in response.body:  # response.body is bytes in Python 3
        raise CloseSpider('bandwidth_exceeded')

http://doc.scrapy.org/en/latest/faq.html#how-can-i-instruct-a-spider-to-stop-itself
http://doc.scrapy.org/en/latest/topics/exceptions.html#scrapy.exceptions.CloseSpider

Note that requests that are still in progress (HTTP request sent, response not yet received) will still be parsed. No new request will be processed though.

https://stackoverflow.com/a/23895143/5041915

Update: I actually found something interesting when stopping the spider in the outer function: a new, still-valid request may not get a chance to start, because the exception is raised first.

I suggest checking the condition in the callback and raising the exception as early as possible.
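Applied to the question's code, that means raising CloseSpider inside parse_posts as soon as the stop condition is met (a minimal sketch based on the code above; the reason string is arbitrary):

from scrapy.exceptions import CloseSpider

def parse_posts(self, response):
    global maxPostIDByThread, executeFullSpider
    for post in response.xpath('//*[@id="posts"]/li'):
        post_id = post.xpath('@id').re(r'(\d.*)')[0]
        if not executeFullSpider and post_id in maxPostIDByThread:
            # Shuts the whole spider down, so the pagination loop in
            # parse_posts_pages stops producing requests as well.
            raise CloseSpider('reached_known_post')
        ...

Note that CloseSpider stops the entire spider, not just this thread's pagination; if other threads should keep crawling, a per-spider flag (as in the next answer) is the better fit.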

Declare a parse_status flag with a default value of False. When the second function meets the required condition, set parse_status to True and break the loop in the first function:

def parse_posts_pages(self, response):
    thread_id = response.meta['thread_id']
    thread_link = response.meta['thread_link']
    thread_name = response.meta['thread_name']
    pages = 1  # fallback when the page-stats element is missing
    if len(response.xpath('//*[@id="postpagestats_above"]/text()').re(r'(\d+)')) == 3:
        posts_per_page = int(response.xpath('//*[@id="postpagestats_above"]/text()').re(r'(\d+)')[1])
        total_posts = int(response.xpath('//*[@id="postpagestats_above"]/text()').re(r'(\d+)')[2])
        if posts_per_page > 0:
            post_mod = total_posts % posts_per_page
            pages = total_posts // posts_per_page
            if post_mod > 0:
                pages += 1
        else:
            pages = 1

    for page in range(pages, 0, -1):
        if self.parse_status:  # set by parse_posts; stop paginating
            break
        cur_page = '' if page == 1 else '/page' + str(page)
        post_page_link = thread_link + cur_page
        yield scrapy.Request(post_page_link, self.parse_posts,
                             meta={'thread_id': thread_id, 'thread_name': thread_name})


def parse_posts(self, response):
    global maxPostIDByThread, executeFullSpider
    thread_id = response.meta['thread_id']
    thread_name = response.meta['thread_name']
    for post in response.xpath('//*[@id="posts"]/li'):
        post_id = post.xpath('@id').re(r'(\d.*)')[0]
        if not executeFullSpider and post_id in maxPostIDByThread:
            self.parse_status = True  # tell parse_posts_pages to stop
            break  # <- this break also cancels the loop in parse_posts_pages
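For this to work, the flag has to exist before the first callback runs; one place to put it is a class-level default on the spider (a minimal sketch; the spider class name is hypothetical):

import scrapy

class ForumSpider(scrapy.Spider):  # hypothetical name, for illustration only
    name = 'forum'
    parse_status = False  # flipped to True in parse_posts

Keep in mind that Scrapy is asynchronous: pagination requests that were already scheduled before the flag flips will still be downloaded and parsed; the flag only prevents new ones from being yielded.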
