Scrapy - how to crawl new pages based on links in scraped items

7 votes
1 answer
6868 views
Asked 2025-04-18 07:40

I'm new to Scrapy, and I want to extract new pages from links found in the scraped content. Specifically, I want to scrape some Dropbox file-sharing links from Google search results and store those links in a JSON file. After getting the links, I want to open a new page for each one to verify whether the link is valid. If it is, I also want to store the filename in the JSON file.

I use a DropboxItem with the attributes 'link', 'filename', 'status', and 'err_msg' to store each scraped item. I tried issuing an asynchronous request for each scraped link inside the parse function, but the parse_file_page function never seems to get called. Does anyone know how to implement this kind of two-step crawling?

    class DropboxSpider(Spider):
        name = "dropbox"
        allowed_domains = ["google.com"]
        start_urls = [
            "https://www.google.com/#filter=0&q=site:www.dropbox.com/s/&start=0"
        ]

        def parse(self, response):
            sel = Selector(response)
            sites = sel.xpath("//h3[@class='r']")
            items = []
            for site in sites:
                item = DropboxItem()
                link = site.xpath('a/@href').extract()
                item['link'] = link
                link = ''.join(link)
                #I want to parse a new page with url=link here
                new_request = Request(link, callback=self.parse_file_page)
                new_request.meta['item'] = item
                items.append(item)
            return items

        def parse_file_page(self, response):
            #item passed from request
            item = response.meta['item']
            #selector
            sel = Selector(response)
            content_area = sel.xpath("//div[@id='shmodel-content-area']")
            filename_area = content_area.xpath("div[@class='filename shmodel-filename']")
            if filename_area:
                filename = filename_area.xpath("span[@id]/text()").extract()
                if filename:
                    item['filename'] = filename             
                    item['status'] = "normal"
            else:
                err_area = content_area.xpath("div[@class='err']")
                if err_area:
                    err_msg = err_area.xpath("h3/text()").extract()
                    item['err_msg'] = err_msg
                    item['status'] = "error"
            return item

Thanks to @ScrapyNovice for the answer. I modified my code, which now looks like this:

def parse(self, response):
    sel = Selector(response)
    sites = sel.xpath("//h3[@class='r']")
    #items = []
    for site in sites:
        item = DropboxItem()
        link = site.xpath('a/@href').extract()
        item['link'] = link
        link = ''.join(link)
        print 'link!!!!!!=', link
        new_request = Request(link, callback=self.parse_file_page)
        new_request.meta['item'] = item
        yield new_request
        #items.append(item)
    yield item
    return
    #return item   #Note, when I simply return item here, got an error msg "SyntaxError: 'return' with argument inside generator"

def parse_file_page(self, response):
    #item passed from request
    print 'parse_file_page!!!'
    item = response.meta['item']
    #selector
    sel = Selector(response)
    content_area = sel.xpath("//div[@id='shmodel-content-area']")
    filename_area = content_area.xpath("div[@class='filename shmodel-filename']")
    if filename_area:
        filename = filename_area.xpath("span[@id]/text()").extract()
        if filename:
            item['filename'] = filename
            item['status'] = "normal"
            item['err_msg'] = "none"
            print 'filename=', filename
    else:
        err_area = content_area.xpath("div[@class='err']")
        if err_area:
            err_msg = err_area.xpath("h3/text()").extract()
            item['filename'] = "null"
            item['err_msg'] = err_msg
            item['status'] = "error"
            print 'err_msg', err_msg
        else:
            item['filename'] = "null"
            item['err_msg'] = "unknown_err"
            item['status'] = "error"
            print 'unknown err'
    return item

The control flow actually gets a bit strange. When I use "scrapy crawl dropbox -o items_dropbox.json -t json" to crawl a local file (a downloaded Google search results page), I see output like this:

2014-05-31 08:40:35-0400 [scrapy] INFO: Scrapy 0.22.2 started (bot: tutorial)
2014-05-31 08:40:35-0400 [scrapy] INFO: Optional features available: ssl, http11
2014-05-31 08:40:35-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_FORMAT': 'json', 'SPIDER_MODULES': ['tutorial.spiders'], 'FEED_URI': 'items_dropbox.json', 'BOT_NAME': 'tutorial'}
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled item pipelines: 
2014-05-31 08:40:35-0400 [dropbox] INFO: Spider opened
2014-05-31 08:40:35-0400 [dropbox] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-31 08:40:35-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-31 08:40:35-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Crawled (200) <GET file:///home/xin/Downloads/dropbox_s/dropbox_s_1-Google.html> (referer: None)
link!!!!!!= http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0
link!!!!!!= https://www.dropbox.com/s/
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Filtered offsite request to 'www.dropbox.com': <GET https://www.dropbox.com/s/>
link!!!!!!= https://www.dropbox.com/s/awg9oeyychug66w
link!!!!!!= http://www.dropbox.com/s/kfmoyq9y4vrz8fm
link!!!!!!= https://www.dropbox.com/s/pvsp4uz6gejjhel
....  many links here
link!!!!!!= https://www.dropbox.com/s/gavgg48733m3918/MailCheck.xlsx
link!!!!!!= http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Scraped from <200 file:///home/xin/Downloads/dropbox_s/dropbox_s_1-Google.html>
    {'link': [u'http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk']}
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Crawled (200) <GET http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0> (referer: file:///home/xin/Downloads/dropbox_s/dropbox_s_1-Google.html)
parse_file_page!!!
unknown err
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Scraped from <200 http://www.google.com/intl/en/webmasters/>
    {'err_msg': 'unknown_err',
     'filename': 'null',
     'link': [u'http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0'],
     'status': 'error'}
2014-05-31 08:40:35-0400 [dropbox] INFO: Closing spider (finished)
2014-05-31 08:40:35-0400 [dropbox] INFO: Stored json feed (2 items) in: items_dropbox.json
2014-05-31 08:40:35-0400 [dropbox] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 558,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 449979,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 5, 31, 12, 40, 35, 348058),
     'item_scraped_count': 2,
     'log_count/DEBUG': 7,
     'log_count/INFO': 8,
     'request_depth_max': 1,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2014, 5, 31, 12, 40, 35, 249309)}
2014-05-31 08:40:35-0400 [dropbox] INFO: Spider closed (finished)

Now the JSON file contains only:

[{"link": ["http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk"]},
{"status": "error", "err_msg": "unknown_err", "link": ["http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0"], "filename": "null"}]

1 Answer

5

You're creating a Request and setting its callback, but then you never actually do anything with it.

        for site in sites:
            item = DropboxItem()
            link = site.xpath('a/@href').extract()
            item['link'] = link
            link = ''.join(link)
            #I want to parse a new page with url=link here
            new_request = Request(link, callback=self.parse_file_page)
            new_request.meta['item'] = item
            yield new_request
            # Don't do this here because you're adding your Item twice.
            #items.append(item)

From a design standpoint, you end up storing all of your scraped items in items at the end of parse(), but item pipelines generally expect to receive individual items, not arrays of them. Get rid of the items array and you'll be able to use the JSON Feed Export built into Scrapy to store the results in JSON format.

Update:

The reason you get an error message when you try to return an item is that using yield inside a function turns it into a generator. This lets you call the function repeatedly: each time execution reaches a yield, it returns the value you're yielding, but it remembers its state and the line it was executing. The next time you call the generator, it resumes from where it left off. If it has nothing left to yield, it raises a StopIteration exception. In Python 2, you can't use both yield and a return with a value in the same function.
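A minimal sketch of that generator behavior in plain Python (no Scrapy involved):

```python
def gen():
    yield 1
    yield 2
    # In Python 2, putting `return 3` here would raise
    # SyntaxError: 'return' with argument inside generator.
    # A bare `return` is allowed and simply ends the generator.

g = gen()
print(next(g))  # 1 -- execution pauses at the first yield
print(next(g))  # 2 -- resumes right after the previous yield
try:
    next(g)     # nothing left to yield
except StopIteration:
    print("generator exhausted")
```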

You don't want to return any items from your parse() method, because at that point they're still missing information such as the filename and status.

The requests you're making in parse() are to dropbox.com, right? They aren't going through because dropbox isn't in the spider's allowed_domains list. (Hence the log message: DEBUG: Filtered offsite request to 'www.dropbox.com': <GET https://www.dropbox.com/s/>)
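One likely fix, assuming the Dropbox pages are the ones you actually want to follow, is to add dropbox.com to the spider's allowed_domains (a sketch; the rest of the class stays unchanged):

```python
class DropboxSpider(Spider):
    name = "dropbox"
    # OffsiteMiddleware also allows subdomains of each listed domain,
    # so "dropbox.com" covers www.dropbox.com as well:
    allowed_domains = ["google.com", "dropbox.com"]
```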

The only request that actually works and doesn't get filtered is to http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0, which is a Google page, not a Dropbox one. You may want to use urlparse to check the domain of a link in your parse() method before making the request.
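For example (a sketch; is_dropbox_link is a hypothetical helper, not part of the original spider):

```python
try:
    from urlparse import urlparse        # Python 2, matching the Scrapy 0.22 setup here
except ImportError:
    from urllib.parse import urlparse    # Python 3

def is_dropbox_link(url):
    # Keep only links whose host is dropbox.com or a subdomain of it.
    netloc = urlparse(url).netloc
    return netloc == "dropbox.com" or netloc.endswith(".dropbox.com")

print(is_dropbox_link("https://www.dropbox.com/s/awg9oeyychug66w"))   # True
print(is_dropbox_link("http://www.google.com/intl/en/webmasters/"))  # False
```

In parse(), you would then only build a Request for links where this check returns True.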

As for your results: the first JSON object,

{"link": ["http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk"]}

is from where you call yield item in your parse() method. There's only one because that yield isn't inside any kind of loop, so when the generator resumes execution it runs the next line: return, which exits the generator. You'll notice this item is missing all of the fields that get filled in by your parse_file_page() method. This is why you don't want to return any items in your parse() method.
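A plain-Python sketch of that control flow, with strings standing in for the Request objects and the item (names are illustrative only):

```python
def parse_like():
    # Mirrors the shape of the asker's parse(): the loop yields one
    # "request" per link, then the stray `yield item` runs once.
    for link in ["link-a", "link-b"]:
        yield "request-for-" + link
    yield "bare-item"  # reached exactly once -- it's outside the loop
    return             # bare return just ends the generator

print(list(parse_like()))
# -> ['request-for-link-a', 'request-for-link-b', 'bare-item']
```

That single "bare-item" is the half-filled item you see in the JSON feed.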

Your second JSON object,

{
 "status": "error", 
 "err_msg": "unknown_err", 
 "link": ["http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0"], 
 "filename": "null"
}

is the result of trying to parse a Google page as if it were the Dropbox page you were expecting. You make multiple requests in your parse() method, and all but one of them point to dropbox.com. All of the Dropbox links get dropped because they're not in your allowed_domains, so the only response you get is for the one other link on the page that matches your xpath selector and is in your allowed_domains. (That's the Google Webmasters link.) This is also why you see parse_file_page!!! only once in your output.

I recommend reading up on generators, as they're a fundamental part of using Scrapy. The second Google result for "python generator tutorial" looks like a very good place to start.
