Scrapy not crawling https?

2 votes
1 answer
10352 views
Asked 2025-04-18 08:03

I'm new to scrapy, so I may not be doing this right. But I've noticed that scrapy doesn't seem to crawl any https site I give it.

# Imports needed for this snippet (Scrapy 0.22-era module paths, matching the log below)
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class SeleniumSpider(CrawlSpider):
    name = "SeleniumSpider"
    start_urls = ["https://www.facebook.com"]

    rules = (
        Rule(SgmlLinkExtractor(allow=('\.html', )), callback='parse_page',follow=True),
    )

    def __init__(self):
        CrawlSpider.__init__(self)
        
    def __del__(self):
        self.driver.stop()
        print self.verificationErrors
        CrawlSpider.__del__(self)

    def parse_page(self, response):
        hxs = HtmlXPathSelector(response)
        hxs.select('//div').extract()

Output:

2014-05-30 11:22:01-0400 [scrapy] INFO: Scrapy 0.22.2 started (bot: scrapybot)
2014-05-30 11:22:01-0400 [scrapy] INFO: Optional features available: ssl, http11
2014-05-30 11:22:01-0400 [scrapy] INFO: Overridden settings: {'DEFAULT_ITEM_CLASS': 'dirbot.items.Website', 'NEWSPIDER_MODULE': 'dirbot.spiders', 'SPIDER_MODULES': ['dirbot.spiders']}    
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState  
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats  
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled item pipelines: FilterWordsPipeline
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Spider opened
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-30 11:22:01-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-30 11:22:01-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-30 11:22:01-0400 [SeleniumSpider] DEBUG: Crawled (200) <GET https://www.facebook.com> (referer: None)
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Closing spider (finished)
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Dumping Scrapy stats:

Any suggestions? The spider runs fine against http://www.amazon.com and other sites.

1 Answer

1

This has nothing to do with https. The problem is that there are simply no links containing .html on the page.

You can verify that like this:

# Imports assumed for this test snippet
from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector

class SeleniumSpider(CrawlSpider):
    name = "SeleniumSpider"
    start_urls = ["https://www.facebook.com"]

    def parse(self, response):
        hxs = Selector(response)
        print hxs.xpath('//a[contains(@href, "html")]').extract()

This prints an empty list.

Instead of crawling Facebook's HTML pages, use the Facebook SDK for Python or pyfacebook; it's more convenient and more reliable. I'm pretty sure parsing Facebook pages with scrapy would be no fun at all, since the pages rely heavily on dynamic JavaScript logic, AJAX calls, and so on.
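For reference, here is a minimal sketch of what the Graph API route could look like with the facebook-sdk package (pip install facebook-sdk); the access token and page id below are placeholders, not real values:

import facebook

# Hypothetical placeholders -- substitute your own access token and page id
ACCESS_TOKEN = "your-access-token"
PAGE_ID = "some-page-id"

graph = facebook.GraphAPI(access_token=ACCESS_TOKEN)
page = graph.get_object(PAGE_ID)                  # basic page info as a dict
posts = graph.get_connections(PAGE_ID, "posts")   # the page's posts
print page["name"]
print len(posts["data"])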

UPDATE (a general rule that extracts all links):

rules = (
    Rule(SgmlLinkExtractor(), callback='parse_page', follow=True),
)
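Note that SgmlLinkExtractor has since been deprecated and removed in newer Scrapy releases; on a recent version, a roughly equivalent rule uses the generic LinkExtractor instead:

from scrapy.linkextractors import LinkExtractor

rules = (
    Rule(LinkExtractor(), callback='parse_page', follow=True),
)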
