Scrapy not scraping https?



I'm new to this, so I may just be doing something wrong. It looks like Scrapy won't scrape any https site I give it, though.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector


class SeleniumSpider(CrawlSpider):
    name = "SeleniumSpider"
    start_urls = ["https://www.facebook.com"]

    # Only extract and follow links whose URL matches "\.html"
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'\.html', )), callback='parse_page', follow=True),
    )

    def __init__(self):
        CrawlSpider.__init__(self)

    def __del__(self):
        # Note: self.driver and self.verificationErrors are never set anywhere
        # in this class - leftovers from a Selenium-based spider
        self.driver.stop()
        print self.verificationErrors
        CrawlSpider.__del__(self)

    def parse_page(self, response):
        hxs = HtmlXPathSelector(response)
        hxs.select('//div').extract()

Output:

2014-05-30 11:22:01-0400 [scrapy] INFO: Scrapy 0.22.2 started (bot: scrapybot)
2014-05-30 11:22:01-0400 [scrapy] INFO: Optional features available: ssl, http11
2014-05-30 11:22:01-0400 [scrapy] INFO: Overridden settings: {'DEFAULT_ITEM_CLASS': 'dirbot.items.Website', 'NEWSPIDER_MODULE': 'dirbot.spiders', 'SPIDER_MODULES': ['dirbot.spiders']}
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-30 11:22:01-0400 [scrapy] INFO: Enabled item pipelines: FilterWordsPipeline
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Spider opened
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-30 11:22:01-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-30 11:22:01-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-30 11:22:01-0400 [SeleniumSpider] DEBUG: Crawled (200) <GET https://www.facebook.com> (referer: None)
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Closing spider (finished)
2014-05-30 11:22:01-0400 [SeleniumSpider] INFO: Dumping Scrapy stats:

Any suggestions? The crawler works just fine on http://www.amazon.com and other sites.


1 Answer

This has nothing to do with https. The problem is that there are simply no links containing .html on the page.

Here is how to test that:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector


class SeleniumSpider(CrawlSpider):
    name = "SeleniumSpider"
    start_urls = ["https://www.facebook.com"]

    def parse(self, response):
        # Look for any <a> element whose href contains "html"
        hxs = Selector(response)
        print hxs.xpath('//a[contains(@href, "html")]').extract()

It prints an empty list.
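You can also check this interactively without writing a spider at all. A quick sketch using the Scrapy shell (in Scrapy 0.22 the shell exposes a ready-made selector named sel; the XPath expressions are the same ones used above):

scrapy shell "https://www.facebook.com"

# inside the shell: list every link href, then only those containing "html"
sel.xpath('//a/@href').extract()
sel.xpath('//a[contains(@href, "html")]/@href').extract()

If the second expression returns an empty list, the rule with allow=('\.html', ) has nothing to match, and the crawl stops after the start URL.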

Instead of crawling Facebook's HTML pages, consider actually using the Facebook SDK for Python; it is more convenient and more robust. I'm pretty sure parsing Facebook pages with Scrapy would be no fun at all, since the pages are built with a lot of dynamic JavaScript logic, AJAX calls, and so on.
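For illustration, a minimal sketch using the facebook-sdk package (the access token below is a placeholder; you need to obtain a real Graph API token from Facebook's developer tools):

import facebook

# "YOUR_ACCESS_TOKEN" is a placeholder - get a real Graph API token first
graph = facebook.GraphAPI(access_token="YOUR_ACCESS_TOKEN")

# Fetch structured JSON data instead of scraping rendered HTML
me = graph.get_object("me")
friends = graph.get_connections("me", "friends")
print me["name"]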

UPD (a general rule that extracts all links):

rules = (
    # No allow pattern - SgmlLinkExtractor extracts every link it finds
    Rule(SgmlLinkExtractor(), callback='parse_page', follow=True),
)
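Put in context, a minimal complete spider using that rule could look like this (a sketch against the Scrapy 0.22 API from the log above; parse_page just prints what it finds):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector


class SeleniumSpider(CrawlSpider):
    name = "SeleniumSpider"
    start_urls = ["https://www.facebook.com"]

    # Follow every extracted link and call parse_page on each downloaded page
    rules = (
        Rule(SgmlLinkExtractor(), callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        hxs = Selector(response)
        print hxs.xpath('//div').extract()

Note that without an allowed_domains attribute the spider will follow links off to other domains as well, since OffsiteMiddleware only filters requests when allowed_domains is set.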
