python - Scrapy not following links

1 vote
2 answers
3494 views
Asked 2025-04-17 12:02

I am using Scrapy to parse a website. The links I need to parse look like this: http://example.com/productID/1234/. Those links can be found on category pages whose addresses look like this: http://example.com/categoryID/1234/. The problem is that my spider only picks up the first categoryID page (e.g. http://www.example.com/categoryID/79/, as you can see in the log below) and does not crawl any of the other links. What am I doing wrong? Thanks.

Here is my Scrapy code:

# -*- coding: UTF-8 -*-

#THIRD-PARTY MODULES
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class ExampleComSpider(CrawlSpider):
    name = "example.com"
    allowed_domains = ["http://www.example.com/"]

    start_urls = [
        "http://www.example.com/"
    ]

    rules = (
        # Extract links matching 'categoryID/xxx'
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=('/categoryID/(\d*)/', ), )),

        # Extract links matching 'productID/xxx' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('/productID/(\d*)/', )), callback='parse_item'),
    )

    def parse_item(self, response):

        self.log('Hi, this is an item page! %s' % response.url)

And here is the Scrapy log:

2012-01-31 12:38:56+0000 [scrapy] INFO: Scrapy 0.14.1 started (bot: parsers)
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled item pipelines:
2012-01-31 12:38:57+0000 [example.com] INFO: Spider opened
2012-01-31 12:38:57+0000 [example.com] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-01-31 12:38:58+0000 [example.com] DEBUG: Crawled (200) <GET http://www.example.com/> (referer: None)
2012-01-31 12:38:58+0000 [example.com] DEBUG: Filtered offsite request to 'www.example.com': <GET http://www.example.com/categoryID/79/>
2012-01-31 12:38:58+0000 [example.com] INFO: Closing spider (finished)
2012-01-31 12:38:58+0000 [example.com] INFO: Dumping spider stats:
  {'downloader/request_bytes': 199,
   'downloader/request_count': 1,
   'downloader/request_method_count/GET': 1,
   'downloader/response_bytes': 121288,
   'downloader/response_count': 1,
   'downloader/response_status_count/200': 1,
   'finish_reason': 'finished',
   'finish_time': datetime.datetime(2012, 1, 31, 12, 38, 58, 409806),
   'request_depth_max': 1,
   'scheduler/memory_enqueued': 1,
   'start_time': datetime.datetime(2012, 1, 31, 12, 38, 57, 127805)}
2012-01-31 12:38:58+0000 [example.com] INFO: Spider closed (finished)
2012-01-31 12:38:58+0000 [scrapy] INFO: Dumping global stats:
  {'memusage/max': 26992640, 'memusage/startup': 26992640}

2 Answers

1

Replace this:

allowed_domains = ["http://www.example.com/"]

with this:

allowed_domains = ["example.com"]

That should fix the problem. allowed_domains expects bare domain names, not URLs, so with a URL in the list OffsiteMiddleware treats every request as offsite and drops it; that is the "Filtered offsite request" line in your log.
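
For reference, here is a minimal sketch of the spider with only that line changed (same Scrapy 0.14-era imports as in the question; example.com is the question's placeholder domain):

# -*- coding: UTF-8 -*-

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class ExampleComSpider(CrawlSpider):
    name = "example.com"
    # Bare domain only: no scheme, no trailing slash.
    allowed_domains = ["example.com"]

    start_urls = [
        "http://www.example.com/"
    ]

    rules = (
        # Follow category pages (no callback, so follow defaults to True).
        Rule(SgmlLinkExtractor(allow=(r'/categoryID/(\d*)/', ))),

        # Parse product pages with parse_item.
        Rule(SgmlLinkExtractor(allow=(r'/productID/(\d*)/', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

Note that OffsiteMiddleware also accepts subdomains of anything in allowed_domains, so "example.com" covers "www.example.com" as well.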

4

It is probably the difference between "www.example.com" and "example.com". If you find it helps, you can allow both at once:

allowed_domains = ["www.example.com", "example.com"]
