Spider not crawling pages / not writing output

2 votes
1 answer
1518 views
Asked 2025-04-18 11:40

I am using the following code to scrape data with Scrapy:

from scrapy.selector import Selector
from scrapy.spider import Spider


class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        sel = Selector(response)
        for li in sel.xpath('//ul/li'):
            title = li.xpath('a/text()').extract()
            link = li.xpath('a/@href').extract()
            desc = li.xpath('text()').extract()
            print title, link, desc

However, when I run this spider, I get the following output:

2014-06-30 23:39:00-0500 [scrapy] INFO: Scrapy 0.24.1 started (bot: tutorial)
2014-06-30 23:39:00-0500 [scrapy] INFO: Optional features available: ssl, http11
2014-06-30 23:39:00-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['tutorial.spiders'], 'FEED_URI': 'willthiswork.csv', 'BOT_NAME': 'tutorial'}
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled item pipelines: 
2014-06-30 23:39:01-0500 [example] INFO: Spider opened
2014-06-30 23:39:01-0500 [example] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-06-30 23:39:01-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-06-30 23:39:01-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-06-30 23:39:01-0500 [example] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)

Notably, there is a line that reads "Crawled 0 pages (at 0 pages/min) ...", along with some overridden settings.

Also, the file I intended to write the data to is completely empty.

Am I doing something wrong that is preventing the data from being written?

1 Answer

1

My guess is that you want to run the command scrapy crawl example -o myfile.json (note that crawl takes the spider name, "example", not the project name "tutorial").

For that command to produce output, you need to use Scrapy items.

In items.py, add the following:

from scrapy.item import Item, Field


class MozItem(Item):
    title = Field()
    link = Field()
    desc = Field()
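
(As an aside, just to illustrate how Items behave: a declared Item works like a dict, but only its declared fields can be assigned. A quick self-contained check:)

from scrapy.item import Item, Field

class MozItem(Item):
    title = Field()

item = MozItem()
item['title'] = u'Example Book'   # declared field: OK
print dict(item)                  # {'title': u'Example Book'}
# item['author'] = u'X'           # undeclared field: raises KeyError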

Then adjust the parse callback:

    def parse(self, response):
        sel = Selector(response)
        for li in sel.xpath('//ul/li'):
            item = MozItem()  # create a fresh item for each <li>
            item['title'] = li.xpath('a/text()').extract()
            item['link'] = li.xpath('a/@href').extract()
            item['desc'] = li.xpath('text()').extract()
            yield item
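
For completeness: the "Overridden settings" line in your log shows that feed export is already configured through the project settings, so the -o option is not strictly required. A settings.py along these lines (a sketch reconstructed from that log line) will write the yielded items to willthiswork.csv:

# settings.py -- values as shown in the "Overridden settings" log line
BOT_NAME = 'tutorial'
SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'

# Feed export: once parse() yields items, they are written to this file
FEED_FORMAT = 'csv'
FEED_URI = 'willthiswork.csv'

Once the spider actually yields items, the "scraped 0 items" counter in the log should start increasing and the CSV will no longer be empty.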
