Scrapy spider runs and closes but collects no data (3 DEBUG messages, 1 ERROR)


I'm running a Scrapy and Pillow project in Python, and no matter how many times I try, I run into the same error.

My items.py is as follows:

import scrapy

class Refrigerator(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    price = scrapy.Field()
    model = scrapy.Field()
    sku = scrapy.Field()
    file_urls = scrapy.Field()
    files = scrapy.Field()

My settings.py is as follows (the relevant lines, matching the "Overridden settings" and pipeline entries in the log below; the pipeline priority is assumed):

BOT_NAME = 'refrigeratorspider'

SPIDER_MODULES = ['refrigeratorspider.spiders']
NEWSPIDER_MODULE = 'refrigeratorspider.spiders'

ROBOTSTXT_OBEY = True

ITEM_PIPELINES = {
    'refrigeratorspider.pipelines.RefrigeratorspiderPipeline': 300,  # priority assumed
}

And my refrigeratorspider.py is as follows:

# import the necessary packages
from refrigeratorspider.items import Refrigerator
import scrapy

class refrigeratorspider(scrapy.Spider):
    name = "pyimagesearch-refrigerator-spider"
    start_urls = ["https://www.bestbuy.com/site/refrigerators/french-door-refrigerators/abcat0901004.c?id=abcat0901004"]

    def parse(self, response):
        # let's only gather Time U.S. magazine covers
        url = response.css("div.refineCol ul li").xpath("a[contains(., 'item')]")
        yield scrapy.Request(url.xpath("@href").extract_first(), self.parse_page)

    def parse_page(self, response):
        # loop over all cover link elements that link off to the large
        # cover of the magazine and yield a request to grab the cover
        # data and image
        for href in response.xpath("//a[contains(., 'thumb')]"):
            yield scrapy.Request(href.xpath("@href").extract_first(),
                self.parse_covers)

        # extract the 'Next' link from the pagination, load it, and
        # parse it
        next = response.css("div.pages").xpath("a[contains(., 'Next')]")
        yield scrapy.Request(next.xpath("@href").extract_first(), self.parse_page)

    def parse_covers(self, response):
        # grab the URL of the cover image
        img = response.css(".center-block").xpath("@src")
        imageURL = img.extract_first()

        # grab the title and publication date of the current issue
        name = response.css(".sku-title").extract_first()
        price = response.css(".priceView-hero-price priceView-purchase-price").extract_first()
        model = response.css("sku-value").extract_first()
        sku = response.css("sku-id").extract_first()[:-2]

        # yield the result
        yield Refrigerator(name=name, price=price, model=model, sku=sku, file_urls=[imageURL])

After setting the current directory to the project directory, I run the spider in the terminal with:

scrapy crawl pyimagesearch-refrigerator-spider -o output.json

Here is what I get back in the terminal:

2018-05-19 17:52:56 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: refrigeratorspider)
2018-05-19 17:52:56 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.7, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.5.0, Python 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 12:04:33) - [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 17.5.0 (OpenSSL 1.0.2o  27 Mar 2018), cryptography 2.1.4, Platform Darwin-16.7.0-x86_64-i386-64bit
2018-05-19 17:52:56 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'refrigeratorspider', 'FEED_FORMAT': 'json', 'FEED_URI': 'output.json', 'NEWSPIDER_MODULE': 'refrigeratorspider.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['refrigeratorspider.spiders']}
2018-05-19 17:52:56 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2018-05-19 17:52:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-05-19 17:52:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-05-19 17:52:56 [scrapy.middleware] INFO: Enabled item pipelines:
['refrigeratorspider.pipelines.RefrigeratorspiderPipeline']
2018-05-19 17:52:56 [scrapy.core.engine] INFO: Spider opened
2018-05-19 17:52:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-05-19 17:52:56 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-05-19 17:53:04 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.bestbuy.com/robots.txt> (referer: None)
2018-05-19 17:53:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.bestbuy.com/site/refrigerators/french-door-refrigerators/abcat0901004.c?id=abcat0901004> (referer: None)
2018-05-19 17:53:15 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.bestbuy.com/site/refrigerators/french-door-refrigerators/abcat0901004.c?id=abcat0901004> (referer: None)
Traceback (most recent call last):
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/spidermiddlewares/offsite.py", line 30, in process_spider_output
    for x in result:
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Users/Berkeley/refrigeratorspider/refrigeratorspider/spiders/refrigeratorspider.py", line 12, in parse
    yield scrapy.Request(url.xpath("@href").extract_first(), self.parse_page)
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/Users/Berkeley/anaconda3/lib/python3.6/site-packages/scrapy/http/request/__init__.py", line 56, in _set_url
    raise TypeError('Request url must be str or unicode, got %s:' % type(url).__name__)
TypeError: Request url must be str or unicode, got NoneType:
2018-05-19 17:53:15 [scrapy.core.engine] INFO: Closing spider (finished)
2018-05-19 17:53:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 662,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 70930,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 5, 19, 22, 53, 15, 470657),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'memusage/max': 51470336,
 'memusage/startup': 51470336,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2018, 5, 19, 22, 52, 56, 475275)}
2018-05-19 17:53:15 [scrapy.core.engine] INFO: Spider closed (finished)

Finally, everything is up to date: Python is on the latest 3.6.4, Scrapy is 1.5.x, and pip and Pillow are both installed and current.

It's not a syntax error either: the spider runs and finishes, but it scrapes 0 items. I can fetch the elements with scrapy's shell command, yet when I run the spider it doesn't work.
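
For example, this is the kind of shell session I mean (just a sketch; any of the selectors above can be tested this way):

scrapy shell "https://www.bestbuy.com/site/refrigerators/french-door-refrigerators/abcat0901004.c?id=abcat0901004"
>>> response.css("div.refineCol ul li")         # inspect what a selector matches
>>> response.css(".sku-title").extract_first()  # or pull a single value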

Any and all help is greatly appreciated! Thanks.


1 Answer

Your error:

TypeError: Request url must be str or unicode, got NoneType:

happens on this line:

yield scrapy.Request(url.xpath("@href").extract_first(), self.parse_page)

It looks like no URL was found: the selector matched nothing, so extract_first() returned None. Try another rule, or refine the selector you use to get the URL (on the page you are scraping, I cannot find any div.refineCol element).
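
You can confirm this quickly in scrapy shell (a session sketch; the empty result assumes the selector really matches nothing on that page, which is what I see):

scrapy shell "https://www.bestbuy.com/site/refrigerators/french-door-refrigerators/abcat0901004.c?id=abcat0901004"
>>> response.css("div.refineCol ul li")
[]
>>> response.css("div.refineCol ul li").xpath("a[contains(., 'item')]/@href").extract_first()
>>> # nothing is printed: the result is None, and scrapy.Request(None, ...) raises the TypeError above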

Make sure you have valid patterns that actually yield a valid link, for example:

from scrapy.exceptions import CloseSpider

def parse(self, response):
    # try each selector pattern in turn until one matches a link
    patterns = ["div.refineCol ul li"]  # add fallback selectors here

    url = None
    for pattern in patterns:
        url = response.css(pattern).xpath("a[contains(., 'item')]")
        if url:
            break

    if not url:
        # CloseSpider stops the crawl cleanly and records the reason
        raise CloseSpider('None of the patterns extracted a link')

    link = url.xpath("@href").extract_first()
    self.logger.info("Check %s", link)
    if not link:
        raise CloseSpider('Not a valid url')

    # urljoin resolves relative hrefs against the current page
    yield scrapy.Request(response.urljoin(link), self.parse_page)

def closed(self, reason):
    # Scrapy calls this hook when the spider closes; log or clean up here
    self.logger.info("Spider closed: %s", reason)
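
If you would rather skip a bad link than stop the whole crawl, a lighter-weight variant (my suggestion, not part of your original code) is to log a warning and move on:

link = url.xpath("@href").extract_first()
if link:
    yield scrapy.Request(response.urljoin(link), callback=self.parse_page)
else:
    self.logger.warning("No item link found on %s", response.url)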
