Please tell me what is wrong with my Scrapy startup code

I am trying to scrape content (the recent-posts list) from the Samsung Mexico newsroom, but it does not work. Can you tell me why?

https://news.samsung.com/mx

I suspect the content is loaded by JavaScript, but I cannot figure out how to read it.

Versions: Scrapy 2.1.0, Splash 3.4.1

Spider code:

import scrapy
from scrapy_splash import SplashRequest
from scrapy import Request


class CrawlspiderSpider(scrapy.Spider):
    name = 'crawlspider'
    allowed_domains = ['news.samsung.com/mx']
    page = 1
    start_urls = ['https://news.samsung.com/mx']

    def start_request(self):
        for url in self.start_urls:
            yield SplashRequest(
                         url,
                         self.main_parse,
                         endpoint='render.html',
                         args = {'wait': 10}
                     )

    def parse(self, response):
        lists = response.css('#recent_list_box > li').getAll()
        for list in lists:
            yield {"list" :lists.get() }

We have already included the relevant middlewares. Settings code:

BOT_NAME = 'spider'
SPIDER_MODULES = ['spider.spiders']
NEWSPIDER_MODULE = 'spider.spiders'
LOG_FILE = 'log.txt'
AJAXCRAWL_ENABLED = True
ROBOTSTXT_OBEY = False
SPLASH_URL = 'http://127.0.0.1'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPLASH_LOG_400 = True
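
One detail worth checking in these settings: SPLASH_URL carries no port. Splash listens on port 8050 by default (that is what the scrapinghub/splash Docker image exposes), so with a stock local install the setting would more plausibly be:

SPLASH_URL = 'http://127.0.0.1:8050'  # default Splash port; adjust to your deployment

As written, scrapy-splash would try to reach Splash on port 80.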

Below is what remains in the log file. I would appreciate it if you could tell me why the log below is produced and why I cannot read the data I need.

2020-07-02 15:27:09 [scrapy.core.engine] INFO: Spider opened
2020-07-02 15:27:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-02 15:27:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-07-02 15:27:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://news.samsung.com/mx/> from <GET https://news.samsung.com/mx>
2020-07-02 15:27:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.samsung.com/mx/> (referer: None)
2020-07-02 15:27:09 [scrapy.core.scraper] ERROR: Spider error processing <GET https://news.samsung.com/mx/> (referer: None)
Traceback (most recent call last):
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\defer.py", line 117, in iter_errback
    yield next(it)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
    for el in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 338, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\scrapy_tutorial\spider\spider\spiders\crawlspider.py", line 22, in parse
    lists = response.css('#recent_list_box > li').getAll()
AttributeError: 'SelectorList' object has no attribute 'getAll'
2020-07-02 15:27:09 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-02 15:27:09 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
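
The traceback pins down the immediate crash: SelectorList exposes getall() (lowercase), not getAll(), so the failing line in parse should read:

lists = response.css('#recent_list_box > li').getall()  # returns a list of HTML strings

Note also that the log shows a plain <GET https://news.samsung.com/mx/> with a 301 redirect rather than a Splash render, which is consistent with the start_request/start_requests naming issue above: the SplashRequest was never sent, so the JavaScript-rendered list never reached the spider.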
