How to run a spider multiple times in Scrapy by changing part of the URL in "def start_requests(self)"


I have a question about the logic of this spider. One of the categories of the Castbox website, which I want to crawl, has infinite pagination. My idea was to split the URL of the JSON file, slice it, and then rejoin it so I could parse it, using a while loop as the condition for the spider to keep crawling the elements it needs.

Let me explain more clearly.

When I inspected the JSON URL of the Castbox website, I found that only one part of the URL changes each time the page reloads as you scroll down. That part is called "skip", and it varies from 0 to 200, as you will see in the URL. So I thought that if I could write a "def start_requests(self)" in which the "skip" part of the URL changes from 0 to 200, I would get what I want. Is it possible to change the URL like this on every request? If so, what is wrong with the "def start_requests(self)" part of my spider?

By the way, when I run it I get the following error: ModuleNotFoundError: No module named 'urlparse'
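
(Note: urlparse is a Python 2 standard-library module; in Python 3 its functions moved to urllib.parse, which is what that import error is about. A minimal sketch of rewriting the "skip" query parameter with the Python 3 API — set_skip is a hypothetical helper name, not something from the spider below:)

from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

def set_skip(url, skip):
    # Parse the query string, overwrite 'skip', and rebuild the URL.
    parts = urlsplit(url)
    query = parse_qs(parts.query, keep_blank_values=True)
    query['skip'] = [str(skip)]
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))

# set_skip('https://example.com/data?skip=0&limit=60', 120)
# -> 'https://example.com/data?skip=120&limit=60'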

Here is my spider:

# -*- coding: utf-8 -*-
import scrapy
import json

class ArtsPodcastsSpider(scrapy.Spider):
    name = 'arts_podcasts'
    allowed_domains = ['www.castbox.fm']
    

    def start_requests(self):
        
        try:
            if response.request.meta['skip']:
                skip=response.request.meta['skip']
            else:
                skip=0
                
            while skip < 201:
                url = 'https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=0&limit=60&web=1&m=20201112&n=609584ea96edb64605bca96212128aa5&r=1'
                split_url = urlparse.urlsplit(url)
                path = split_url.path
                path.split('&')
                path.split('&')[:-5]
                '&'.join(path.split('&')[:-5])
                parsed_query = urlparse.parse_qs(split_url.query)
                query = urlparse.parse_qs(split_url.query, keep_blank_values=True)
                query['skip'] = skip
                updated = split_url._replace(path='&'.join(base_path.split('&')[:-5]+['limit=60&web=1&m=20201112&n=609584ea96edb64605bca96212128aa5&r=1', '']),
                    query=urllib.urlencode(query, doseq=True))
                updated_url=urlparse.urlunsplit(updated)
                
                
                yield scrapy.Request(url= updated_url, callback= self.parse_id, meta={'skip':skip})
    
                def parse_id(self, response):

                    skip=response.request.meta['skip']
                    data=json.loads(response.body)
                    category=data.get('data').get('category').get('name')
                    arts_podcasts=data.get('data').get('list')
                    for arts_podcast in arts_podcasts:
                        yield scrapy.Request(url='https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip={0}&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1'.format(arts_podcast.get('list')[2].get('cid')), meta={'category':category,'skip':skip}, callback= self.parse)


                def parse(self, response):

                    skip=response.request.meta['skip']
                    category=response.request.meta['category']
                    arts_podcast=json.loads(response.body).get('data')
                    yield scrapy.Request(callback=self.start_requests,meta={'skip':skip+1})
                    yield{

                        'title':arts_podcast.get('title'),
                        'category':arts_podcast.get('category'),
                        'sub_category':arts_podcast.get('categories'),
                        'subscribers':arts_podcast.get('sub_count'),
                        'plays':arts_podcast.get('play_count'),
                        'comments':arts_podcast.get('comment_count'),
                        'episodes':arts_podcast.get('episode_count'),
                        'website':arts_podcast.get('website'),
                        'author':arts_podcast.get('author'),
                        'description':arts_podcast.get('description'),
                        'language':arts_podcast.get('language')
                        }

Thanks, everyone!

--- EDIT ---

Here is part of the log I got after running the spider, @Patrick Klein:

2020-11-14 15:51:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=0&limit=60&web=1&m=20201112&n=609584ea96edb64605bca96212128aa5&r=1> (referer: None)
2020-11-14 15:51:03 [scrapy.core.scraper] ERROR: Spider error processing <GET https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=0&limit=60&web=1&m=20201112&n=609584ea96edb64605bca96212128aa5&r=1> (referer: None)
Traceback (most recent call last):
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\shima\projects\castbox_arts_podcasts\castbox_arts_podcasts\spiders\arts_podcasts.py", line 27, in parse_id
    url = f'https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip={arts_podcast.get("list")[2].get("cid")}&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1'
TypeError: 'NoneType' object is not subscriptable
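
(For what it's worth, this TypeError means arts_podcast.get("list") returned None and was then indexed with [2]. In the JSON excerpt further down, "cid" is a top-level key of each element of data["list"], so a sketch of the access under that assumption, inside a Scrapy callback, would be:)

import json

data = json.loads(response.body)
# each element of data['list'] carries 'cid' at the top level
for arts_podcast in data.get('data', {}).get('list', []):
    cid = arts_podcast.get('cid')  # e.g. 2698788 for "Fresh Air"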

--- EDIT 2 ---

2020-11-15 13:14:42 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=2583691&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1> (referer: https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=8&limit=60&web=1&m=20201112&n=609584ea96edb64605bca96212128aa5&r=1)
2020-11-15 13:14:42 [scrapy.core.scraper] DEBUG: Scraped from <200 https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=2946683&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1>
{'sub_category': None, 'title': None, 'subscribers': None, 'plays': None, 'comments': None, 'episodes': None, 'downloads': None, 'website': None, 'author': None, 'description': None, 'language': None}
2020-11-15 13:14:47 [scrapy.crawler] INFO: Received SIGINT twice, forcing unclean shutdown
2020-11-15 13:14:47 [scrapy.core.downloader.handlers.http11] WARNING: Got data loss in https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=12&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1. If you want to process broken responses set the setting DOWNLOAD_FAIL_ON_DATALOSS = False -- This message won't be shown in further requests
2020-11-15 13:14:47 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip=12&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1> (failed 1 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>, <twisted.python.failure.Failure twisted.web.http._DataLoss: Chunked decoder in 'CHUNK_LENGTH' state, still expecting more data to get to 'FINISHED' state.>]

Part of the JSON object for one of the items that needs to be scraped:

{
    "msg": "OK",
    "code": 0,
    "data": {
        "category": {
            "sub_categories": [
                {
                    "image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "id": "10022",
                    "night_image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "name": "Books"
                },
                {
                    "image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "id": "10023",
                    "night_image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "name": "Design"
                },
                {
                    "image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "id": "10024",
                    "night_image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "name": "Fashion & Beauty"
                },
                {
                    "image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "id": "10025",
                    "night_image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "name": "Food"
                },
                {
                    "image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "id": "10026",
                    "night_image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "name": "Performing Arts"
                },
                {
                    "image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "id": "10027",
                    "night_image_url": "https://castbox.fm/static/everest/category/v3/grey/default.png",
                    "name": "Visual Arts"
                }
            ],
            "id": "10021",
            "name": "Arts"
        },
        "list": [
            {
                "provider_id": 125443881,
                "episode_count": 256,
                "x_play_base": 0,
                "stat_cover_ext_color": false,
                "keywords": [
                    "Arts",
                    "Literature",
                    "TV & Film",
                    "Society & Culture",
                    "freshair",
                    "npr",
                    "terrygross",
                    "news",
                    "facts",
                    "interesting",
                    "worldwide",
                    "international",
                    "best",
                    "awardwinning",
                    "jay z"
                ],
                "cover_ext_color": "-8610134",
                "mongo_id": "5e74365585a4e5dcff18d769",
                "show_id": "56a0a3399eb9a8dd9758c9c2",
                "copyright": "Copyright 2015-2019 NPR - For Personal Use Only",
                "author": "NPR",
                "is_key_channel": true,
                "audiobook_categories": [],
                "comment_count": 29,
                "website": "http://www.npr.org/programs/fresh-air/",
                "rss_url": "https://feeds.npr.org/381444908/podcast.xml",
                "description": "Fresh Air from WHYY, the Peabody Award-winning weekday magazine of contemporary arts and issues, is one of public radio's most popular programs. Hosted by Terry Gross, the show features intimate conversations with today's biggest luminaries.",
                "tags": [
                    "from-itunes"
                ],
                "editable": true,
                "play_count": 8890966,
                "link": "http://www.npr.org/programs/fresh-air/",
                "twitter_names": [
                    "nprfreshair"
                ],
                "categories": [
                    10021,
                    10022,
                    10125,
                    10001,
                    10101,
                    10014,
                    10015
                ],
                "x_subs_base": 25254,
                "small_cover_url": "https://is5-ssl.mzstatic.com/image/thumb/Podcasts113/v4/76/32/0c/76320cb7-7805-5ffc-6d48-18b311dd9be8/mza_18321298089187816075.jpg/200x200bb.jpg",
                "big_cover_url": "https://is5-ssl.mzstatic.com/image/thumb/Podcasts113/v4/76/32/0c/76320cb7-7805-5ffc-6d48-18b311dd9be8/mza_18321298089187816075.jpg/600x600bb.jpg",
                "language": "en",
                "cid": 2698788,
                "latest_eid": 326888897,
                "topic_tags": [
                    "FreshAir",
                    "NPR"
                ],
                "release_date": "2020-11-14T05:01:15Z",
                "title": "Fresh Air",
                "uri": "/ch/2698788",
                "https_cover_url": "https://is5-ssl.mzstatic.com/image/thumb/Podcasts113/v4/76/32/0c/76320cb7-7805-5ffc-6d48-18b311dd9be8/mza_18321298089187816075.jpg/400x400bb.jpg",
                "channel_type": "private",
                "channel_id": "47b5be27cc1ca68aa80f8f7bbccedb47a40992d3",
                "sub_count": 361101,
                "internal_product_id": "cb.ch.2698788",
                "social": {
                    "website": "http://www.npr.org/programs/fresh-air/",
                    "youtube": [
                        {
                            "name": "channel/UCwly5-E5e0EUY-SsnttN4Sg"
                        }
                    ],
                    "twitter": [
                        {
                            "name": "nprfreshair"
                        }
                    ],
                    "facebook": [
                        {
                            "name": "freshairwithterrygross"
                        }
                    ],
                    "instagram": [
                        {
                            "name": "nprfreshair"
                        }
                    ]
                }
            }

1 Answer

I noticed that you are passing category and skip to your parse functions via meta, but never really using them in the spider. There are also quite a few unused and probably unnecessary imports. In addition, the URL used in parse_id is almost identical to the one used in the start_requests method.
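
(As an aside, if you do need to carry a value such as skip into a callback, Scrapy 1.7+ offers cb_kwargs as a more explicit alternative to meta. A minimal sketch, with the URL trimmed of its m/n token parameters and a hypothetical step of 60 to match limit=60:)

import scrapy

class SkipDemoSpider(scrapy.Spider):
    # Hypothetical minimal spider; the step of 60 assumes the API
    # pages by its limit=60 value.
    name = 'skip_demo'

    def start_requests(self):
        for skip in range(0, 201, 60):
            url = ('https://everest.castbox.fm/data/top_channels/v2'
                   f'?category_id=10021&country=us&skip={skip}&limit=60')
            # cb_kwargs (Scrapy >= 1.7) passes values to the callback explicitly
            yield scrapy.Request(url, callback=self.parse_page, cb_kwargs={'skip': skip})

    def parse_page(self, response, skip):
        # 'skip' arrives as a keyword argument instead of via response.meta
        self.logger.info('fetched page with skip=%d', skip)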

I have rewritten your spider into something that I believe is close to what you are trying to achieve:

import scrapy
import json

class ArtsPodcastsSpider(scrapy.Spider):
    name = 'arts_podcasts'

    def start_requests(self):
        for skip in range(201):
            url = f'https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip={skip}&limit=60&web=1&m=20201112&n=609584ea96edb64605bca96212128aa5&r=1'
            yield scrapy.Request(
                url=url, 
                callback=self.parse_id, 
            )

    def parse_id(self, response):
        data = json.loads(response.body)
        arts_podcasts = data.get('data').get('list')
        for arts_podcast in arts_podcasts:
            url = f'https://everest.castbox.fm/data/top_channels/v2?category_id=10021&country=us&skip={arts_podcast["cid"]}&limit=60&web=1&m=20201111&n=609ba0097bb48d4b0778a927bdcf69f4&r=1'
            yield scrapy.Request(
                url=url, 
                callback=self.parse
            )

    def parse(self, response):
        arts_podcasts = json.loads(response.body).get('data')
        for arts_podcast in arts_podcasts['list']:
            yield {
                'title': arts_podcast.get('title'),
                'category': arts_podcast.get('category'),
                'sub_category': arts_podcast.get('categories'),
                'subscribers': arts_podcast.get('sub_count'),
                'plays': arts_podcast.get('play_count'),
                'comments': arts_podcast.get('comment_count'),
                'episodes': arts_podcast.get('episode_count'),
                'website': arts_podcast.get('website'),
                'author': arts_podcast.get('author'),
                'description': arts_podcast.get('description'),
                'language': arts_podcast.get('language')
            }
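
(A single-file spider like this can be run standalone; assuming it is saved as arts_podcasts.py, the -o flag writes the scraped items to a feed file:)

scrapy runspider arts_podcasts.py -o arts_podcasts.json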
