Scraping a link with Scrapy

Posted 2024-04-27 15:54:08


I'm scraping the Dior website for its products. The script in the page head provides every field except the product description. To scrape the description, I need to follow a link (the url variable in the code below). The only way I'm familiar with for that is BeautifulSoup. Can I parse it using Scrapy alone? Thanks, guys.

import re

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DiorSpider(CrawlSpider):
    name = 'dior'
    allowed_domains = ['www.dior.com']
    start_urls = ['https://www.dior.com/en_us/']
    rules = (
        # Trailing comma matters: rules must be an iterable of Rule objects.
        Rule(LinkExtractor(allow=(r'^https?://www.dior.com/en_us/men/clothing/new-arrivals.*',)), callback='parse_file'),
    )

    def parse_file(self, response):
        # Grab the <script> element that embeds the product data.
        script_text = response.xpath("//script[contains(., 'window.initialState')]").extract_first()
        # extract_blocks is the author's own helper (not shown here).
        blocks = extract_blocks(script_text)
        for block in blocks:
            sku = re.compile(r'("sku":)"[a-zA-Z0-9_]*"').finditer(block)
            url = re.compile(r'("productLink":{"uri":)"[^"]*').finditer(block)
            for item in zip(sku, url):
                scraped_info = {
                    'sku': item[0].group(0).split(':')[1].replace('"', ''),
                    'url': 'https://www.dior.com' + item[1].group(0).split(':')[2].replace('"', ''),
                }

                yield scraped_info
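(For reference, the regexes above are fragile; since window.initialState is assigned a JSON object, the literal could also be sliced out of the script and parsed with the json module. A minimal sketch, assuming the script assigns a single balanced {...} literal; extract_initial_state is a hypothetical helper, not part of the spider above:)

import json

def extract_initial_state(script_text):
    # Hypothetical helper: assumes the script contains
    # `window.initialState = {...}` with one balanced object literal.
    # Naive brace matching; braces inside JSON strings would break it.
    start = script_text.index('{', script_text.index('window.initialState'))
    depth = 0
    for i, ch in enumerate(script_text[start:], start):
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth == 0:
                # Parse just the balanced {...} span as JSON.
                return json.loads(script_text[start:i + 1])
    raise ValueError('no balanced object after window.initialState')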

1 Answer

#1 · Posted 2024-04-27 15:54:08

If you need to extract additional information from a second request, then instead of yielding the data right there, you should yield a Request for that URL and carry the information you have already extracted in the request's meta attribute.

from scrapy import Request

# …

    def parse_file(self, response):
        # …
        for block in blocks:
            # …
            for item in zip(sku, url):
                # …
                # Instead of yielding the partial item, request the product
                # page and carry the item along in meta.
                yield Request(
                    scraped_info['url'],
                    callback=self.parse_additional_information,
                    meta={'scraped_info': scraped_info},
                )

    def parse_additional_information(self, response):
        scraped_info = response.meta['scraped_info']
        # extract the additional information, add it to scraped_info
        yield scraped_info
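On Scrapy 1.7 and later, cb_kwargs is the recommended way to hand data to the next callback instead of meta. A minimal sketch of the same pattern; the .product-description selector is a placeholder, not Dior's actual markup:

    def parse_file(self, response):
        # …
        yield Request(
            scraped_info['url'],
            callback=self.parse_additional_information,
            cb_kwargs={'scraped_info': scraped_info},
        )

    def parse_additional_information(self, response, scraped_info):
        # scraped_info arrives as an ordinary keyword argument.
        # Placeholder selector; adjust to the real product page markup.
        scraped_info['description'] = response.css('.product-description::text').get()
        yield scraped_info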
