Scrapy Splash HAR data

Published 2024-06-02 08:23:22


In general, I understand how to parse HTML with Scrapy and XPath. However, I don't know how to retrieve the HAR data.

import scrapy
from scrapy_splash import SplashRequest

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/js']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url=url,
                                callback=self.parse,
                                endpoint='render.html')


    def parse(self, response):
        quotes = response.xpath('//*[@class="quote"]')
        for quote in quotes:
            yield { 'author': quote.xpath('.//*[@class="author"]/text()').extract_first(),
                    'quote': quote.xpath('.//*[@class="text"]/text()').extract_first()
                    }

        script = """function main(splash)
                assert(splash:go(splash.args.url))
                splash:wait(0.3)
                button = splash:select("li[class=next] a")
                splash:set_viewport_full()
                splash:wait(0.1)
                button:mouse_click()
                splash:wait(1)
                return {url = splash:url(),
                        html = splash:html(),
                        har = splash:har()}
            end """
        yield SplashRequest(url=response.url,
                            callback=self.parse,
                            endpoint='execute',
                            args={'lua_source': script})

What would the next statement in the script be to export the HAR data to a file? How can I write all of the network data into a single file? Any insight would be appreciated.
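For context on what the callback receives: when a Lua script on the `execute` endpoint returns a table (as the script above does with `html`, `url`, and `har`), scrapy-splash hands the callback a `SplashJsonResponse` whose `.data` attribute is that table as a Python dict, so the HAR should be available as `response.data['har']`. Writing it out is then plain JSON serialization. A minimal sketch (the `save_har` helper, the sample dict, and the `network.har` filename are illustrative, not part of the original code):

```python
import json

def save_har(har_dict, path='network.har'):
    """Write a HAR dict (e.g. response.data['har'] from splash:har()) to a JSON file."""
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(har_dict, f, indent=2)

# Example with a minimal stand-in for real HAR data; inside the spider you
# would instead call save_har(response.data['har']) in the parse callback.
save_har({'log': {'version': '1.2', 'entries': []}})
```

Appending to one file across pages would need an extra step (e.g. merging the `entries` lists), since each `splash:har()` call returns the HAR for that render only.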

