How to get the HTML of a dynamic page with Scrapy and Splash?



I want to crawl the following website:

https://dimsum.eu-gb.containers.appdomain.cloud/

However, the page source is just a script:

<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width,initial-scale=1"><link href="https://fonts.googleapis.com/css?family=IBM+Plex+Sans" rel="stylesheet"><link rel="icon" type="image/png" href="/favicon.png"><title>IBM Science Summarizer</title><style>#teconsent {
        bottom: 120px !important;
      }</style><link href="/css/article.91dc9a3f.css" rel="prefetch"><link href="/css/faq.415c1d74.css" rel="prefetch"><link href="/css/search.4bc6e428.css" rel="prefetch"><link href="/js/article.8fdbbb61.js" rel="prefetch"><link href="/js/faq.6fba764e.js" rel="prefetch"><link href="/js/search.cdc7df37.js" rel="prefetch"><link href="/css/app.de6343fa.css" rel="preload" as="style"><link href="/css/chunk-vendors.9096ae02.css" rel="preload" as="style"><link href="/js/app.d95ff0b2.js" rel="preload" as="script"><link href="/js/chunk-vendors.29fc9656.js" rel="preload" as="script"><link href="/css/chunk-vendors.9096ae02.css" rel="stylesheet"><link href="/css/app.de6343fa.css" rel="stylesheet"></head><body><noscript><strong>We're sorry but Scholar doesn't work properly without JavaScript enabled. Please enable it to continue.</strong></noscript><div id="app"></div><script>// window.webpackHotUpdate is present in local development mode
      if (!window.webpackHotUpdate) {
        var head = document.getElementsByTagName('head')[0]
        var script = document.createElement('script')

        script.type = 'text/javascript'
        script.src = 'https://www.ibm.com/common/stats/ida_stats.js'

        head.appendChild(script)
      }</script><script src="/js/chunk-vendors.29fc9656.js"></script><script src="/js/app.d95ff0b2.js"></script></body></html>

First, I wanted to search through the form on the website, but Scrapy couldn't find the form. So I used scrapy-splash, but it still can't find any form:

class IBMSSSpider(scrapy.Spider):
    """ A spider to collect articles from IBM SS website """
    name = 'ibmss'
    start_urls = [
        'https://dimsum.eu-gb.containers.appdomain.cloud/' # search?query=reading%20comprehension',
#   'http://google.com'
    ]
  
    def start_requests(self):
        print("start_urls:", self.start_urls)
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                args={
                # optional; parameters passed to Splash HTTP API
                'wait': 0.5,

                'url': url,
                # 'http_method' is set to 'POST' for POST requests
                # 'body' is set to request body for POST requests
                },
                #endpoint='render.json', # optional; default is render.html
                #splash_url='<url>',     # optional; overrides SPLASH_URL
                #slot_policy=scrapy_splash.SlotPolicy.PER_DOMAIN,  # optional
                ) 
 
    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'__BVID__377': 'reading comprehension'},
            formxpath='//*[@id="app"]/div[2]/div[1]/div/div[2]/div/div[1]/form',
            callback=self.parse_results
        )
    def parse_results(self, response):
        RESULT_LIST = '//*[@id="app"]/div[2]/div[2]/div/div/div/div'
        # RESULT_LIST = '//*div[contains(@class, "search-results")]//div[contains(@class, "result")]'
        result_listing = response.xpath(RESULT_LIST)

        pub_item = PaperItem(pub_type='archive')

        for result in result_listing:
            pub_url = response.urljoin(result.xpath('.//div[contains(@class, "result-title")]/a/@href').extract_first())
  
            print(pub_url)            
            yield scrapy.Request(pub_url, callback=self.parse_paper_details,
                                 meta={'result':  pub_item})

Given what the site's source looks like, please advise how I can scrape it.


1 Answer

So I couldn't do this with Scrapy.

Scraping dynamic content in Scrapy

I'm not sure exactly which information you need from these articles, but here are a few things to consider when scraping websites driven by dynamic content:

  1. How much of the website is driven by JavaScript?
  2. Is there an API whose HTTP requests can be re-engineered, instead of automating browser activity?
     2.1) If so, do I need headers, parameters and cookies to mimic that request?
  3. Use Splash to pre-render the page.
  4. As a last resort, use Scrapy together with Selenium.
  5. Use the selenium module directly in your script.

The reason for going in this order is that each one is a potential solution, but each step down the list makes the scraper more likely to be brittle, and progressively slower and less efficient.

The most efficient solution is to look for an API.

This website

Looking at the website, you can see it is driven entirely by JavaScript, which increases the chance that it makes AJAX requests to an API endpoint. Using Chrome dev tools, you can see there are 5 requests to the API https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search

I often work an API endpoint out first with the requests package. Doing so, I found that it really only needs the headers and the query. I assumed you wanted to search for "reading comprehension", so I used that as the example.

I do a "Copy as cURL" of the request in the network tools and paste it into curl.trillworks.com, which converts the headers and so on into a nice format.

For some reason it is absolutely necessary to pass null in the data string to this API. However, there is no null equivalent you can put in a dictionary when passing it in Python, which is how parameters can be passed in Scrapy (using meta or cb_kwargs). I'd be interested to see others get this working in Scrapy; I may be missing something about passing parameters in the request.

Code example

import requests

headers = {
    'Connection': 'keep-alive',
    'Accept': 'application/json',
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Mobile Safari/537.36',
    'Content-Type': 'application/json',
    'Origin': 'https://dimsum.eu-gb.containers.appdomain.cloud',
    'Sec-Fetch-Site': 'same-origin',
    'Sec-Fetch-Mode': 'cors',
    'Sec-Fetch-Dest': 'empty',
    'Referer': 'https://dimsum.eu-gb.containers.appdomain.cloud/',
    'Accept-Language': 'en-US,en;q=0.9',
}

data = '{"query":"reading comprehension","filters":{},"page":0,"size":10,"sort":null,"sessionInfo":""}'

response = requests.post('https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search', headers=headers, data=data)

articles = response.json()['searchResults']['results']
for a in articles:
    for b in a['sections']:
        title = b['title']
        print(title)
        print('     ')
        for c in b['fragments']:
            text = c['text']
            print(text)

Here we loop over every search-result article on that page. Each article has sections, and each section has a title, which we loop over and print; within each section there are fragments containing all of the text on that page, which we then print. Again, I don't know what you want to do with this information, so I can't go into more detail, but you should be able to store whatever text you need from this.
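
If you need to keep the text rather than just print it, here is a minimal sketch that collects the same fields into a list of dictionaries. It assumes response is the requests.Response from the POST above; the keys 'sections', 'title', 'fragments' and 'text' come from the JSON, everything else is purely illustrative:

# Collect the search results into a simple structure instead of printing them
collected = []
for article in response.json()['searchResults']['results']:
    for section in article['sections']:
        collected.append({
            'title': section['title'],
            # join all fragments of the section into one block of text
            'text': ' '.join(fragment['text'] for fragment in section['fragments']),
        })
print(len(collected), 'sections collected')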

I'd urge you to look at the JSON object yourself; if you have other data needs, you can simply search the JSON for them. If you want the links to the arXiv PDFs, they are in there too.
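
For example, to explore the structure of a single result, you can pretty-print it (only the 'searchResults' and 'results' keys already used above are assumed here):

import json

# Pretty-print the first result so every available field is visible
print(json.dumps(response.json()['searchResults']['results'][0], indent=2))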

Update from the comments

Here is a code example of what you would need to implement to get this working with Scrapy:

import scrapy
import json

class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['dimsum.eu-gb.containers.appdomain.cloud']

    headers = {
        'Connection': 'keep-alive',
        'Accept': 'application/json',
        'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Mobile Safari/537.36',
        'Content-Type': 'application/json',
        'Origin': 'https://dimsum.eu-gb.containers.appdomain.cloud',
        'Sec-Fetch-Site': 'same-origin',
        'Sec-Fetch-Mode': 'cors',
        'Sec-Fetch-Dest': 'empty',
        'Referer': 'https://dimsum.eu-gb.containers.appdomain.cloud/',
        'Accept-Language': 'en-US,en;q=0.9',
    }
    cookies = {
        'dimsum_user': 'dce0087b-b1ed-4ceb-861a-6dcdc1af500f',
        'JSESSIONID': 'node01i38ra486o3eocapxvtryared1263001.node0',
    }
    data = {"query":"reading comprehension","filters":{},"page":0,"size":10,"sort":null,"sessionInfo":""}

    def start_requests(self):
        api_url = 'https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search'
        yield scrapy.Request(url=api_url,method='POST',headers=self.headers,cb_kwargs={'data':self.data},cookies=self.cookies, callback=self.parse)
    
    def parse(self, response):
        articles = response.json()['searchResults']['results']
        for a in articles:
            for b in a['sections']:
                title = b['title']
                print(title)
                print('     ')
                for c in b['fragments']:
                    text = c['text']
                    print(text)
        

Problem

null is not a keyword in Python, so it cannot be used inside a dictionary, and unfortunately "sort":null has to be part of the parameters I put in the data variable. I also tried converting it to a JSON string, but without success.

The error you get is:

data = {"query":"reading comprehension","filters{},"page":0,"size":10,"sort":null,"sessionInfo":""}

NameError: name 'null' is not defined
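
For reference, one possible workaround is to keep None in the Python dict and serialize the payload yourself, since json.dumps writes None out as null, then send the result as the raw request body instead of via cb_kwargs. This is only a sketch of how it could look inside the TestSpider above, not something verified against this API (as noted, the JSON-string conversion attempt did not succeed for me):

    def start_requests(self):
        api_url = 'https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search'
        # None serializes to null: json.dumps(payload) yields '... "sort": null ...'
        payload = {"query": "reading comprehension", "filters": {}, "page": 0,
                   "size": 10, "sort": None, "sessionInfo": ""}
        yield scrapy.Request(url=api_url, method='POST', headers=self.headers,
                             cookies=self.cookies, body=json.dumps(payload),
                             callback=self.parse)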

Basic Scrapy log

2020-07-30 13:10:10 [scrapy.core.engine] INFO: Spider opened
2020-07-30 13:10:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-30 13:10:10 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-30 13:10:10 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <POST https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search> (failed 1 times): 500 Internal Server Error
2020-07-30 13:10:10 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <POST https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search> (failed 2 times): 500 Internal Server Error
2020-07-30 13:10:10 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <POST https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search> (failed 3 times): 500 Internal Server Error
2020-07-30 13:10:10 [scrapy.core.engine] DEBUG: Crawled (500) <POST https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search> (referer: https://dimsum.eu-gb.containers.appdomain.cloud/)
2020-07-30 13:10:10 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <500 https://dimsum.eu-gb.containers.appdomain.cloud/api/scholar/search>: HTTP status code is not handled or not allowed
2020-07-30 13:10:10 [scrapy.core.engine] INFO: Closing spider (finished)

Open to thoughts and suggestions on this.
