Scrapy FormRequest sends a GET request when the method is POST

Published 2024-06-16 10:04:45


This is the page I want to crawl.

The data on the page comes from this URL.

Here is my spider code. I have checked the headers and form data at least five times, and I believe they are correct. The problem is that Scrapy keeps sending a GET request to the start URL, even though I overrode the default behavior of the parse method.

import json

import scrapy
from scrapy.spiders import CrawlSpider


class MySpider(CrawlSpider):

    name = 'myspider'

    start_urls = [
        'https://277kmabdt6-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%20(lite)%203.27.1%3BJS%20Helper%202.26.0%3Bvue-instantsearch%201.7.0&x-algolia-application-id=277KMABDT6&x-algolia-api-key=bf8b92303c2418c9aed3c2f29f6cbdab',
    ]

    formdata = {
        'requests': [{'indexName': 'listings',
                      'params': 'query=&hitsPerPage=24&page=0&highlightPreTag=__ais-highlight__&highlightPostTag=__%2Fais-highlight__&filters=announce_type%3Aproperty-announces%20AND%20language_code%3Apt%20AND%20listing_id%3A%205&facets=%5B%22announce_type%22%5D&tagFilters='}]
    }
    headers = {
        'accept': 'application/json',
        'content-type': 'application/x-www-form-urlencoded',
        'Origin': 'https://www.flat.com.br',
        'Referer': 'https://www.flat.com.br/search?query=',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
    }


    def parse(self, response):
        for url in self.start_urls:
            yield scrapy.FormRequest(
                url=url,
                method='POST',
                headers=self.headers,
                formdata=self.formdata,
                callback=self.parse_page,
            )

    def parse_page(self, response):
        print(json.loads(response.text))

This is what I get when I run the spider:

My question is: why is Scrapy sending a GET request to the URL, and what am I missing? Could my request be failing for some other reason?

2019-07-01 11:45:58 [scrapy] DEBUG: Crawled (400) <GET https://277kmabdt6-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%20(lite)%203.27.1%3BJS%20Helper%202.26.0%3Bvue-instantsearch%201.7.0&x-algolia-application-id=277KMABDT6&x-algolia-api-key=bf8b92303c2418c9aed3c2f29f6cbdab> (referer: None)
2019-07-01 11:45:58 [scrapy] DEBUG: Ignoring response <400 https://277kmabdt6-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%20(lite)%203.27.1%3BJS%20Helper%202.26.0%3Bvue-instantsearch%201.7.0&x-algolia-application-id=277KMABDT6&x-algolia-api-key=bf8b92303c2418c9aed3c2f29f6cbdab>: HTTP status code is not handled or not allowed

3 Answers

I think you will only get a valid response when the payload is sent as body=json.dumps(self.formdata) rather than formdata=self.formdata, because the API expects JSON. The relevant part would look like this:

def start_requests(self):
    for url in self.start_urls:
        yield scrapy.FormRequest(
            url=url,
            method='POST',
            headers=self.headers,
            body=json.dumps(self.formdata),
            callback=self.parse_page,
        )
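The difference between the two payloads can be seen with the standard library alone: URL-encoding a form (roughly what formdata= produces) coerces each value to a string, so the nested list of dicts does not survive, while json.dumps preserves the structure the API expects. A minimal sketch:

```python
import json
from urllib.parse import urlencode

formdata = {'requests': [{'indexName': 'listings', 'params': 'query='}]}

# formdata= style: each value is flattened to a string before being
# URL-encoded, so the nested structure is lost (it ends up as the
# percent-encoded Python repr of the list, not JSON).
form_body = urlencode({k: str(v) for k, v in formdata.items()})

# body=json.dumps(...) style: a proper JSON document the API can parse.
json_body = json.dumps(formdata)

print(form_body)
print(json_body)
```

Round-tripping json_body through json.loads returns the original dict, while form_body cannot be parsed back into the nested structure.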

By default, the parse() method receives the responses to GET requests for the URLs in start_urls. In this case, however, the URL you put in start_urls never reaches parse(), because the GET request comes back with a 400 error (or some other failure). So if you want to keep using parse() the way you tried, make sure the URLs in start_urls return a successful status. In other words, start from a different URL that returns a 200, and then issue the POST request to the real URL from parse():

import json
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    # a different URL that responds with a 200

    start_urls = ['https://stackoverflow.com/questions/tagged/web-scraping']
    url = 'https://277kmabdt6-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20vanilla%20JavaScript%20(lite)%203.27.1%3BJS%20Helper%202.26.0%3Bvue-instantsearch%201.7.0&x-algolia-application-id=277KMABDT6&x-algolia-api-key=bf8b92303c2418c9aed3c2f29f6cbdab'

    formdata = {
        'requests': [{'indexName': 'listings',
        'params': 'query=&hitsPerPage=24&page=0&highlightPreTag=__ais-highlight__&highlightPostTag=__%2Fais-highlight__&filters=announce_type%3Aproperty-announces%20AND%20language_code%3Apt%20AND%20listing_id%3A%205&facets=%5B%22announce_type%22%5D&tagFilters='}]
    }
    headers = {
        'accept': 'application/json',
        'content-type': 'application/x-www-form-urlencoded',
        'Origin': 'https://www.flat.com.br',
        'Referer': 'https://www.flat.com.br/search?query=',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
    }

    def parse(self, response):
        yield scrapy.Request(
            url=self.url,
            method='POST',
            headers=self.headers,
            body=json.dumps(self.formdata),
            callback=self.parse_page,
        )

    def parse_page(self, response):
        print(json.loads(response.text))

First, rename your parse method to:

def start_requests(self):

When sending a form, you should use a scrapy.FormRequest instead. You only need a plain Request with method='POST' if you want to send a raw body. In this case it looks like form data, so do this:

    formdata = {
        'requests': [{'indexName': 'listings',
        'params': 'query=&hitsPerPage=24&page=0&highlightPreTag=__ais-highlight__&highlightPostTag=__%2Fais-highlight__&filters=announce_type%3Aproperty-announces%20AND%20language_code%3Apt%20AND%20listing_id%3A%205&facets=%5B%22announce_type%22%5D&tagFilters='}]
    }
    headers = {
        'accept': 'application/json',
        'content-type': 'application/x-www-form-urlencoded',
        'Origin': 'https://www.flat.com.br',
        'Referer': 'https://www.flat.com.br/search?query=',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
    }

    def start_requests(self):
        for link in self.start_urls:
            yield scrapy.FormRequest(
                link,
                headers=self.headers,
                formdata=self.formdata,
                callback=self.parse_page,
            )

You can also use other helpers, such as FormRequest.from_response, to do this. If you wanted to send a raw JSON string instead, you would convert the dictionary to a string with json.dumps and set the method to POST, as shown in the other answer. FormRequest sends a POST request automatically, and if you use the from_response feature it will intelligently pre-fill fields from the form on the page.

Reference: https://docs.scrapy.org/en/latest/topics/request-response.html#request-subclasses

You need to rename the parse method to start_requests, because by default Scrapy sends a GET request to every URL in self.start_urls:

def start_requests(self):
    for url in self.start_urls:
        yield scrapy.FormRequest(
            url=url,
            method='POST',
            headers=self.headers,
            formdata=self.formdata,
            callback=self.parse_page,
        )
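For reference, the default behavior being replaced is roughly the following (a simplified pure-Python mimic for illustration, not Scrapy's actual source): the base spider turns every entry in start_urls into a plain GET request, and parse() is merely the default callback for those responses.

```python
class MiniSpider:
    """Simplified illustration of Scrapy's default request scheduling."""
    start_urls = ['https://example.com/a', 'https://example.com/b']

    def start_requests(self):
        # Roughly what the base Spider does when you do NOT override
        # start_requests: one plain GET per start URL.
        for url in self.start_urls:
            yield {'url': url, 'method': 'GET', 'callback': self.parse}

    def parse(self, response):
        # parse() is only ever a *callback* -- it cannot change how the
        # initial requests are made, which is why overriding it alone
        # still produced GET requests.
        pass

requests = list(MiniSpider().start_requests())
print([r['method'] for r in requests])
```

Overriding start_requests replaces the generator itself, which is the only place the method of the initial requests can be changed.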
