How do I control the flow of requests in Scrapy while solving a Google reCAPTCHA v2 with the deathbycaptcha service?



Hello :) I'm working with Python and the Scrapy web-crawling framework, scraping a website and using the deathbycaptcha service to solve the captchas I run into on their pages. My download delay is set to 30 seconds, and I only need to crawl a few pages to get some basic information, so I don't put much strain on the site's bandwidth or anything like that; I try to keep the crawl close to what ordinary browsing would look like.

So first, let's go over the issues.

Issue 1 (code): How can I essentially stop new requests from being created while the captcha is being solved, instead of hitting the captcha page over and over? I've tried plenty of different approaches with no luck. I'm still fairly new to Scrapy, so I'm not comfortable editing the downloader middleware or the Scrapy engine code; if that's the only way, then so be it, but I'm hoping there's a reasonably simple, effective solution that lets the captcha do its job without new requests interrupting it.
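
The closest thing I've found so far is pausing Scrapy's engine around the solve. I haven't verified this, and engine.pause()/unpause() are internal methods rather than documented API, but as a method on the spider the shape would be something like this (solve_with_dbc is a made-up name for the deathbycaptcha work shown further down):

# Untested sketch: pause the engine so no new downloads are scheduled
# while the captcha is being solved, then resume afterwards.
# self.crawler is available on any running spider; pause()/unpause()
# live on scrapy.core.engine.ExecutionEngine.
def solve_captcha_paused(self, response):
    self.crawler.engine.pause()      # stop scheduling new requests
    try:
        return self.solve_with_dbc(response)  # hypothetical blocking helper
    finally:
        self.crawler.engine.unpause()  # always resume, even on errors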

Issue 2 (code): How do I fix this Timer call? I think it's somewhat tied to the first issue. If the captcha times out without being solved, the captchaIsRunning boolean is never reset, and the solver is never allowed to start again. The Timer was one of my attempts at fixing the first issue, but... I get an error. I'm not sure whether it's related to the fact that my imports pull Timer from both threading and timeit, but I didn't think that would make a big difference. Can someone point me in the right direction to fix the Timer call?
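
For reference, the traceback further down lands in timeit.py, which suggests the two imports really do collide: both threading and timeit export a name called Timer, the later import wins, and timeit.Timer(stmt, ...) raises "ValueError: stmt is neither a string nor callable" when handed a float. A minimal standalone version of what I'm trying to do, keeping only the threading import:

from threading import Timer  # one-shot timer: Timer(interval, function)

captchaIsRunning = True

def reset_captcha_flag():
    # runs once, 240 seconds after start(), on a background thread
    global captchaIsRunning
    captchaIsRunning = False
    print("captcha flag reset")

t = Timer(240.0, reset_captcha_flag)
t.start()
# t.cancel() would stop the timer early, e.g. once the captcha is solved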

Like I said, the deathbycaptcha API works fine when it gets the chance; it's those extra requests that are really getting in my way, and I haven't found a solution that addresses this. Again, I'm not a Scrapy expert yet, so some of this is well outside my comfort zone; nudge me in the right direction, just not so hard that I end up breaking everything xD. Thanks for your help, it's much appreciated! Sorry for the long question.

Anyway, the page lets you look up some results, and after roughly 40-60 pages it redirects to a captcha page with a reCAPTCHA v2. The deathbycaptcha service has an API for solving reCAPTCHA v2, but unfortunately their solve times can sometimes run past a couple of minutes, which is very disappointing, but it happens. So naturally I raised my DOWNLOAD_TIMEOUT setting to 240 seconds so there's enough time to solve the captcha and continue scraping without being redirected again. My Scrapy settings look like this:

CONCURRENT_REQUESTS = 1
DEPTH_LIMIT = 1
DOWNLOAD_DELAY = 30
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1
DOWNLOAD_TIMEOUT = 240
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 10
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
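
These are project-wide settings; as far as I know the same values could also be set per spider via Scrapy's standard custom_settings attribute, like so:

class scrapername(scrapy.Spider):
    name = "scrapername"
    # per-spider overrides of the same settings
    custom_settings = {
        "CONCURRENT_REQUESTS": 1,
        "DOWNLOAD_DELAY": 30,
        "DOWNLOAD_TIMEOUT": 240,
    }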

Then obviously there's the rest, but I think these are the ones most relevant to my problem. I have one extension enabled, and the middleware section has a few extra entries because I'm also using Docker and scrapy-splash in this project.


I don't think any of that should have much effect on the captcha or the downloader middleware... but here's some code from my scraper:

Python:

import sys
import os
sys.path.append(r'F:\Documents\ScrapyDirectory\scrapername\scrapername\spiders')
import deathbycaptcha
import json
import scrapy
import requests
from datetime import datetime
import math
import urllib
import time
from scrapy_splash import SplashRequest
from threading import Timer
from timeit import Timer  # NOTE: this shadows threading.Timer; timeit.Timer
                          # expects a statement string or callable as its first
                          # argument, which is what triggers the ValueError below

class scrapername(scrapy.Spider):
    name = "scrapername"
    start_urls = []

    global scrapeUrlList
    global charCompStorage
    global captchaIsRunning

    r = requests.get('http://example.com/examplejsonfeed.php')

    myObject = json.loads(r.text)

    #print("Loading names...")
    for o in myObject['objects']:
        #a huge function that builds a bunch of objects and appends links
        #created from them to the scrapeUrlList list (elided here)
        pass

    print(len(scrapeUrlList))
    for url in scrapeUrlList:
        start_urls.append(url[1])
        #add all those urls that just got created to the start_urls list


    link_collection = []

    def resetCaptchaInformation(self):
        # needs `self`: it is called as a bound method by the Timer below
        global captchaIsRunning
        if captchaIsRunning:
            captchaIsRunning = False

    def afterCaptchaSubmit(self, response):
        global captchaIsRunning
        print("Captcha submitted: " + response.request.url)
        captchaIsRunning = False

    def parse(self, response):
        global captchaIsRunning
        self.logger.info("got response %s for %r" % (response.status, response.url))

        if "InternalCaptcha" in response.request.url:
        #checks for captcha in the url and if it's there it starts running the captcha solver API
            if not captchaIsRunning:
            #I have this statement here as a deterrent to prevent the captcha solver from starting again and again and 
            #again with every new request (which it does)  *ISSUE 1*
                if "captchasubmit" in response.request.url:
                    print("Found captcha submit in url")
                else:
                    print("Internal Captcha is activated")
                    captchaIsRunning = True
                    t = Timer(240.0, self.resetCaptchaInformation)
                    #this is where I've been having major issues, and I'm not sure why
                    #*ISSUE 2*
                    t.start()

                    username = "username"
                    password = "password"

                    print("Set username and password")

                    Captcha_dict = {
                    'googlekey': '6LcMUhgUAAAAAPn2MfvqN9KYxj7KVut-oCG2oCoK',
                    'pageurl': response.request.url}

                    print("Created catpcha dict")

                    json_Captcha = json.dumps(Captcha_dict)

                    print("json.dumps on captcha dict:")
                    print(json_Captcha)

                    client = deathbycaptcha.SocketClient(username, password)

                    print("Set up client with deathbycaptcha socket client")

                    try:
                        print("Trying to solve captcha")
                        balance = client.get_balance()

                        print("Remaining Balance: " + str(balance))

                        # Put your CAPTCHA type and Json payload here:
                        captcha = client.decode(type=4,token_params=json_Captcha)

                        if captcha:
                            # The CAPTCHA was solved; captcha["captcha"] holds its
                            # numeric ID, and captcha["text"] the response token
                            print("CAPTCHA %s solved: %s" % (captcha["captcha"], captcha["text"]))

                            data = {
                                'g-recaptcha-response':captcha["text"],
                            }

                            try:
                                dest = response.xpath("/html/body/form/@action").extract_first()
                                print("Form URL: " + dest)
                                submitURL = "https://exampleaddress.com" + dest
                                yield scrapy.FormRequest(url=submitURL, formdata=data, callback=self.afterCaptchaSubmit, dont_filter = True)

                                print("Yielded form request")

                                if '':  # placeholder from the DBC sample ("check if the CAPTCHA was incorrectly solved"); an empty string is always falsy, so this never runs
                                    client.report(captcha["captcha"])
                            except TypeError:
                                sys.exit()
                    except deathbycaptcha.AccessDeniedException:
                        # Access to DBC API denied, check your credentials and/or balance
                        print("error: Access to DBC API denied, check your credentials and/or balance")
            else:
                pass
        else:
            print("no Captcha")
            #this will run if no captcha is on the page that the redirect landed on
            #and basically parses all the information on the page

Sorry for all the code, and thanks for your patience reading through it. If you have questions about why anything is the way it is, just ask and I'll explain. So: the captcha does get solved; that's not the problem. While the scraper runs there are a lot of requests in flight: it hits a 302 redirect, then gets a 200 response, crawls the page, detects the captcha, and starts solving it. Then Scrapy sends another request, gets another 302 redirect to the captcha page and another 200 response, detects the captcha, and starts solving it again. It kicks off the API over and over and burns through my tokens; the if not captchaIsRunning: check is there to stop that from happening. Here's my current Scrapy log output from when it hits the captcha; keep in mind that up to this point everything is fine and all my parse logging shows up. (A rough sketch of the middleware-level approach I'm imagining follows after the log.)

Scrapy log:

2018-07-19 14:10:35 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv> from <GET https://www.exampleaddress.com/results?name=Thomas%20Garrett&citystatezip=Las%20Vegas,%20Nv>
2018-07-19 14:10:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv> (referer: None)
2018-07-19 14:10:49 [scrapername] INFO: got response 200 for 'https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv'
Internal Captcha is activated
2018-07-19 14:10:49 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv> (referer: None)
Traceback (most recent call last):
  File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
    for el in result:
  File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "F:\Documents\ScrapyDirectory\scraperName\scraperName\spiders\scraperName- Copy.py", line 232, in parse
    t = Timer(240.0, self.resetCaptchaInformation)
  File "F:\Program Files (x86)\Anaconda3\lib\timeit.py", line 130, in __init__
    raise ValueError("stmt is neither a string nor callable")
ValueError: stmt is neither a string nor callable
2018-07-19 14:10:53 [scrapy.extensions.logstats] INFO: Crawled 63 pages (at 2 pages/min), scraped 13 items (at 0 items/min)
2018-07-19 14:11:02 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dSamuel%2520Van%2520Cleave%26citystatezip%3dLas%2520Vegas%2c%2520Nv> from <GET https://www.exampleaddress.com/results?name=Samuel%20Van%20Cleave&citystatezip=Las%20Vegas,%20Nv>
2018-07-19 14:11:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dSamuel%2520Van%2520Cleave%26citystatezip%3dLas%2520Vegas%2c%2520Nv> (referer: None)
2018-07-19 14:11:13 [scrapername] INFO: got response 200 for 'https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dSamuel%2520Van%2520Cleave%26citystatezip%3dLas%2520Vegas%2c%2520Nv'
#and then an endless supply of 302 redirects and 200 responses for the crawl
#nothing happens: because the Timer failed, the captcha is never solved?
#I'm not sure what is going wrong with it, hence the issues I'm having
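
What I think I'm really after, unless someone has a better idea, is handling this in one place at the downloader-middleware level, so the spider never even sees the captcha page twice. A rough, untested sketch of the shape I mean; solve_with_dbc() is a made-up stand-in for the deathbycaptcha calls in my spider above:

import scrapy

def solve_with_dbc(response):
    # hypothetical stand-in for the deathbycaptcha client.decode() call
    # shown in the spider above; should return the g-recaptcha-response token
    raise NotImplementedError

class CaptchaMiddleware:
    # untested sketch of a downloader middleware that intercepts the
    # captcha page before it ever reaches the spider

    def process_response(self, request, response, spider):
        if "InternalCaptcha" not in response.url:
            return response  # normal page: pass through untouched

        # Blocking here is crude (it stalls the whole reactor), but in this
        # case that is the point: nothing else gets scheduled meanwhile.
        token = solve_with_dbc(response)

        dest = response.xpath("/html/body/form/@action").extract_first()
        # Returning a Request from process_response makes Scrapy schedule
        # it instead of handing the captcha page to the spider.
        return scrapy.FormRequest(
            "https://exampleaddress.com" + dest,
            formdata={"g-recaptcha-response": token},
            dont_filter=True,
        )

This would still need to be enabled in DOWNLOADER_MIDDLEWARES, and I'd have to check how it plays with the scrapy-splash middleware, so treat it as pseudocode for the shape rather than a working solution.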
