How can I get a list of the failed URLs?

Posted on 2024-03-29 09:17:59


I'm new to Scrapy, and it's an amazing crawler framework!

In my project I send more than 90,000 requests, but some of them fail. I set the log level to INFO, and I can only see some statistics but no details.

2012-12-05 21:03:04+0800 [pd_spider] INFO: Dumping spider stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/twisted.internet.error.ConnectionDone': 1,
 'downloader/request_bytes': 46282582,
 'downloader/request_count': 92383,
 'downloader/request_method_count/GET': 92383,
 'downloader/response_bytes': 123766459,
 'downloader/response_count': 92382,
 'downloader/response_status_count/200': 92382,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 12, 5, 13, 3, 4, 836000),
 'item_scraped_count': 46191,
 'request_depth_max': 1,
 'scheduler/memory_enqueued': 92383,
 'start_time': datetime.datetime(2012, 12, 5, 12, 23, 25, 427000)}

Is there any way to get a more detailed report, for example one that shows the failed URLs? Thanks!


3 Answers

Yes, this is possible.

I added a failed_urls list to my spider class and appended URLs to it whenever the response status was 404 (this would need to be extended to cover other error statuses as required).

Then I added a handler that joins the list into a single string and adds it to the spider's stats when the spider is closed.

Based on your comments, it is also possible to track Twisted errors.

# Note: this example targets the old Scrapy API (BaseSpider and
# scrapy.xlib.pydispatch), matching the version this answer was written for.
from scrapy.spider import BaseSpider
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals

class MySpider(BaseSpider):
    # Let 404 responses reach the spider instead of being filtered out.
    handle_httpstatus_list = [404]
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html'
    ]

    def __init__(self, category=None):
        super(MySpider, self).__init__()
        self.failed_urls = []

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(spider, reason):
        # Connected through the dispatcher below, so this runs as a plain
        # function that receives the spider; use spider, not self.
        spider.crawler.stats.set_value('failed_urls', ','.join(spider.failed_urls))

    def process_exception(self, request, exception, spider):
        # Mirrors what Scrapy's DownloaderStats middleware records for
        # exceptions that never produce a response (e.g. Twisted errors).
        ex_class = "%s.%s" % (exception.__class__.__module__, exception.__class__.__name__)
        self.crawler.stats.inc_value('downloader/exception_count', spider=spider)
        self.crawler.stats.inc_value('downloader/exception_type_count/%s' % ex_class, spider=spider)

    dispatcher.connect(handle_spider_closed, signals.spider_closed)

Output (the downloader/exception_count* stats only show up when exceptions are actually thrown; I simulated them by running the spider after turning off my wireless adapter):

2012-12-10 11:15:26+0000 [myspider] INFO: Dumping Scrapy stats:
    {'downloader/exception_count': 15,
     'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 15,
     'downloader/request_bytes': 717,
     'downloader/request_count': 3,
     'downloader/request_method_count/GET': 3,
     'downloader/response_bytes': 15209,
     'downloader/response_count': 3,
     'downloader/response_status_count/200': 1,
     'downloader/response_status_count/404': 2,
     'failed_url_count': 2,
     'failed_urls': 'http://www.example.com/thisurldoesnotexist.html, http://www.example.com/neitherdoesthisone.html',
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 874000),
     'log_count/DEBUG': 9,
     'log_count/ERROR': 2,
     'log_count/INFO': 4,
     'response_received_count': 3,
     'scheduler/dequeued': 3,
     'scheduler/dequeued/memory': 3,
     'scheduler/enqueued': 3,
     'scheduler/enqueued/memory': 3,
     'spider_exceptions/NameError': 2,
     'start_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 560000)}
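
Note that scrapy.xlib.pydispatch has since been removed from Scrapy. A minimal sketch of the same idea using the documented from_crawler / crawler.signals pattern might look like the following (class and handler names are carried over from the snippet above; adjust them to your project):

import scrapy
from scrapy import signals


class MySpider(scrapy.Spider):
    name = "myspider"
    handle_httpstatus_list = [404]
    start_urls = ['http://www.example.com/thisurldoesnotexist.html']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # Connect the close handler through the crawler's signal manager
        # instead of the removed pydispatch module.
        crawler.signals.connect(spider.handle_spider_closed, signals.spider_closed)
        return spider

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.failed_urls = []

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(self, spider, reason):
        self.crawler.stats.set_value('failed_urls', ', '.join(self.failed_urls))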

Here is another example of how to handle and collect 404 errors (crawling the GitHub help pages):

# Note: these imports target an old Scrapy release; in current versions the
# spiders and link extractors live under scrapy.spiders and scrapy.linkextractors.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item, Field


class GitHubLinkItem(Item):
    url = Field()
    referer = Field()
    status = Field()


class GithubHelpSpider(CrawlSpider):
    name = "github_help"
    allowed_domains = ["help.github.com"]
    start_urls = ["https://help.github.com", ]
    handle_httpstatus_list = [404]
    rules = (Rule(SgmlLinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        # Only broken links become items; successful pages are just crawled for more links.
        if response.status == 404:
            item = GitHubLinkItem()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['status'] = response.status

            return item

Just run the spider with scrapy runspider and the -o output.json option, then look at the list of items in the output.json file.
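
For example, assuming the spider above is saved as github_help_spider.py (the filename here is only for illustration):

scrapy runspider github_help_spider.py -o output.json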

The answers from @Talvalin and @alecxe helped me a great deal, but they do not seem to capture downloader events that do not generate a response object (for instance, twisted.internet.error.TimeoutError and twisted.web.http.PotentialDataLoss). These errors show up in the stats dump at the end of the run, but without any meta information.

As I found out here, the errors are tracked by the stats.py middleware and captured in the process_exception method of the DownloaderStats class, specifically in the ex_class variable, which increments a counter for each error type as needed and then dumps the counts at the end of the run.

To match these errors with the information from the corresponding request object, you can add a unique id to each request (via request.meta) and then pull it into the process_exception method of stats.py:

self.stats.set_value('downloader/my_errs/{0}'.format(request.meta), ex_class)

This generates a unique string for each downloader-based error that is not accompanied by a response. You can then save the modified stats.py as something else (e.g. my_stats.py), add it to the downloader middlewares (with the right priority), and disable the stock stats.py:

DOWNLOADER_MIDDLEWARES = {
    'myproject.my_stats.MyDownloaderStats': 850,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': None,
}
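
For reference, here is a minimal sketch of what my_stats.py could look like, written as a subclass of the stock middleware rather than a full copy of stats.py, and assuming a hypothetical request_id key that your spider puts into request.meta when it builds each request:

# my_stats.py -- a sketch, not the original poster's exact file.
# Assumes a recent Scrapy where the stock middleware lives at
# scrapy.downloadermiddlewares.stats.DownloaderStats.
from scrapy.downloadermiddlewares.stats import DownloaderStats


class MyDownloaderStats(DownloaderStats):
    def process_exception(self, request, exception, spider):
        ex_class = "%s.%s" % (exception.__class__.__module__,
                              exception.__class__.__name__)
        # 'request_id' is a hypothetical meta key, e.g. set in the spider with:
        #     yield scrapy.Request(url, meta={'request_id': '0/14'}, callback=self.parse)
        request_id = request.meta.get('request_id', request.url)
        self.stats.set_value('downloader/my_errs/{0}'.format(request_id), ex_class)
        # Keep the stock exception counters as well.
        return super(MyDownloaderStats, self).process_exception(request, exception, spider)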

The output at the end of the run then looks like this (here the meta info maps each request URL to a group id and a member id separated by a slash, e.g. '0/14'):

{'downloader/exception_count': 3,
 'downloader/exception_type_count/twisted.web.http.PotentialDataLoss': 3,
 'downloader/my_errs/0/1': 'twisted.web.http.PotentialDataLoss',
 'downloader/my_errs/0/38': 'twisted.web.http.PotentialDataLoss',
 'downloader/my_errs/0/86': 'twisted.web.http.PotentialDataLoss',
 'downloader/request_bytes': 47583,
 'downloader/request_count': 133,
 'downloader/request_method_count/GET': 133,
 'downloader/response_bytes': 3416996,
 'downloader/response_count': 130,
 'downloader/response_status_count/200': 95,
 'downloader/response_status_count/301': 24,
 'downloader/response_status_count/302': 8,
 'downloader/response_status_count/500': 3,
 'finish_reason': 'finished'....}

This answer deals with non-downloader-based errors.
