Scrapy spider closed while it still has jobs to do

Published 2024-03-28 15:37:47


2017-10-17 11:48:18 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-17 11:48:19 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2175351,
 'downloader/request_count': 6472,
 'downloader/request_method_count/GET': 6472,
 'downloader/response_bytes': 494012842,
 'downloader/response_count': 6472,
 'downloader/response_status_count/200': 6419,
 'downloader/response_status_count/301': 53,
 'dupefilter/filtered': 8,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 10, 17, 11, 48, 19, 59301),
 'item_scraped_count': 6417,
 'log_count/DEBUG': 12891,
 'log_count/ERROR': 1,
 'log_count/INFO': 120,
 'memusage/max': 385605632,
 'memusage/startup': 94334976,
 'request_depth_max': 1,
 'response_received_count': 6419,
 'scheduler/dequeued': 6471,
 'scheduler/dequeued/memory': 6471,
 'scheduler/enqueued': 6471,
 'scheduler/enqueued/memory': 6471,
 'spider_exceptions/RuntimeError': 1,
 'start_time': datetime.datetime(2017, 10, 17, 9, 55, 15, 21744)}
2017-10-17 11:48:19 [scrapy.core.engine] INFO: Spider closed (finished)

As you can see, my spider only handled 6472 requests, but it actually has 228336 jobs to do, and it just closed. I am crawling a property website where every property has an ID number. I collected 228336 house IDs and stored them in a file, and this spider is supposed to validate those IDs (that step works).
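For context, the dict the spider iterates over would be built from that ID file before crawling starts. Below is a minimal sketch of what that setup might look like; the file name house_ids.txt, the one-ID-per-line format, and the domain are assumptions for illustration, not details taken from the post.

import scrapy

class HouseSpider(scrapy.Spider):
    name = "house"
    # Placeholder domain; the real property site is not named in the post.
    allowed_domains = ["example.com"]
    start_urls = ["https://example.com/"]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Assumed format: one house ID per line, e.g. 44225621; the IDs are
        # stored as dict keys so parse() can iterate over house_id_dict.
        with open("house_ids.txt") as f:
            self.house_id_dict = {line.strip(): None for line in f if line.strip()}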

The problem is that I have no idea why it shut down before finishing all of the jobs.

def parse(self, response):
    print("number of the id_dict is %s" % len(self.house_id_dict))
    for n in self.house_id_dict.keys():    # house ID, e.g. 44225621
        yield response.follow("/for-sale/details/" + n, callback=self.parse_house)
    print("yield all the links!")

The keys are the house IDs. It doesn't even print the last line... I'm going crazy... Any ideas?
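One detail worth noting in the stats above is 'spider_exceptions/RuntimeError': 1. If anything modifies house_id_dict while parse is still iterating over its live keys view (for example inside parse_house), Python raises "RuntimeError: dictionary changed size during iteration", which kills the parse generator before the remaining requests are yielded and before the final print runs. That is only an assumption based on the stats, not something the log excerpt confirms, but iterating over a snapshot of the keys is a cheap way to rule it out. A hedged variant of the same parse method, belonging to the spider class sketched earlier:

def parse(self, response):
    print("number of the id_dict is %s" % len(self.house_id_dict))
    # list() takes a snapshot of the keys, so later changes to house_id_dict
    # cannot raise "dictionary changed size during iteration" mid-loop.
    for n in list(self.house_id_dict):    # house ID, e.g. 44225621
        # str() guards against integer keys when building the relative URL,
        # which response.follow resolves against the current response URL.
        yield response.follow("/for-sale/details/" + str(n), callback=self.parse_house)
    print("yield all the links!")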

