Scrapy - a good way to split exported files?
I want to split the output of JsonLinesItemExporter into multiple files, starting a new file every time the spider has processed a given number of items (MAX_ITEMS). The code below is a working solution, but I would appreciate some advice. I'm worried it could break at some point, because I explicitly call spider_opened() and spider_closed() myself to close the old file and open a new one. Any ideas or best practices are welcome :)
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/topics/item-pipeline.html
from scrapy import signals
from scrapy import log
from scrapy.exceptions import DropItem
from scrapy.contrib.exporter import JsonLinesItemExporter
MAX_ITEMS = 10000


class DmozPipeline(object):
    def process_item(self, item, spider):
        return item


class JsonLinePipeline(object):
    def __init__(self):
        self.files = {}
        self.ids_seen = set()
        self.fileid = 0
        self.filetype = ".json"

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        # Open a new output file and attach a fresh exporter to it
        file = open("items-" + str(self.fileid) + self.filetype, 'w+b')
        self.files[spider] = file
        self.exporter = JsonLinesItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        # Finish the current export and close the file
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()
    def process_item(self, item, spider):
        # Roll over to a new output file every MAX_ITEMS unique items
        i = len(self.ids_seen)
        if i > 0 and i % MAX_ITEMS == 0:
            self.spider_closed(spider)
            self.fileid = self.fileid + 1
            self.spider_opened(spider)
        if item['link'][0] in self.ids_seen:
            raise DropItem("Duplicate site found: %s" % item)
        else:
            self.ids_seen.add(item['link'][0])
            self.exporter.export_item(item)
            return item
1 Answer
This is an old question, but for the sake of completeness:
There is a setting called FEED_EXPORT_BATCH_ITEM_COUNT.
In your settings.py file (or similar), you can set it like this:
FEED_EXPORT_BATCH_ITEM_COUNT = 100
Then your crawl command should look like:
scrapy crawl spidername -o "dirname/%(batch_id)d-filename%(batch_time)s.json"
See the documentation for details.
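The same batching can also be configured per feed in settings.py rather than on the command line. A minimal sketch, assuming a Scrapy version recent enough (2.3+) that the FEEDS setting accepts a batch_item_count option; the output path is just an illustrative placeholder:

# settings.py -- per-feed batching instead of the global setting / -o flag
FEEDS = {
    "dirname/%(batch_id)d-filename%(batch_time)s.json": {
        "format": "jsonlines",
        "batch_item_count": 100,  # start a new output file every 100 items
    },
}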