Pipelines not included when running Scrapy from a script
I'm running Scrapy from a script, but all it does is activate the spider; the items never go through my item pipelines. I've read this link, but it doesn't mention how to include pipelines.
My setup:
Scraper/
    scrapy.cfg
    ScrapyScript.py
    Scraper/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            my_spider.py
My script:
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from Scraper.spiders.my_spider import MySpiderSpider
spider = MySpiderSpider(domain='myDomain.com')
settings = get_project_settings
crawler = Crawler(Settings())
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Reactor activated...')
reactor.run()
log.msg('Reactor stopped.')
My pipelines:
from scrapy.exceptions import DropItem
from scrapy import log
import sqlite3


class ImageCheckPipeline(object):

    def process_item(self, item, spider):
        if item['image']:
            log.msg("Item added successfully.")
            return item
        else:
            del item
            raise DropItem("Non-image thumbnail found: ")


class StoreImage(object):

    def __init__(self):
        self.db = sqlite3.connect('images')
        self.cursor = self.db.cursor()
        try:
            self.cursor.execute('''
                CREATE TABLE IMAGES(IMAGE BLOB, TITLE TEXT, URL TEXT)
            ''')
            self.db.commit()
        except sqlite3.OperationalError:
            # Table already exists: clear it out for a fresh run.
            self.cursor.execute('''
                DELETE FROM IMAGES
            ''')
            self.db.commit()

    def process_item(self, item, spider):
        title = item['title'][0]
        image = item['image'][0]
        url = item['url'][0]
        self.cursor.execute('''
            INSERT INTO IMAGES VALUES (?, ?, ?)
        ''', (image, title, url))
        self.db.commit()
        return item  # pass the item on in case later pipelines need it
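(For reference: pipelines only run if they are registered under ITEM_PIPELINES in the project settings. The question does not show settings.py, so the snippet below is only an assumption of what it would contain for this layout, with the class paths inferred from the directory tree above and arbitrary order values:)

# Scraper/Scraper/settings.py (assumed; not shown in the question)
ITEM_PIPELINES = {
    'Scraper.pipelines.ImageCheckPipeline': 100,  # drop non-image items first
    'Scraper.pipelines.StoreImage': 200,          # then persist survivors to SQLite
}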
Output of the script:
[name@localhost Scraper]$ python ScrapyScript.py
2014-08-06 17:55:22-0400 [scrapy] INFO: Reactor activated...
2014-08-06 17:55:22-0400 [my_spider] INFO: Closing spider (finished)
2014-08-06 17:55:22-0400 [my_spider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 213,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 18852,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 8, 6, 21, 55, 22, 518492),
'item_scraped_count': 51,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2014, 8, 6, 21, 55, 22, 363898)}
2014-08-06 17:55:22-0400 [my_spider] INFO: Spider closed (finished)
2014-08-06 17:55:22-0400 [scrapy] INFO: Reactor stopped.
[name@localhost Scraper]$
2 Answers
I tried @Pawel's solution and the one from the docs, and it was not working for me. After looking into Scrapy's source code, I realized that in some cases it was not identifying the settings module correctly. I kept wondering why the pipelines were not being used, until I realized they were never being found from the script in the first place.
As the docs and Pawel's answer suggest, I was using:
from scrapy.utils.project import get_project_settings
settings = get_project_settings()
crawler = Crawler(settings)
However, when I called:
print "these are the pipelines:"
print crawler.settings.__dict__['attributes']['ITEM_PIPELINES']
I got:
these are the pipelines:
<SettingsAttribute value={} priority=0>
The settings were not being properly populated.
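(As an aside, the same check can be done through the public Settings API instead of reaching into __dict__; a small sketch, with a Python 2 print to match the rest of this answer:)

# Prints an empty dict when the project settings were never loaded.
print crawler.settings.get('ITEM_PIPELINES')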
What I realized was needed was a path to the project's settings module relative to the module containing the script that calls Scrapy, e.g. scrapy.myproject.settings. I then created the Settings() object as follows:
import os

from scrapy.settings import Settings

settings = Settings()
os.environ['SCRAPY_SETTINGS_MODULE'] = 'scraper.edx_bot.settings'
settings_module_path = os.environ['SCRAPY_SETTINGS_MODULE']
settings.setmodule(settings_module_path, priority='project')
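Note that if nothing else in the process reads SCRAPY_SETTINGS_MODULE, the environment-variable round trip can probably be dropped, since Settings.setmodule() also accepts the dotted path directly; a minimal sketch, using the same assumed module path as above:

from scrapy.settings import Settings

settings = Settings()
# setmodule() imports the named module and copies its upper-case
# attributes (ITEM_PIPELINES included) into this Settings object.
settings.setmodule('scraper.edx_bot.settings', priority='project')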
The complete code I used, which effectively imported the pipelines, was:
import os

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.settings import Settings
from scrapy.utils.project import get_project_settings

from scrapy.myproject.spiders.first_spider import FirstSpider

spider = FirstSpider()

settings = Settings()
os.environ['SCRAPY_SETTINGS_MODULE'] = 'scrapy.myproject.settings'
settings_module_path = os.environ['SCRAPY_SETTINGS_MODULE']
settings.setmodule(settings_module_path, priority='project')
crawler = Crawler(settings)

crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()

log.start(loglevel=log.INFO)
reactor.run()
You have to actually call get_project_settings. The Settings object you are passing to your crawler in the posted code only gives you the defaults, not your specific project settings. You need to write something like this:
from scrapy.utils.project import get_project_settings
settings = get_project_settings()
crawler = Crawler(settings)
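Applied to the question's layout, the whole script might look like the sketch below (same 2014-era Scrapy API as the question; run it from the directory containing scrapy.cfg so that get_project_settings() can locate the project):

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings

from Scraper.spiders.my_spider import MySpiderSpider

spider = MySpiderSpider(domain='myDomain.com')
# Loads Scraper/settings.py (including ITEM_PIPELINES) instead of bare defaults.
settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()

log.start()
reactor.run()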