Scrapy crawl from a script always blocks script execution after scraping
I'm following this guide http://doc.scrapy.org/en/0.16/topics/practices.html#run-scrapy-from-a-script to run Scrapy from my script. Here is part of my script:
# Imports as in the linked 0.16 guide (omitted from my snippet):
from twisted.internet import reactor
from scrapy import log
from scrapy.crawler import Crawler
from scrapy.settings import Settings

crawler = Crawler(Settings(settings))  # `settings` is a dict built earlier in my script
crawler.configure()
spider = crawler.spiders.create(spider_name)  # `spider_name` is set earlier as well
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()
print "It can't be printed out!"
It works as expected: it visits the pages, scrapes the needed info, and stores the output JSON where I told it to (via FEED_URI). But when the spider finishes its work (I can see this from the number of items in the output JSON), execution of my script doesn't resume. This probably isn't a Scrapy problem; the answer is likely somewhere in Twisted's reactor. How can I release the thread's execution?
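As far as I can tell, plain Twisted behaves the same way: reactor.run() blocks until something calls reactor.stop(). A minimal Scrapy-free sketch of what I mean (the 3-second timer is only for illustration):

from twisted.internet import reactor

# reactor.run() blocks until reactor.stop() is called;
# here a timer fires it 3 seconds after the reactor starts.
reactor.callLater(3, reactor.stop)
reactor.run()
print "Reached only after reactor.stop() was called"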
2 Answers
6
In Scrapy 0.19.x you should do it like this:
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings
spider = FollowAllSpider(domain='scrapinghub.com')
settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run() # the script will block here until the spider_closed signal is sent
Note these lines:
settings = get_project_settings()
crawler = Crawler(settings)
If you don't include them, your crawler won't use your settings and won't save the items. It took me a while to figure out why the example in the documentation wasn't saving my items, so I sent a pull request to fix the documentation example.
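If you also need to tweak individual settings from the script rather than from settings.py, the per-run overrides dict worked in these old versions (a sketch; the feed values below are just examples, not from the original post):

settings = get_project_settings()
# Old-style (pre-1.0) per-run overrides; FEED_URI/FEED_FORMAT values are examples
settings.overrides['FEED_URI'] = 'items.json'
settings.overrides['FEED_FORMAT'] = 'json'
crawler = Crawler(settings)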
Another way is just to call the command directly from your script:
from scrapy import cmdline
cmdline.execute("scrapy crawl followall".split()) #followall is the spider's name
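Note that cmdline.execute() terminates the interpreter with sys.exit() once the command finishes, so code placed after that call won't run either. If the rest of the script has to keep executing, one option is to run the crawl in a child process instead (a sketch, assuming scrapy is on PATH and the script is run from the project directory):

import subprocess

# Blocks until the crawl finishes, then the parent script continues
subprocess.check_call(['scrapy', 'crawl', 'followall'])
print "The script continues here"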
28
You will need to stop the reactor when the spider finishes, which you can accomplish by listening for the spider_closed signal:
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
from testspiders.spiders.followall import FollowAllSpider
def stop_reactor():
    reactor.stop()
dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run() # the script will block here until the spider is closed
log.msg('Reactor stopped.')
The command-line log output would then look something like this:
stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 23934,...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$