Pausing and resuming jobs does not work in a Scrapy project
I am working on a project that uses Scrapy to download images from a website that requires a login. Everything runs fine and I am able to download the images successfully. What I want now is to be able to pause and resume the spider whenever I need to while it scrapes the images.
To do this I followed the instructions in the Scrapy manual, as described below. To run the spider I used the following command:
scrapy crawl somespider -s JOBDIR=crawls/somespider-1
To stop the spider, press CTRL+C. To resume it later, run the same command again.
But after resuming, the spider shuts down within a few minutes and does not continue from where it left off.
Update:
from scrapy.spider import Spider
from scrapy.http import Request, FormRequest

class SampleSpider(Spider):
    name = "sample project"
    allowed_domains = ["xyz.com"]
    start_urls = (
        'http://abcyz.com/',
    )

    def parse(self, response):
        # log in through the site's login form
        return FormRequest.from_response(
            response,
            formname='Loginform',
            formdata={'username': 'Name',
                      'password': '****'},
            callback=self.after_login)

    def after_login(self, response):
        # check that the login succeeded before going on
        if "authentication error" in str(response.body).lower():
            print "I am error"
            return
        else:
            start_urls = ['..', '..']
            for url in start_urls:
                yield Request(url=url, callback=self.parse_photos,
                              dont_filter=True)

    def parse_photos(self, response):
        # downloading image here
        pass
Where am I going wrong?
Here is the log from the run that I paused with CTRL+C:
2014-05-13 15:40:31+0530 [scrapy] INFO: Scrapy 0.22.0 started (bot: sampleproject)
2014-05-13 15:40:31+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-05-13 15:40:31+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sampleproject.spiders', 'SPIDER_MODULES': ['sampleproject.spiders'], 'BOT_NAME': 'sampleproject'}
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled downloader middlewares: RedirectMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2014-05-13 15:40:31+0530 [sample] INFO: Spider opened
2014-05-13 15:40:31+0530 [sample] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-13 15:40:31+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-13 15:40:31+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
......................
2014-05-13 15:42:06+0530 [sample] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 141184,
'downloader/request_count': 413,
'downloader/request_method_count/GET': 412,
'downloader/request_method_count/POST': 1,
'downloader/response_bytes': 11213203,
'downloader/response_count': 413,
'downloader/response_status_count/200': 412,
'downloader/response_status_count/404': 1,
'file_count': 285,
'file_status_count/downloaded': 285,
'finish_reason': 'shutdown',
'finish_time': datetime.datetime(2014, 5, 13, 10, 12, 6, 534088),
'item_scraped_count': 125,
'log_count/DEBUG': 826,
'log_count/ERROR': 1,
'log_count/INFO': 9,
'log_count/WARNING': 219,
'request_depth_max': 12,
'response_received_count': 413,
'scheduler/dequeued': 127,
'scheduler/dequeued/disk': 127,
'scheduler/enqueued': 403,
'scheduler/enqueued/disk': 403,
'start_time': datetime.datetime(2014, 5, 13, 10, 10, 31, 232618)}
2014-05-13 15:42:06+0530 [sample] INFO: Spider closed (shutdown)
After resuming, it stops and shows:
INFO: Scrapy 0.22.0 started (bot: sampleproject)
2014-05-13 15:42:32+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-05-13 15:42:32+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sampleproject.spiders', 'SPIDER_MODULES': ['sampleproject.spiders'], 'BOT_NAME': 'sampleproject'}
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled downloader middlewares: RedirectMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2014-05-13 15:42:32+0530 [sample] INFO: Spider opened
*2014-05-13 15:42:32+0530 [sample] INFO: Resuming crawl (276 requests scheduled)*
2014-05-13 15:42:32+0530 [sample] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-13 15:42:32+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-13 15:42:32+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-13 15:43:19+0530 [sample] INFO: Closing spider (finished)
2014-05-13 15:43:19+0530 [sample] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 3,
'downloader/request_bytes': 132365,
'downloader/request_count': 281,
'downloader/request_method_count/GET': 281,
'downloader/response_bytes': 567884,
'downloader/response_count': 278,
'downloader/response_status_count/200': 278,
'file_count': 1,
'file_status_count/downloaded': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 5, 13, 10, 13, 19, 554981),
'item_scraped_count': 276,
'log_count/DEBUG': 561,
'log_count/ERROR': 1,
'log_count/INFO': 8,
'log_count/WARNING': 1,
'request_depth_max': 1,
'response_received_count': 278,
'scheduler/dequeued': 277,
'scheduler/dequeued/disk': 277,
'scheduler/enqueued': 1,
'scheduler/enqueued/disk': 1,
'start_time': datetime.datetime(2014, 5, 13, 10, 12, 32, 659276)}
2014-05-13 15:43:19+0530 [sample] INFO: Spider closed (finished)
2 Answers
0
Since you need to authenticate, my guess is that by the time you restart the job the cookies from the previous run have already expired. Take a look at this link: Scrapy persistence gotchas
First figure out what HTTP status code you get back when the cookies have expired or the authentication fails, then you can do something like this:
def parse(self, response):
    if response.status != 200:
        # e.g. a 404 when the session has expired
        self.authenticate()  # placeholder: re-do the login here
    # continue with scraping
Hope this helps.
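To make that concrete, here is one way the resumed run could log in again and then retry the page that hit the expired session. This is only a sketch built on a few assumptions: login_url is a placeholder for whatever page serves Loginform, the "authentication error" check is borrowed from the after_login method in the question, and the relogin/after_relogin helpers are names I made up, not anything Scrapy provides. Note also that non-2xx responses are normally dropped by HttpErrorMiddleware before they reach your callback unless you allow them via the spider's handle_httpstatus_list, which is why this sketch looks at the body of a 200 response rather than at response.status.

from scrapy.spider import Spider
from scrapy.http import Request, FormRequest

class SampleSpider(Spider):
    # ... name, allowed_domains, start_urls, parse and after_login as in the question ...

    login_url = 'http://abcyz.com/'  # assumption: the page that serves Loginform

    def parse_photos(self, response):
        if "authentication error" in str(response.body).lower():
            # The session cookie from the previous run is gone (cookies are
            # not part of the JOBDIR state), so log in again and remember
            # which page we actually wanted.
            return Request(self.login_url, callback=self.relogin,
                           meta={'retry_url': response.url},
                           dont_filter=True)
        # ... otherwise download the image here as before ...

    def relogin(self, response):
        # Re-submit the login form, carrying the original URL along.
        return FormRequest.from_response(
            response,
            formname='Loginform',
            formdata={'username': 'Name', 'password': '****'},
            meta={'retry_url': response.meta['retry_url']},
            callback=self.after_relogin,
            dont_filter=True)

    def after_relogin(self, response):
        # Retry the photo page that failed. dont_filter=True is needed because
        # the dupefilter restored from JOBDIR has already seen this URL.
        return Request(response.meta['retry_url'],
                       callback=self.parse_photos,
                       dont_filter=True)

If the site tolerates logging in on every run, an even simpler variant may be to return the login FormRequest in parse with dont_filter=True; otherwise the duplicate filter restored from JOBDIR drops the login POST as already seen on resume, which would explain why the resumed log above shows 281 GET requests and no POST at all.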
2
You can run this command instead of the one you wrote:
scrapy crawl somespider --set JOBDIR=crawl1
To stop it, press Ctrl+C only once, then wait for Scrapy to stop. If you press Ctrl+C twice, it will not work properly!
Then, to resume the crawl, run this command again:
scrapy crawl somespider --set JOBDIR=crawl1
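(-s is just the short form of --set, so this is the same command as in the question; the important parts are pressing Ctrl+C only once, waiting for the clean shutdown, and reusing the same JOBDIR when you restart.) If you prefer not to pass it on the command line every time, JOBDIR is an ordinary Scrapy setting, so it can also go into the project's settings.py; the path below is only an example:

# sampleproject/settings.py
JOBDIR = 'crawls/somespider-1'  # must not be shared by different spiders, or by different jobs of the same spider

Keep in mind that the persisted state covers the scheduler queues, the duplicate filter and spider.state, but not cookies, so for a login-protected site the re-authentication idea from the first answer may still be needed.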