Scrapy's Scrapyd too slow with scheduling spiders
I am running Scrapyd and I am seeing a strange issue when launching 4 spiders at the same time.
2012-02-06 15:27:17+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,2,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,3,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:18+0100 [Launcher] Process started: project='thz' spider='spider_1' job='abb6b62650ce11e19123c8bcc8cc6233' pid=2545
2012-02-06 15:27:19+0100 [Launcher] Process finished: project='thz' spider='spider_1' job='abb6b62650ce11e19123c8bcc8cc6233' pid=2545
2012-02-06 15:27:23+0100 [Launcher] Process started: project='thz' spider='spider_2' job='abb72f8e50ce11e19123c8bcc8cc6233' pid=2546
2012-02-06 15:27:24+0100 [Launcher] Process finished: project='thz' spider='spider_2' job='abb72f8e50ce11e19123c8bcc8cc6233' pid=2546
2012-02-06 15:27:28+0100 [Launcher] Process started: project='thz' spider='spider_3' job='abb76f6250ce11e19123c8bcc8cc6233' pid=2547
2012-02-06 15:27:29+0100 [Launcher] Process finished: project='thz' spider='spider_3' job='abb76f6250ce11e19123c8bcc8cc6233' pid=2547
2012-02-06 15:27:33+0100 [Launcher] Process started: project='thz' spider='spider_4' job='abb7bb8e50ce11e19123c8bcc8cc6233' pid=2549
2012-02-06 15:27:35+0100 [Launcher] Process finished: project='thz' spider='spider_4' job='abb7bb8e50ce11e19123c8bcc8cc6233' pid=2549
I have already configured Scrapyd with this setting:
[scrapyd]
max_proc = 10
Why is Scrapyd not running the spiders concurrently, as quickly as they are scheduled?
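
For reference, the four jobs above were scheduled over HTTP with python-requests, as the user agent in the log shows. A minimal sketch of those scheduling calls, assuming Scrapyd is listening on its default port 6800 (the project and spider names are taken from the log):

import requests

# One POST to schedule.json per spider; Scrapyd queues the job and
# returns a jobid immediately, before the spider actually starts.
for spider in ['spider_1', 'spider_2', 'spider_3', 'spider_4']:
    response = requests.post(
        'http://127.0.0.1:6800/schedule.json',
        data={'project': 'thz', 'spider': spider},
    )
    print(response.json())  # e.g. {"status": "ok", "jobid": "..."}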
2 Answers
6
In my experience with scrapyd, a spider does not start running the moment you schedule it. Scrapyd usually waits a while, until the current spider is up and running, and only then starts the next spider process (i.e. issues the scrapy crawl command).
So scrapyd launches the spiders one after another until the max_proc limit is reached.
From your log, each of your spiders runs for about 1 second, so the previous job has already finished by the time the next one is launched. If your spiders ran for at least 30 seconds, you would see all 4 of them running in parallel.
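
Here is a conceptual sketch of that behaviour (this is not Scrapyd's actual code; the names and the one-launch-per-poll loop are illustrative, though they match the roughly 5-second gaps between the "Process started" lines in your log): on every poll, at most one queued job is started, as long as fewer than max_proc processes are running.

import subprocess
import time

MAX_PROC = 10
POLL_INTERVAL = 5  # seconds; matches the ~5 s gaps in the log above

pending = ['spider_%d' % i for i in range(1, 5)]  # queued spider names
running = []                                      # launched processes

while pending or running:
    # Forget processes that have exited.
    running = [p for p in running if p.poll() is None]
    # Start at most one queued spider per poll, up to the process limit.
    if pending and len(running) < MAX_PROC:
        spider = pending.pop(0)
        running.append(subprocess.Popen(['scrapy', 'crawl', spider]))
    time.sleep(POLL_INTERVAL)

With 1-second spiders and a 5-second poll, each job finishes long before the next one starts, which is exactly what your log shows.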
10
I solved this by editing line 30 of scrapyd/app.py, changing
timer = TimerService(5, poller.poll)
to
timer = TimerService(0.1, poller.poll)
TimerService is Twisted's periodic-call service, so this makes the poller check the job queue every 0.1 seconds instead of every 5 seconds, and queued spiders are picked up almost immediately.
Edit: the configuration setting mentioned in AliBZ's comment below is a better way of adjusting the polling frequency.
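
That setting is poll_interval in the [scrapyd] section of the configuration file, which achieves the same effect without patching the source. A sketch, assuming a Scrapyd version that supports this option:

[scrapyd]
max_proc = 10
poll_interval = 0.1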