<p>Just raise CloseSpider as described in the manual:</p>
<blockquote>
<p><strong>How can I instruct a spider to stop itself?</strong></p>
<p>Raise the CloseSpider exception from a callback.</p>
</blockquote>
<pre><code>from scrapy.exceptions import CloseSpider

def parse_page(self, response):
    # response.body is bytes, so compare against a bytes literal
    if b'Bandwidth exceeded' in response.body:
        raise CloseSpider('bandwidth_exceeded')
</code></pre>
<p><a href="http://doc.scrapy.org/en/latest/faq.html#how-can-i-instruct-a-spider-to-stop-itself" rel="nofollow noreferrer">http://doc.scrapy.org/en/latest/faq.html#how-can-i-instruct-a-spider-to-stop-itself</a>
<a href="http://doc.scrapy.org/en/latest/topics/exceptions.html#scrapy.exceptions.CloseSpider" rel="nofollow noreferrer">http://doc.scrapy.org/en/latest/topics/exceptions.html#scrapy.exceptions.CloseSpider</a></p>
<blockquote>
<p>Note that requests that are still in progress (HTTP request sent,
response not yet received) will still be parsed. No new request will
be processed though.</p>
</blockquote>
<p><a href="https://stackoverflow.com/a/23895143/5041915">https://stackoverflow.com/a/23895143/5041915</a></p>
<p>Update:
I actually found something interesting: if you stop the spider in the main function,</p>
<p>newly spawned worker threads may never get a chance to start, because the exception is raised before they do.</p>
<p>I recommend checking the condition inside the callback and raising the exception as early as possible.</p>
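<p>To illustrate that "check early" pattern without a running crawl, here is a minimal sketch. Note that <code>FakeResponse</code> and the stand-in <code>CloseSpider</code> class are illustrative substitutes so the snippet runs without Scrapy installed; in a real project you would import <code>CloseSpider</code> from <code>scrapy.exceptions</code> and receive a real <code>Response</code>.</p>

```python
class CloseSpider(Exception):
    """Stand-in for scrapy.exceptions.CloseSpider; carries a reason string."""
    def __init__(self, reason="cancelled"):
        super().__init__(reason)
        self.reason = reason


class FakeResponse:
    """Minimal stand-in for a Scrapy response (body is bytes, as in Scrapy)."""
    def __init__(self, body: bytes):
        self.body = body


def parse_page(response):
    # Check the stop condition first, before doing any further parsing,
    # so the spider shuts down without scheduling more work.
    if b'Bandwidth exceeded' in response.body:
        raise CloseSpider('bandwidth_exceeded')
    # ... normal extraction would follow here ...
    return 'parsed'


try:
    parse_page(FakeResponse(b'<html>Bandwidth exceeded</html>'))
except CloseSpider as exc:
    print(exc.reason)  # -> bandwidth_exceeded
```

<p>The point is simply ordering: the guard runs before any item extraction, so once the condition is hit no additional parsing work happens in that callback.</p>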