Scrapy: Connection refused
I ran into an error while testing my Scrapy installation:
$ scrapy shell http://www.google.es
2011-02-16 10:54:46+0100 [scrapy] INFO: Scrapy 0.12.0.2536 started (bot: scrapybot)
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled extensions: TelnetConsole, SpiderContext, WebService, CoreStats, MemoryUsage, CloseSpider
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled scheduler middlewares: DuplicatesFilterMiddleware
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, HttpCompressionMiddleware, DownloaderStats
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled item pipelines:
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2011-02-16 10:54:46+0100 [default] INFO: Spider opened
2011-02-16 10:54:47+0100 [default] DEBUG: Retrying <GET http://www.google.es> (failed 1 times): Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] DEBUG: Retrying <GET http://www.google.es> (failed 2 times): Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] DEBUG: Discarding <GET http://www.google.es> (failed 3 times): Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] ERROR: Error downloading <http://www.google.es>: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionRefusedError'>: Connection was refused by other side: 111: Connection refused.
]
2011-02-16 10:54:47+0100 [scrapy] ERROR: Shell error
Traceback (most recent call last):
Failure: scrapy.exceptions.IgnoreRequest: Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] INFO: Closing spider (shutdown)
2011-02-16 10:54:47+0100 [default] INFO: Spider closed (shutdown)
Version info:
- Scrapy 0.12.0.2536
- Python 2.6.6
- OS: Ubuntu 10.10
Additional note: I can reach google.es on port 80 with a browser, with wget, and with telnet, and the problem happens with every site, not just this one.
3 Answers
0
I ran into this error too. In my case it turned out that a firewall was blocking the port I was trying to reach: my server blocks all ports by default, and only whitelisted ports are allowed through.
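A quick way to check whether something like a firewall is filtering the connection is to attempt a raw TCP handshake from the same machine Scrapy runs on. A minimal sketch using only the Python standard library (the host below is just the one from the question):

```python
import socket

def can_connect(host, port, timeout=5):
    """Try a plain TCP connection; return True only if the handshake succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError as well as timeouts
        return False

# If this returns False while the browser on the same machine works,
# something between this process and the network is blocking the connection.
# can_connect("www.google.es", 80)
```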
1
The problem may be with your network connection.
First, check your Internet connection.
If you access the Internet through a proxy server, you need to configure it in your Scrapy project (see http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware).
Failing that, try upgrading your Scrapy version.
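If a proxy is the issue, Scrapy's HttpProxyMiddleware honors the standard proxy environment variables, so exporting them before launching the shell is usually enough. A sketch, assuming a proxy at proxy.example.com:8080 (a placeholder address):

```shell
# Placeholder proxy address - substitute your real proxy here.
export http_proxy="http://proxy.example.com:8080"
# scrapy shell http://www.google.es   # then retry the original command
```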
11
Step 1:
Scrapy sends a user agent containing "bot", and some sites block requests based on that user agent.
Try overriding USER_AGENT in your settings.py file.
For example: USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'
Step 2:
Try setting a delay between requests, so they look like they come from a human.
DOWNLOAD_DELAY = 0.25
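Both settings from steps 1 and 2 go in the project's settings.py. A minimal sketch (the BOT_NAME line is an assumption based on the "bot: scrapybot" entry in the log above):

```python
# settings.py - combining steps 1 and 2 (values copied from this answer)
BOT_NAME = 'scrapybot'  # assumption: the default bot name shown in the log

# Override the default "bot"-flavored user agent with a browser-like one.
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'

# Wait between consecutive requests to the same site (in seconds).
DOWNLOAD_DELAY = 0.25
```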
Step 3:
If none of the above works, install Wireshark and compare the request headers and POST data that Scrapy sends with what your browser sends.
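As a lighter-weight alternative to Wireshark, you can run a tiny local HTTP server that records the headers of every request it receives, then fetch it once with Scrapy and once with the browser and diff the two records. A sketch using only the standard library (the class and function names are my own):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # one dict of request headers per request received

class HeaderDump(BaseHTTPRequestHandler):
    def do_GET(self):
        captured.append(dict(self.headers))  # record exactly what the client sent
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence the default per-request console logging

def start_server():
    """Start the header-dump server on a free local port; returns the server."""
    server = HTTPServer(("127.0.0.1", 0), HeaderDump)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Call start_server(), read the port from server.server_address, then request http://127.0.0.1:&lt;port&gt;/ from both Scrapy and your browser and compare the two entries in captured.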