HTTP 403 error when retrieving robots.txt with mechanize
This command succeeds when run in a terminal:
$ curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)" http://fifa-infinity.com/robots.txt
and it prints out the robots.txt file. Without the user-agent option, the server returns a 403 error. Inspecting robots.txt shows that the content under http://www.fifa-infinity.com/board is allowed to be crawled. However, the following Python code fails:
import logging
import mechanize
from mechanize import Browser
ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)'
br = Browser()
br.addheaders = [('User-Agent', ua)]
br.set_debug_http(True)
br.set_debug_responses(True)
logging.getLogger('mechanize').setLevel(logging.DEBUG)
br.open('http://www.fifa-infinity.com/robots.txt')
The output on my console is:
No handlers could be found for logger "mechanize.cookies"
send: 'GET /robots.txt HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: www.fifa-infinity.com\r\nConnection: close\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)\r\n\r\n'
reply: 'HTTP/1.1 403 Bad Behavior\r\n'
header: Date: Wed, 13 Feb 2013 15:37:16 GMT
header: Server: Apache
header: X-Powered-By: PHP/5.2.17
header: Vary: User-Agent,Accept-Encoding
header: Connection: close
header: Transfer-Encoding: chunked
header: Content-Type: text/html
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/moshev/Projects/forumscrawler/lib/python2.7/site-packages/mechanize/_mechanize.py", line 203, in open
return self._mech_open(url, data, timeout=timeout)
File "/home/moshev/Projects/forumscrawler/lib/python2.7/site-packages/mechanize/_mechanize.py", line 255, in _mech_open
raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 403: Bad Behavior
Oddly enough, when curl is used without setting a user agent, the response is "403: Forbidden" rather than "403: Bad Behavior".
Am I doing something wrong here, or is this a bug in mechanize/urllib2? I don't see how simply fetching robots.txt could count as "bad behavior".
1 Answer
9
Verified by experiment: you need to add an Accept header specifying the acceptable content types (any value will do, as long as the "Accept" header is present). For example, the request works after changing:
br.addheaders = [('User-Agent', ua)]
to:
br.addheaders = [('User-Agent', ua), ('Accept', '*/*')]
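For reference, here is a minimal sketch of the full working script based on the fix above (assumes mechanize is installed; printing the response body is just for illustration):
import mechanize

ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)'

br = mechanize.Browser()
# Send both a browser-like User-Agent and an Accept header; the Accept value
# itself does not appear to matter, only its presence.
br.addheaders = [('User-Agent', ua), ('Accept', '*/*')]

response = br.open('http://www.fifa-infinity.com/robots.txt')
print(response.read())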