Scrapy: crawling a local website by IP address
I'm trying to use Scrapy to crawl a website on my local network. The site's IP address is 192.168.0.185. Here is my spider code:
from scrapy.spider import BaseSpider

class 192.168.0.185_Spider(BaseSpider):
    name = "192.168.0.185"
    allowed_domains = ["192.168.0.185"]
    start_urls = ["http://192.168.0.185/"]

    def parse(self, response):
        print "Test:", response.headers
Then, from the same folder as the spider code, I run the crawler with this command:
scrapy crawl 192.168.0.185
But I get a really ugly, hard-to-read error:
2012-02-10 20:55:18-0600 [scrapy] INFO: Scrapy 0.14.0 started (bot: tutorial)
2012-02-10 20:55:18-0600 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-02-10 20:55:18-0600 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-02-10 20:55:18-0600 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-02-10 20:55:18-0600 [scrapy] DEBUG: Enabled item pipelines:
Traceback (most recent call last):
  File "/usr/bin/scrapy", line 5, in <module>
    pkg_resources.run_script('Scrapy==0.14.0', 'scrapy')
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 467, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 1200, in run_script
    execfile(script_filename, namespace, namespace)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.14.0-py2.6.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
    execute()
  File "/usr/lib/python2.6/site-packages/Scrapy-0.14.0-py2.6.egg/scrapy/cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.14.0-py2.6.egg/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.14.0-py2.6.egg/scrapy/cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.14.0-py2.6.egg/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.14.0-py2.6.egg/scrapy/spidermanager.py", line 43, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: 192.168.0.185'
So I made another spider, almost identical to the first, except that it uses a domain name instead of an IP address. That one works fine. Does anyone know what's going on? How can I get Scrapy to crawl a site by IP address rather than by domain name?
from scrapy.spider import BaseSpider

class facebook_Spider(BaseSpider):
    name = "facebook"
    allowed_domains = ["facebook.com"]
    start_urls = ["http://www.facebook.com/"]

    def parse(self, response):
        print "Test:", response.headers
1 Answer
class 192.168.0.185_Spider(BaseSpider):
...
In Python, a class name cannot start with a digit and cannot contain dots (.). See the documentation for details: Identifiers and keywords.
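For illustration, a minimal sketch of the naming rule (the replacement class name LocalIpSpider is made up):

from scrapy.spider import BaseSpider

# class 192.168.0.185_Spider(BaseSpider):   # SyntaxError: an identifier may not
#                                           # start with a digit or contain dots
class LocalIpSpider(BaseSpider):            # valid: starts with a letter, no dots
    name = "local_ip"
    start_urls = ["http://192.168.0.185/"]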
You can create the spider with a proper name instead:
$ scrapy startproject testproj
$ cd testproj
$ scrapy genspider testspider 192.168.0.185
Created spider 'testspider' using template 'crawl' in module:
testproj.spiders.testspider
The spider definition will look like this:
class TestspiderSpider(CrawlSpider):
    name = 'testspider'
    allowed_domains = ['192.168.0.185']
    start_urls = ['http://www.192.168.0.185/']
    ...
You will probably want to remove the www from start_urls. To start crawling, use the spider's name instead of the hostname:
$ scrapy crawl testspider
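Putting it together, a minimal sketch of a spider that crawls by IP address, in the same Python 2 / Scrapy 0.14 style as the question (the class name LocalSiteSpider is made up):

from scrapy.spider import BaseSpider

class LocalSiteSpider(BaseSpider):
    # The name attribute is an ordinary string, so digits and dots are fine here;
    # only the class name has to be a legal Python identifier.
    name = "192.168.0.185"
    allowed_domains = ["192.168.0.185"]
    start_urls = ["http://192.168.0.185/"]

    def parse(self, response):
        print "Test:", response.headers

Assuming the name attribute is left as the IP string, this spider would be started with scrapy crawl 192.168.0.185, since the crawl command looks spiders up by their name attribute, not by host.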