selenium: socket.error: [Errno 61] Connection refused
I have 10 links that I want to scrape.
When I run the spider it saves the links to a JSON file, but I still get errors like the one below. It looks as though Selenium is being run twice. What is going wrong here?
Any pointers would be appreciated, thanks!
2014-08-06 10:30:26+0800 [spider2] DEBUG: Scraped from <200 http://www.test/a/1>
{'link': u'http://www.test/a/1'}
2014-08-06 10:30:26+0800 [spider2] ERROR: Spider error processing <GET
http://www.test/a/1>
Traceback (most recent call last):
........
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 61] Connection refused
Here is my code:
from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from selenium.webdriver.support.wait import WebDriverWait
from scrapy.http.request import Request


class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()
            item['link'] = a.get_attribute("href")
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        yield item
        self.driver.close()
1 Answer
The problem is that you close the driver too early: self.driver.close() runs at the end of the very first parse_detail call, so the next command Selenium tries to send can no longer reach the browser and fails with socket.error: [Errno 61] Connection refused.
You should only close the driver once the spider has finished all of its work; hook into the spider_closed signal:
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from scrapy.http.request import Request


class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    def __init__(self):
        self.driver = webdriver.Firefox()
        # close the driver only when the whole spider is done
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()
            item['link'] = a.get_attribute("href")
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        yield item

    def spider_closed(self, spider):
        self.driver.close()
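As a side note, scrapy.xlib.pydispatch was removed in later Scrapy releases. Here is a minimal sketch of the same idea on a modern Scrapy, assuming a version where spiders live in scrapy.spiders and signals are wired up via the documented from_crawler hook:

from scrapy import signals
from scrapy.spiders import Spider
from selenium import webdriver


class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        spider.driver = webdriver.Firefox()
        # connect through the crawler's signal manager instead of pydispatch
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def spider_closed(self, spider):
        # quit() tears down the whole browser session, not just one window
        self.driver.quit()

Using driver.quit() here rather than close() ends the entire browser session, which is usually what you want at spider shutdown.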
Also, have a look at this link: scrapy: Call a function when a spider quits.