CrawlSpider does not follow links


For this I used the example from the Scrapy crawling spider documentation: http://doc.scrapy.org/en/latest/topics/spiders.html

I want to extract the links from a web page and follow them in order to parse a table of statistics, but somehow I don't see any links being extracted and followed to the pages that hold the data. Here is my script:

from basketbase.items import BasketbaseItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request



class Basketspider(CrawlSpider):
    name = "basketsp"
    allowed_domains = ["euroleague.net"]
    start_urls = ["http://www.euroleague.net/main"]
    rules = (
        # Note: the "allow" patterns are regular expressions, so a literal "?"
        # must be escaped; link extractors also strip "#..." fragments when
        # canonicalizing URLs, so the fragment is left out of the pattern.
        Rule(SgmlLinkExtractor(allow=(r"results/by-date\?seasoncode=E2000",)), follow=True),
        Rule(SgmlLinkExtractor(allow=(r"showgame\?gamecode=165&seasoncode=E2000",)), callback='parse_item'),
    )


    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        sel = HtmlXPathSelector(response)
        items=[]
        item = BasketbaseItem()
        item['date'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Game date
        item['time'] = sel.select('//div[@class="gs-dates"]/span[@class="GameScoreTimeContainer"]/text()').extract() # Game time
        item['stage'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Stage of tournament
        item['home'] = sel.select('//div[@class="gs-teams"]/a[@class="localClub"]/text()').extract() #Home team
        item['guest'] = sel.select('//div[@class="gs-teams"]/a[@class="roadClub"]/text()').extract() #Visitor team
        item['referees'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() #Referees
        item['attendance'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblAudience"]/text()').extract()
        # Quarter-by-quarter scores sit in one table with a long ASP.NET control id;
        # row 2 is the home team, row 3 the visitors.
        partials = ('//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_'
                    'contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]')
        item['fst'] = (sel.select(partials + '//tr[2]/td[2][@class="AlternatingColumn"]/text()').extract() +
                       sel.select(partials + '//tr[3]/td[2][@class="AlternatingColumn"]/text()').extract()) # 1st quarter
        item['snd'] = (sel.select(partials + '//tr[2]/td[3][@class="NormalColumn"]/text()').extract() +
                       sel.select(partials + '//tr[3]/td[3][@class="NormalColumn"]/text()').extract()) # 2nd quarter
        item['trd'] = (sel.select(partials + '//tr[2]/td[4][@class="AlternatingColumn"]/text()').extract() +
                       sel.select(partials + '//tr[3]/td[4][@class="AlternatingColumn"]/text()').extract()) # 3rd quarter
        item['tth'] = (sel.select(partials + '//tr[2]/td[5][@class="NormalColumn"]/text()').extract() +
                       sel.select(partials + '//tr[3]/td[5][@class="NormalColumn"]/text()').extract()) # 4th quarter
        item['xt1'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt2'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt3'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt4'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['game_id'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() # Game ID construct
        item['arena'] = sel.select('//div[@class="gs-dates"]/text()').extract() #Arena
        item['result'] = sel.select('//span[@class="score"]/text()').extract() #Result
        item['league'] = sel.select('//div[@class="gs-dates"]/text()').extract() #League
        print item['date'], item['time'], item['stage'], item['home'], item['guest'], item['referees'], item['attendance'], item['fst'], item['snd'], item['trd'], item['tth'], item['result']
        items.append(item)
        return items  # the callback has to return (or yield) the items it scrapes

This is the response I get in the terminal:

^{pr2}$

What am I doing wrong here? Any ideas would be helpful. I tried leaving SgmlLinkExtractor() empty, which should mean that all links get followed, but I get the same result. There is no sign that the crawl spider is working at all.

I am running Scrapy version 0.16.2 on Python 2.7.2+.


2 Answers

Scrapy is misinterpreting the content type of the start URL.

You can verify this with the scrapy shell:

$ scrapy shell 'http://www.euroleague.net/main' 
2013-11-18 16:39:26+0900 [scrapy] INFO: Scrapy 0.21.0 started (bot: scrapybot)
...

AttributeError: 'Response' object has no attribute 'body_as_unicode'

See my previous answer about the missing body_as_unicode attribute. I hadn't noticed that the server does not set any Content-Type header.
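
You can confirm this from the same shell session. The output below is illustrative; the point is that the header is absent, so Scrapy builds a plain Response:

>>> response.headers.get('Content-Type')   # returns None: no Content-Type header was sent
>>> type(response)
<class 'scrapy.http.response.Response'>    # a plain Response, not an HtmlResponse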

CrawlSpider ignores non-HTML responses, so the response is never processed and no links are followed.

I suggest opening an issue on GitHub, because I think Scrapy should be able to handle this transparently.

As a workaround, you can override the CrawlSpider parse method to build an HtmlResponse from the response object it receives and pass that on to the superclass parse method.
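
A minimal sketch of that workaround, assuming the spider from the question and assuming the pages are UTF-8 encoded:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import HtmlResponse


class Basketspider(CrawlSpider):
    # name, allowed_domains, start_urls and rules as in the question ...

    def parse(self, response):
        # The server sends no Content-Type header, so Scrapy hands us a plain
        # Response; re-wrap it as an HtmlResponse so that CrawlSpider will
        # extract and follow links from it.
        if not isinstance(response, HtmlResponse):
            response = HtmlResponse(url=response.url, status=response.status,
                                    body=response.body, encoding='utf-8',
                                    request=response.request)
        return super(Basketspider, self).parse(response)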

Prepend "www" to the allowed domain.
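
Applied to the spider in the question, that is:

    allowed_domains = ["www.euroleague.net"]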
