scrapy: passing an item through multiple parse callbacks and collecting data

0 votes
1 answer
1245 views
Asked 2025-04-17 15:25

This is my first attempt at passing an item between pages.

Each loop iteration runs fine and the gender information arrives correctly in parse_3, but g2 does not match the category of the response URL, and g1 (the first category level) is always the last element of the list I loop over in parse_sub.

I must be doing something wrong somewhere, but I can't find the problem. It would be great if someone could explain to me what is going on.

Best wishes,
Jack

class xspider(BaseSpider):
    name = 'x'
    allowed_domains = ['x.com']
    start_urls = ['http://www.x.com']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        maincats = hxs.select('//ul[@class="Nav"]/li/a/@href').extract()[1:3]
        for maincat in maincats:
            item = catItem()
            if 'men' in maincat:
                item['gender'] = 'men'
                maincat = 'http://www.x.com' + maincat
                request = Request(maincat, callback=self.parse_sub)
                request.meta['item'] = item
            if 'woman' in maincat:
                item['gender'] = []
                item['gender'] = 'woman'
                maincat = 'http://www.x.com' + maincat
                request = Request(maincat, callback=self.parse_sub)
                request.meta['item'] = item
            yield request

    def parse_sub(self, response):
        i = 0
        hxs = HtmlXPathSelector(response)
        subcats = hxs.select('//ul[@class="sub Sprite"]/li/a/@href').extract()[0:5]
        text = hxs.select('//ul[@class="sub Sprite"]/li/a/span/text()').extract()[0:5]
        for item in text:
            item = response.meta['item']
            subcat = 'http://www.x.com' + subcats[i]
            request = Request(subcat, callback=self.parse_subcat)
            item['g1'] = text[i]
            item['gender'] = response.request.meta['item']
            i = i + 1
            request.meta['item'] = item
            yield request

    def parse_subcat(self, response):
        hxs = HtmlXPathSelector(response)
        test = hxs.select('//ul[@class="sub"]/li/a').extract()
        for s in test:
            item = response.meta['item']
            item['g2'] = []
            item['g2'] = hxs.select('//span[@class="Active Sprite"]/text()').extract()[0]
            s = s.encode('utf-8','ignore')
            link = s[s.find('href="')+6:][:s[s.find('href="')+6:].find('/"')]
            link = 'http://www.x.com/' + str(link) + '/'
            request = Request(link, callback=self.parse_3)
            request.meta['item'] = item
            yield request

    def parse_3(self, response):
        item = response.meta['item']
        print item
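
For reference, the hand-off the question is attempting can be reduced to the sketch below. It uses the same old Scrapy API as the question (BaseSpider, HtmlXPathSelector); example.com, the XPath and the plain dict are hypothetical stand-ins for the real site and catItem, not the asker's actual code:

    from scrapy.http import Request
    from scrapy.selector import HtmlXPathSelector
    from scrapy.spider import BaseSpider

    class metaexamplespider(BaseSpider):
        # illustrative only: one fresh item per followed link, carried via request meta
        name = 'meta_example'
        allowed_domains = ['example.com']
        start_urls = ['http://www.example.com/']

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            for href in hxs.select('//ul[@class="Nav"]/li/a/@href').extract():
                item = {}  # stand-in for catItem()
                item['gender'] = 'men' if 'men' in href else 'woman'
                request = Request('http://www.example.com' + href,
                                  callback=self.parse_next)
                request.meta['item'] = item  # the item travels with this request
                yield request

        def parse_next(self, response):
            # the item attached in parse() comes back out of the request here
            item = response.request.meta['item']
            print item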

1 Answer

2
def parse_subcat(self, response):
    hxs = HtmlXPathSelector(response)
    test = hxs.select('//ul[@class="sub"]/li/a').extract()
    for s in test:
        item = response.meta['item']
        item['g2'] = []
        item['g2'] = hxs.select('//span[@class="Active Sprite"]/text()').extract()[0]
        s = s.encode('utf-8','ignore')
        link = s[s.find('href="')+6:][:s[s.find('href="')+6:].find('/"')]
        link = 'http://www.x.com/' + str(link) + '/'
        request = Request(link, callback=self.parse_3)
        request.meta['item'] = item
        yield request

The response does not carry the metadata itself; the request does. So item = response.request.meta['item'] should be used instead of item = response.meta['item'].
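
Applied to the snippet above, that looks roughly like the following sketch (assuming the rest of parse_subcat and its imports stay as posted; the only substantive change is reading the item via response.request.meta, and the redundant item['g2'] = [] assignment is dropped):

    def parse_subcat(self, response):
        hxs = HtmlXPathSelector(response)
        test = hxs.select('//ul[@class="sub"]/li/a').extract()
        for s in test:
            # read the item from the request that produced this response
            item = response.request.meta['item']
            item['g2'] = hxs.select('//span[@class="Active Sprite"]/text()').extract()[0]
            s = s.encode('utf-8', 'ignore')
            link = s[s.find('href="') + 6:][:s[s.find('href="') + 6:].find('/"')]
            link = 'http://www.x.com/' + str(link) + '/'
            request = Request(link, callback=self.parse_3)
            request.meta['item'] = item
            yield request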
