How do I parse other URLs extracted via XPath in Scrapy?

Asked on 2025-04-18 11:16

I am generating a list of items from an index-like page. I have a start URL and a set of XPath rules to follow:

def parse(self,response):
    sel = Selector(response)
    sites = sel.xpath('//tbody/tr')
    items = []
    for site in sites:
        item = EvolutionmItem()
        item['title'] = site.xpath('td/div[not(contains(., "Sticky:") or contains(.,"ANNOUNCEMENT"))]/a[contains(@id,"thread_title")]/text()').extract()
        item['url'] = site.xpath('td[contains(@id,"threadtitle")]/div/a[contains(@href,"http://forums.evolutionm.net/sale-engine-drivetrain-power/")]/@href').extract()
        item['poster'] = site.xpath('td[contains(@id,"threadtitle")]/div[@class="smallfont"]/span/text()').extract()
        item['status'] = site.xpath('td[contains(@id,"threadtitle")]/div/span[contains(@class,"highlight")]').extract()
        items.append(item)
    return items

This code runs without errors and extracts exactly what I need. Now I want to visit each of those URLs and extract more data from them.

What is a good way to do this? I can't seem to get request.meta to work.

EDIT

Girish's solution is correct, but to make it work I had to make sure my item['url'] was not empty (a sketch of the thread_parse callback it refers to follows the snippet):

for site in sites:
    item = EvolutionmItem()
    ...
    if item['url']:
        yield Request(item['url'][0], meta={'item': item}, callback=self.thread_parse)
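
For reference, the thread_parse callback is not shown in the question. A minimal sketch of what it might look like; the thread-page XPath and the first_post field are illustrative assumptions, not part of the original code (first_post would have to be declared on EvolutionmItem):

    def thread_parse(self, response):
        # Pull the partially populated item back out of the request meta
        item = response.meta['item']
        # Hypothetical extra field: the text of the first post on the thread page.
        # The XPath and the 'first_post' field name are assumptions about the forum markup.
        item['first_post'] = u''.join(
            response.xpath('//div[contains(@id, "post_message")]//text()').extract()
        ).strip()
        yield item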

1 Answer

You need to create a Request object and pass it the URL, the meta dictionary, and the callback function:

def parse(self, response):
    sel = Selector(response)
    sites = sel.xpath('//tbody/tr')
    for site in sites:
        item = EvolutionmItem()
        item['title'] = site.xpath('td/div[not(contains(., "Sticky:") or contains(.,"ANNOUNCEMENT"))]/a[contains(@id,"thread_title")]/text()').extract()
        item['url'] = u''.join(site.xpath('td[contains(@id,"threadtitle")]/div/a[contains(@href,"http://forums.evolutionm.net/sale-engine-drivetrain-power/")]/@href').extract())
        item['poster'] = site.xpath('td[contains(@id,"threadtitle")]/div[@class="smallfont"]/span/text()').extract()
        item['status'] = site.xpath('td[contains(@id,"threadtitle")]/div/span[contains(@class,"highlight")]').extract()

        # Yield one Request per row, passing the partially filled item along in meta
        # (item['url'] must be a non-empty absolute URL here; see the edit in the question)
        yield Request(url=item['url'], meta={'item': item}, callback=self.parse_additional_info)

def parse_additional_info(self, response):
    # Retrieve the item that was passed through meta, then add the extra fields
    item = response.meta['item']
    # ... extract additional info into item here ...
    yield item
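
As a side note, on newer Scrapy versions (1.7+) the item can be handed to the callback with cb_kwargs instead of meta, which keeps meta free for middleware-level data. A minimal self-contained sketch under that assumption; the spider name, the plain-dict item, and the XPaths are illustrative, not from the original code:

    import scrapy

    class ThreadSpider(scrapy.Spider):
        # Hypothetical spider for illustration; name and XPaths are assumptions
        name = 'evolutionm_threads_sketch'
        start_urls = ['http://forums.evolutionm.net/sale-engine-drivetrain-power/']

        def parse(self, response):
            for href in response.xpath('//tbody/tr//a[contains(@id, "thread_title")]/@href').extract():
                item = {'url': href}  # a plain dict stands in for EvolutionmItem here
                yield scrapy.Request(url=response.urljoin(href),
                                     callback=self.parse_additional_info,
                                     cb_kwargs={'item': item})

        def parse_additional_info(self, response, item):
            # The item arrives as a normal keyword argument instead of via response.meta
            item['page_title'] = u''.join(response.xpath('//title/text()').extract()).strip()
            yield item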
