How do I create rules in Scrapy?

0 votes · 2 answers · 3740 views · Asked 2025-04-17 06:45

I'm having some trouble creating rules. Suppose the link I start from is http://www.example.com/search?q=news
When I open this link in a browser, I get the following page source:

<html><head>...</head><body>
<ul id="results-list">
<li class="result clearfix news">
<div class="summary">
<h3><a href="/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/">Struggling Canucks rely on Schneider to snag win against Sens</a></h3>
<p class="summary">Nov 21, 2011&ndash; Eleventh place Canucks rely on goalie Cory Schneider to improve record to 10-9-1
</p>
<p class="meta"><a href="/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/">http://www.example.com/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/</a>
</p>
</div>
</li>
<li class="result clearfix news">
<div class="summary">
<h3><a href="/news/world/celebrities-set-to-testify-at-uk-media-ethics-inquiry/article2242840/">Celebrities set to testify at U.K. media ethics inquiry</a></h3>
<p class="summary">Nov 20, 2011&ndash; Hugh Grant and J.K. Rowling given opportunity to strike back against tabloids’ invasion of privacy
</p>
<p class="meta"><a href="/news/world/celebrities-set-to-testify-at-uk-media-ethics-inquiry/article2242840/">http://www.example.com/news/world/celebrities-set-to-testify-at-uk-media-ethics-inquiry/article2242840/</a>
</p>
</div>
</li>
...
</ul><!-- end of ul#results-list -->

<ul class="paginator">
<li class="selected"><a href="http://www.example.com/search/?q=news&start=0">1</a></li>
<li ><a href="http://www.example.com/search/?q=news&start=10">2</a></li>
<li ><a href="http://www.example.com/search/?q=news&start=20">3</a></li>
...
<li class="jump last"><a href="http://www.example.com/search/?q=news&start=90">Last</a></li>
</ul><!-- end of ul.paginator -->
</body></html>

Now I want to extract data from the links inside ul#results-list, such as http://www.example.com/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/, and the other links as well...

For this I created a spider with the following code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from thirdapp.items import ThirdappItem

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/search?q=news',
        'http://www.example.com/search?q=movies',
        ]
    rules = (
        Rule(SgmlLinkExtractor(allow('?q=news',), restrict_xpaths('ul[@class="paginator"]',)), callback='parse_item', allow=True),
        )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s', response.url)

        hxs = HtmlXPathSelector(response)
        #item = ThirdappItem()
        items = hxs.select('//h3')
        scraped_items = []
        for item in items:
            scraped_item = ThirdappItem()
            scraped_item["title"] = item.select('a/text()').extract()
            scraped_items.append(scraped_item)
        return items

spider = MySpider()

So how should I set up the rules to get the result I want?

2 Answers

0 votes

According to the documentation, the allow parameter of SgmlLinkExtractor takes a single regular expression (or a list of regular expressions) that (absolute) URLs must match in order to be extracted. So the allow argument could look like this:

allow=('.*\?q=news.*',)

Also, the last argument to the Rule is most likely not allow but follow=True.

The final rule (note the escape character before the question mark):

Rule(SgmlLinkExtractor(allow=('.*\?q=news.*',), restrict_xpaths=('//ul[@class="paginator"]',)), callback='parse_item', follow=True)
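
Putting it all together, a minimal corrected spider could look like the sketch below. It keeps the old scrapy.contrib API from the question and assumes the ThirdappItem with a title field shown there; it also fixes two smaller bugs in the original parse_item: the log call passed response.url as a separate argument instead of interpolating it into the message, and the method returned the raw selectors rather than the populated items.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from thirdapp.items import ThirdappItem

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/search?q=news',
        'http://www.example.com/search?q=movies',
    ]
    rules = (
        # allow= and restrict_xpaths= are keyword arguments, and
        # follow=True (not allow=True) keeps the spider walking
        # through the pagination links.
        Rule(SgmlLinkExtractor(allow=(r'.*\?q=news.*',),
                               restrict_xpaths=('//ul[@class="paginator"]',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        scraped_items = []
        for item in hxs.select('//h3'):
            scraped_item = ThirdappItem()
            scraped_item['title'] = item.select('a/text()').extract()
            scraped_items.append(scraped_item)
        # Return the populated items, not the raw selectors.
        return scraped_items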

1 vote

First of all, what result are you actually expecting?
Second, you should probably point your rules at the links themselves, not only at the ul container that holds the list items, since the link nodes you need are inside it!
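
For example, the rule set could be split into one rule that merely follows the paginator pages and another that extracts the article links from inside ul#results-list. This is only a sketch of that idea, reusing the extractor arguments from the question and the first answer:

rules = (
    # Extract the article links inside ul#results-list and parse them.
    Rule(SgmlLinkExtractor(restrict_xpaths=('//ul[@id="results-list"]//h3',)),
         callback='parse_item'),
    # Follow the paginator links without running the callback on them.
    Rule(SgmlLinkExtractor(allow=(r'.*\?q=news.*',),
                           restrict_xpaths=('//ul[@class="paginator"]',)),
         follow=True),
)

With a split like this, parse_item runs on the article pages themselves, so its XPath expressions would have to target the article markup rather than the h3 elements of the search-result list.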
