Preventing Scrapy from visiting a website more than once using middleware

6 votes
1 answer
3091 views
Asked 2025-04-17 14:33

I came across this question:

How to filter duplicate requests based on url in scrapy

I don't want a website to be crawled more than once. I adapted the middleware and added a print statement to test whether it correctly recognizes sites that have already been visited. It does.

However, parsing still seems to run multiple times, because the JSON file I get back contains duplicate entries.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item

from crawlspider.items import KickstarterItem

from HTMLParser import HTMLParser

### code for stripping off HTML tags:
class MLStripper(HTMLParser):
    def __init__(self):
        self.reset()
        self.fed = []
    def handle_data(self, d):
        self.fed.append(d)
    def get_data(self):
        return str(''.join(self.fed))

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()
###

items = []

class MySpider(CrawlSpider):
    name = 'kickstarter'
    allowed_domains = ['kickstarter.com']
    start_urls = ['http://www.kickstarter.com']

    rules = (
        # Follow links into the comics category
        # (no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=('discover/categories/comics', ))),

        # Extract project links and parse them with the spider's parse_item method
        Rule(SgmlLinkExtractor(allow=('projects/', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = KickstarterItem()

        item['date'] = hxs.select('//*[@id="about"]/div[2]/ul/li[1]/text()').extract()
        item['projname'] = hxs.select('//*[@id="title"]/a').extract()
        item['projname'] = strip_tags(str(item['projname']))

        item['projauthor'] = hxs.select('//*[@id="name"]')
        item['projauthor'] = item['projauthor'].select('string()').extract()[0]

        item['backers'] = hxs.select('//*[@id="backers_count"]/data').extract()
        item['backers'] = strip_tags(str(item['backers']))

        item['collmoney'] = hxs.select('//*[@id="pledged"]/data').extract()
        item['collmoney'] = strip_tags(str(item['collmoney']))

        item['goalmoney'] = hxs.select('//*[@id="stats"]/h5[2]/text()').extract()
        items.append(item)
        return items

My items.py file looks like this:

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/topics/items.html

from scrapy.item import Item, Field

class KickstarterItem(Item):
    # define the fields for your item here like:
    date = Field()
    projname = Field()
    projauthor = Field()
    backers = Field()
    collmoney = Field()
    goalmoney = Field()
    pass

My middleware looks like this:

import os

from scrapy.dupefilter import RFPDupeFilter
from scrapy.utils.request import request_fingerprint

class CustomFilter(RFPDupeFilter):
    def __getid(self, url):
        mm = url.split("/")[4]  # extracts the project id (a number) from the project URL
        print "_____________", mm
        return mm

    def request_seen(self, request):
        fp = self.__getid(request.url)
        self.fingerprints.add(fp)
        if fp in self.fingerprints and fp.isdigit():  # .isdigit() checks whether fp comes from a project ID
            print "______fp is a number (therefore a project-id) and has been encountered before______"
            return True
        if self.file:
            self.file.write(fp + os.linesep)

I added this line to my settings.py file:

DUPEFILTER_CLASS = 'crawlspider.duplicate_filter.CustomFilter'

I run the spider with "scrapy crawl kickstarter -o items.json -t json". The print statements from the middleware code then show up and look correct.

Can anyone tell me why the JSON file still contains multiple entries with the same data?

1 Answer

1

Now, these are the three changes that got rid of the duplicates:

I added this line to my settings.py file:

ITEM_PIPELINES = ['crawlspider.pipelines.DuplicatesPipeline',]

so that Scrapy knows about the DuplicatesPipeline class I added in pipelines.py:

from scrapy.exceptions import DropItem

class DuplicatesPipeline(object):

    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        if item['projname'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['projname'])
            return item

You don't need to adjust the spider, and you don't need the dupefilter/middleware stuff I posted earlier.

That said, I have the feeling that my solution doesn't reduce the network traffic, since the Item object has to be created before it can be evaluated and possibly dropped. But I'm fine with that.
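
For what it's worth, if the extra downloads ever did matter, a request-level duplicate filter along the lines of the CustomFilter from the question could drop repeat project URLs before they are fetched at all. The sketch below is only an illustration, not part of the original code (the name ProjectIdDupeFilter and the helper _project_id are made up for this example): it checks the fingerprint set before adding to it, and falls back to Scrapy's default request fingerprinting for URLs that don't carry a numeric project id:

import os

from scrapy.dupefilter import RFPDupeFilter

class ProjectIdDupeFilter(RFPDupeFilter):
    """Drops repeat requests for the same project id,
    so duplicate project pages are never downloaded at all."""

    def _project_id(self, url):
        # assumes the project id is the fifth path segment, as in the question
        parts = url.split("/")
        return parts[4] if len(parts) > 4 else None

    def request_seen(self, request):
        fp = self._project_id(request.url)
        if fp is None or not fp.isdigit():
            # not a project URL: use Scrapy's default request fingerprinting
            return super(ProjectIdDupeFilter, self).request_seen(request)
        if fp in self.fingerprints:
            # seen before: returning True makes Scrapy drop the request
            return True
        self.fingerprints.add(fp)
        if self.file:
            self.file.write(fp + os.linesep)
        return False

It would be registered the same way as the filter in the question, via DUPEFILTER_CLASS in settings.py.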

(This is the solution the asker found themselves, moved into an answer.)
