Scrapy: only follow internal URLs, but extract all links found

Posted 2024-04-28 20:02:07


I would like to get all external links from a given website using Scrapy. With the following code, however, the spider crawls the external links as well:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from myproject.items import someItem

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    # Follow every extracted link and hand each response to parse_obj.
    rules = (
        Rule(LinkExtractor(), callback='parse_obj', follow=True),
    )

    def parse_obj(self, response):
        item = someItem()
        item['url'] = response.url
        return item

What am I missing? Does `allowed_domains` prevent the external links from being crawled? If I set `allow_domains` on the LinkExtractor, it does not extract the external links. Just to clarify: I don't want to crawl the external links, only extract them. Any help appreciated!
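Some context on what allowed_domains does here: Scrapy's OffsiteMiddleware drops any request whose host falls outside allowed_domains before it is fetched, so external pages are never crawled, but links to them still appear on the internal pages that are crawled. A minimal sketch of that kind of domain check (the example URLs are made up), using Scrapy's url_is_from_any_domain helper:

# Classify links as internal or external with the same kind of domain
# test the offsite filter performs. Example URLs are hypothetical.
from scrapy.utils.url import url_is_from_any_domain

allowed = ['someurl.com']
links = [
    'http://www.someurl.com/about',   # internal: would be followed
    'http://external.example/page',   # external: request would be dropped
]

for url in links:
    kind = 'internal' if url_is_from_any_domain(url, allowed) else 'external'
    print(url, '->', kind)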


3 Answers

You can also use a link extractor to pull all the links once you are parsing each page.

The link extractor will filter the links for you. In this example, the link extractor denies links in the allowed domain, so it only gets external links.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LxmlLinkExtractor
from myproject.items import someItem

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    # Follow every link so internal pages keep getting crawled.
    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        # deny= rejects URLs matching the allowed domain, so only
        # external links come back from the extractor.
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item = someItem()
            item['url'] = link.url
            yield item

Code updated based on 12Ryan12's answer:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.item import Item, Field

class MyItem(Item):
    url = Field()


class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']
    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        # Collect every off-domain link found on the page into one item.
        item = MyItem()
        item['url'] = []
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item['url'].append(link.url)
        return item
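A caveat about the deny= argument used in both snippets above (a side note, not from the original answer): LxmlLinkExtractor treats deny= entries as regular expressions matched against the whole URL, whereas deny_domains= matches the host itself. Passing self.allowed_domains to deny= happens to work here because 'someurl.com' also matches as a regex substring, but deny_domains states the intent directly. A self-contained sketch with made-up markup:

# Demonstrates deny_domains filtering on a synthetic response.
from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

html = b'''
<a href="http://www.someurl.com/internal">internal</a>
<a href="http://other.example/external">external</a>
'''
response = HtmlResponse(url='http://www.someurl.com/', body=html, encoding='utf-8')

# Keep only links whose host is NOT under the allowed domain.
extractor = LinkExtractor(allow=(), deny_domains=['someurl.com'])
for link in extractor.extract_links(response):
    print(link.url)   # prints only http://other.example/external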

A solution would be to use the process_links function of the SgmlLinkExtractor. Documentation here: http://doc.scrapy.org/en/latest/topics/link-extractors.html

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class testSpider(CrawlSpider):
    name = 'test'
    bot_name = 'test'
    allowed_domains = ['news.google.com']
    start_urls = ['https://news.google.com/']
    rules = (
        Rule(SgmlLinkExtractor(allow_domains=()), callback='parse_items',
             process_links='filter_links', follow=True),
    )

    def filter_links(self, links):
        # Print any link pointing outside the allowed domain, then pass
        # the full list on so crawling continues unchanged.
        for link in links:
            if self.allowed_domains[0] not in link.url:
                print(link.url)
        return links

    def parse_items(self, response):
        ### ...
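Side note: SgmlLinkExtractor was deprecated and later removed from Scrapy, but the process_links hook works identically with the stock LinkExtractor on current versions, so only the imports and the Rule change. A sketch of the modern equivalent (the spider name here is hypothetical):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class TestSpiderModern(CrawlSpider):
    name = 'test_modern'   # hypothetical name
    allowed_domains = ['news.google.com']
    start_urls = ['https://news.google.com/']
    rules = (
        Rule(LinkExtractor(), callback='parse_items',
             process_links='filter_links', follow=True),
    )

    def filter_links(self, links):
        # Same logic as the answer above: report external links, return all.
        for link in links:
            if self.allowed_domains[0] not in link.url:
                print(link.url)
        return links

    def parse_items(self, response):
        pass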
