残破递归林

Posted 2024-05-28 19:36:58


It starts from a URL on the web (e.g. http://python.org), fetches the web page at that URL, and parses all the links on that page into a link repository. It then fetches the content of any URL from the repository it just built, parses the links in that new content into the repository, and keeps doing this for every link in the repository until it is stopped or a given number of links has been fetched.

How can I do this with Python and Scrapy? I am able to grab all the links on a single web page, but how do I do it recursively, in depth?
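For concreteness, the pattern being asked about maps onto Scrapy roughly as in the sketch below (the spider name, start URL and limits are placeholders; CrawlSpider, Rule and LinkExtractor are Scrapy's built-in link-following pieces, and DEPTH_LIMIT / CLOSESPIDER_PAGECOUNT bound the recursion):

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class LinkSpider(CrawlSpider):
    """Follow every link found on every fetched page, up to the limits below."""
    name = "link_spider"                      # illustrative name
    start_urls = ["http://python.org"]

    # One rule: extract all links, record each visited page, keep following.
    rules = (Rule(LinkExtractor(), callback="parse_item", follow=True),)

    custom_settings = {
        "DEPTH_LIMIT": 2,               # stop recursing past this depth
        "CLOSESPIDER_PAGECOUNT": 100,   # or after this many pages
    }

    def parse_item(self, response):
        yield {"url": response.url}


if __name__ == "__main__":
    process = CrawlerProcess()
    process.crawl(LinkSpider)
    process.start()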


Tags: org, web, http, url, webpage, content, count, links
2 Answers

A few remarks:

  • You don't need Scrapy for such a simple task. urllib (or requests) plus an HTML parser (BeautifulSoup, etc.) can do the job.
  • I don't remember where I heard it, but I think it is better to crawl with a BFS algorithm: it makes it easy to avoid circular references (see the sketch after this list).
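To illustrate the BFS point, here is a minimal sketch (the function name bfs_crawl and the max_pages limit are illustrative, and requests plus BeautifulSoup are assumed to be installed) that uses a deque as the frontier and a visited set to break cycles:

from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def bfs_crawl(start_url, max_pages=20):
    """Breadth-first crawl; the visited set prevents circular references."""
    queue = deque([start_url])
    visited = set()

    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)

        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue  # skip pages that fail to load

        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])  # resolve relative links too
            if link.startswith("http") and link not in visited:
                queue.append(link)

    return visited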

Here is a simple implementation: it does not fetch internal links (only hyperlinks in absolute form), it has no error handling at all (403, 404, no links, ...), and it is really slow (the multiprocessing module could help a lot here).

import random
import urllib.request

from bs4 import BeautifulSoup


class Crawler(object):
    """Tiny crawler that repeatedly follows a random unvisited absolute link."""

    def __init__(self):

        self.soup = None                                        # BeautifulSoup object for the current page
        self.current_page   = "http://www.python.org/"          # Current page's address
        self.links          = set()                             # Every link fetched so far
        self.visited_links  = set()                             # Links that have already been opened

        self.counter = 0  # Simple counter for debug purposes

    def open(self):

        # Open url
        print(self.counter, ":", self.current_page)
        res = urllib.request.urlopen(self.current_page)
        html_code = res.read()
        self.visited_links.add(self.current_page)

        # Fetch every link
        self.soup = BeautifulSoup(html_code, "html.parser")

        page_links = []
        try:
            # Only deal with absolute links
            page_links = [a.get('href') for a in self.soup.find_all('a')
                          if a.get('href') and 'http://' in a.get('href')]
        except Exception:  # Magnificent exception handling
            pass

        # Update links
        self.links = self.links.union(set(page_links))

        # Choose a random url from the non-visited set (keep the current one if none is left)
        unvisited = self.links.difference(self.visited_links)
        if unvisited:
            self.current_page = random.choice(list(unvisited))
        self.counter += 1

    def run(self):

        # Crawl 3 webpages (or stop earlier if every known url has been fetched)
        while len(self.visited_links) < 3 and self.current_page not in self.visited_links:
            self.open()

        for link in self.links:
            print(link)


if __name__ == '__main__':

    C = Crawler()
    C.run()


Below is the main crawl method, which recursively scrapes links from web pages. It crawls one URL and puts every URL it finds into a buffer; multiple threads then wait to pop URLs from this global buffer and call the same crawl method on them again (a sketch of that worker loop follows the code).

def crawl(self,urlObj):
    '''Main function to crawl URL's '''

    try:
        if ((urlObj.valid) and (urlObj.url not in CRAWLED_URLS.keys())):
            rsp = urlcon.urlopen(urlObj.url,timeout=2)
            hCode = rsp.read()
            soup = BeautifulSoup(hCode)
            links = self.scrap(soup)
            boolStatus = self.checkmax()
            if boolStatus:
                CRAWLED_URLS.setdefault(urlObj.url,"True")
            else:
                return
            for eachLink in links:
                if eachLink not in VISITED_URLS:
                    parsedURL = urlparse(eachLink)
                    if parsedURL.scheme and "javascript" in parsedURL.scheme:
                        #print("***************Javascript found in scheme " + str(eachLink) + "**************")
                        continue
                    '''Handle internal URLs '''
                    try:
                        if not parsedURL.scheme and not parsedURL.netloc:
                            #print("No scheme and host found for "  + str(eachLink))
                            newURL = urlunparse(parsedURL._replace(**{"scheme":urlObj.scheme,"netloc":urlObj.netloc}))
                            eachLink = newURL
                        elif not parsedURL.scheme :
                            #print("Scheme not found for " + str(eachLink))
                            newURL = urlunparse(parsedURL._replace(**{"scheme":urlObj.scheme}))
                            eachLink = newURL
                        if eachLink not in VISITED_URLS: #Check again for internal URL's
                            #print(" Found child link " + eachLink)
                            CRAWL_BUFFER.append(eachLink)
                            with self._lock:
                                self.count += 1
                                #print(" Count is =================> " + str(self.count))
                            boolStatus = self.checkmax()
                            if boolStatus:
                                VISITED_URLS.setdefault(eachLink, "True")
                            else:
                                return
                    except TypeError:
                        print("Type error occured ")
        else:
            print("URL already present in visited " + str(urlObj.url))
    except socket.timeout as e:
        print("**************** Socket timeout occured*******************" )
    except URLError as e:
        if isinstance(e.reason, ConnectionRefusedError):
            print("**************** Conn refused error occured*******************")
        elif isinstance(e.reason, socket.timeout):
            print("**************** Socket timed out error occured***************" )
        elif isinstance(e.reason, OSError):
            print("**************** OS error occured*************")
        elif isinstance(e,HTTPError):
            print("**************** HTTP Error occured*************")
        else:
            print("**************** URL Error occured***************")
    except Exception as e:
        print("Unknown exception occured while fetching HTML code" + str(e))
        traceback.print_exc()
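The threads that consume the buffer are not shown above. As a rough sketch only (not the repository's actual code: there CRAWL_BUFFER is a plain list, while queue.Queue is used here for thread safety, and the items popped would still need to be wrapped into the URL object that crawl() expects), each worker could look like this:

import queue
import threading

CRAWL_BUFFER = queue.Queue()  # hypothetical thread-safe stand-in for the global buffer


def worker(crawler):
    """Pop items from the shared buffer and feed them back into crawl()."""
    while True:
        url_obj = CRAWL_BUFFER.get()   # blocks until an item is available
        try:
            crawler.crawl(url_obj)     # re-enter the crawl method shown above
        finally:
            CRAWL_BUFFER.task_done()


# Start a handful of worker threads, e.g.:
# for _ in range(4):
#     threading.Thread(target=worker, args=(crawler,), daemon=True).start()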

The complete source code and instructions are available at https://github.com/tarunbansal/crawler
