Python web crawler: connection timed out

0 votes
2 answers
2116 views
Asked 2025-04-17 13:27

I am trying to implement a simple web crawler and have written some basic code to get started. The code is split into two modules, fetcher.py and crawler.py. Here are the two files:

fetcher.py:

    import urllib2

    def fetcher(s):
        """Fetch a web page from a URL, cache it in a file named after
        the host, and return the open file object (or None on error)."""
        try:
            req = urllib2.Request(s)
            urlResponse = urllib2.urlopen(req).read()
        except urllib2.URLError as e:
            print e.reason
            return

        # Use the host part of the URL as the cache file name.
        p, q = s.split("//")
        d = q.split("/")
        fdes = open(d[0], "w+")
        fdes.write(str(urlResponse))
        fdes.seek(0)
        return fdes

    if __name__ == "__main__":
        defaultSeed = "http://www.python.org"
        print fetcher(defaultSeed)

crawler.py:

    from bs4 import BeautifulSoup
    import re

    from fetcher import fetcher

    usedLinks = open("Used", "a+")   # links already crawled
    newLinks = open("New", "w+")     # links discovered while parsing

    newLinks.seek(0)

    def parse(fd, var=0):
        """Append every absolute link on the page to the New file and
        return the next link to crawl, reading from offset var."""
        soup = BeautifulSoup(fd)
        for li in soup.find_all("a", href=re.compile("http")):
            newLinks.seek(0, 2)   # jump to end of file before appending
            newLinks.write(str(li.get("href")).strip("/"))
            newLinks.write("\n")

        fd.close()
        newLinks.seek(var)
        link = newLinks.readline().strip("\n")

        return str(link)

    def crawler(seed, n):
        if n == 0:
            usedLinks.close()
            newLinks.close()
            return
        else:
            usedLinks.write(seed)
            usedLinks.write("\n")
            fdes = fetcher(seed)
            newSeed = parse(fdes, newLinks.tell())
            crawler(newSeed, n - 1)

    if __name__ == "__main__":
        crawler("http://www.python.org/", 7)

The problem is that when I run crawler.py, the first 4 or 5 links work fine, but then the program hangs, and after about a minute I get the following error:

    [Errno 110] Connection timed out
    Traceback (most recent call last):
      File "crawler.py", line 37, in <module>
        crawler("http://www.python.org/",7)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 33, in crawler
        newSeed = parse(fdes,newLinks.tell())
      File "crawler.py", line 11, in parse
        soup = BeautifulSoup(fd)
      File "/usr/lib/python2.7/dist-packages/bs4/__init__.py", line 169, in __init__
        self.builder.prepare_markup(markup, from_encoding))
      File "/usr/lib/python2.7/dist-packages/bs4/builder/_lxml.py", line 68, in prepare_markup
        dammit = UnicodeDammit(markup, try_encodings, is_html=True)
      File "/usr/lib/python2.7/dist-packages/bs4/dammit.py", line 191, in __init__
        self._detectEncoding(markup, is_html)
      File "/usr/lib/python2.7/dist-packages/bs4/dammit.py", line 362, in _detectEncoding
        xml_encoding_match = xml_encoding_re.match(xml_data)
    TypeError: expected string or buffer

Can anyone help me with this? I am new to Python and cannot figure out why I get a connection timeout after a while.

2 Answers

0

You could try using a proxy to avoid being detected when sending multiple requests. Have a look at this answer on how to send urllib requests through a proxy: How to open a website with urllib via a proxy - Python
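
For example, here is a minimal sketch of routing urllib2 requests through a proxy; the proxy address below is a placeholder you would replace with a real one:

    import urllib2

    # Route all subsequent urllib2 requests through an HTTP proxy.
    # "http://127.0.0.1:8080" is a placeholder address, not a working proxy.
    proxy = urllib2.ProxyHandler({"http": "http://127.0.0.1:8080"})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)   # every urlopen() call now goes via the proxy

    print urllib2.urlopen("http://www.python.org/").read()[:200]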

0

A connection timeout is not a Python-specific concept; it means you made a request to a server and the server did not respond within the time your program was willing to wait.

One likely reason this happens is that python.org may have some mechanism to detect multiple requests coming from a script, and it may stop serving pages altogether after the 4th or 5th request. There is not much you can do to avoid this other than trying your script on a different site.
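
Separately, note that your fetcher() returns None when a URLError occurs, and passing None to BeautifulSoup is what produces the TypeError at the bottom of your traceback. Here is a minimal sketch (assuming Python 2.6+, where urlopen accepts a timeout argument) that fails fast instead of hanging, guards against the None case, and pauses between requests:

    import time
    import urllib2

    def fetch_with_timeout(url, timeout=10):
        """Fetch a URL, giving up after `timeout` seconds instead of hanging."""
        try:
            return urllib2.urlopen(url, timeout=timeout).read()
        except urllib2.URLError as e:
            print "failed to fetch %s: %s" % (url, e.reason)
            return None

    page = fetch_with_timeout("http://www.python.org/")
    if page is not None:   # skip parsing when the fetch failed
        print page[:200]
    time.sleep(2)          # pause between requests so the server is not hammered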
