How to reliably handle web page data in Python

1 vote
1 answer
1180 views
asked 2025-04-16 14:11

I'm fetching data from a website with the following code:

import socket
import httplib
import urllib2

time_out = 4

def tryconnect(turl, timer=time_out, retries=10):
    urlopener = None
    sitefound = 1
    tried = 0
    while (sitefound != 0) and tried < retries:
        try:
            urlopener = urllib2.urlopen(turl, None, timer)
            sitefound = 0
        except urllib2.URLError:
            tried += 1
    if urlopener: return urlopener
    else: return None

[...]

urlopener = tryconnect('www.example.com')
if not urlopener:
    return None
try:
    for line in urlopener:
        do stuff
except httplib.IncompleteRead:
    print 'incomplete'
    return None
except socket.timeout:
    print 'socket'
    return None
return stuff

Is there a way to handle these errors without writing this much repetitive code every time?

Thanks!

1 Answer

6

You can cut down on some of the duplicated code in the first function:

time_out = 4

def tryconnect(turl, timer=time_out, retries=10):
    for tried in xrange(retries):
        try:
            return urllib2.urlopen(turl, None, timer)
        except urllib2.URLError:
            pass
    return None
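If you need this retry pattern in more than one place, you can factor it out further into a decorator. Here is a minimal sketch in modern Python 3 (where `urllib2` became `urllib.request`); the names `retry` and `flaky` are illustrative, not from the original code:

```python
import functools

def retry(exceptions, retries=10):
    """Retry the wrapped call up to `retries` times on the given
    exceptions, returning None if every attempt fails."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for _ in range(retries):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    pass
            return None
        return wrapper
    return decorator

# Illustrative use: a function that fails twice, then succeeds.
attempts = {'count': 0}

@retry((ValueError,), retries=5)
def flaky():
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise ValueError('transient failure')
    return 'ok'
```

With this, `tryconnect` reduces to decorating a one-line `urlopen` call with `@retry((urllib.error.URLError,), retries=10)`.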

And likewise in the second:

urlopener = tryconnect('www.example.com')
if urlopener:
    try:
        for line in urlopener:
            do stuff
    except (httplib.IncompleteRead, socket.timeout), e:
        print e
        return None
else:
    return None
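Going one step further, you can fold the read loop and its error handling into a single helper, so the `IncompleteRead`/`timeout` handling lives in one place. A Python 3 sketch, where `fetch_lines` and the `process` callback are illustrative names standing in for your "do stuff" logic:

```python
import socket
from http.client import IncompleteRead  # httplib in Python 2

def fetch_lines(urlopener, process):
    """Apply `process` to each line of the opened resource.

    Returns the list of results, or None if the connection was never
    made or the read was truncated or timed out."""
    if urlopener is None:
        return None
    try:
        return [process(line) for line in urlopener]
    except (IncompleteRead, socket.timeout) as e:
        print(e)
        return None
```

The call site then collapses to something like `fetch_lines(tryconnect('http://www.example.com'), do_stuff)`.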
