Extracting text from a table using Python and lxml
I recently came across a question from a user about extracting information from a web-page table, linked here: Extracting information from a web page with Python. ekhumoro's answer worked well for the page that user asked about. The code is below.
from urllib2 import urlopen
from lxml import etree
url = 'http://www.uscho.com/standings/division-i-men/2011-2012/'
tree = etree.HTML(urlopen(url).read())
for section in tree.xpath('//section[starts-with(@id, "section_")]'):
    print section.xpath('h3[1]/text()')[0]
    for row in section.xpath('table/tbody/tr'):
        cols = row.xpath('td//text()')
        print ' ', cols[0].ljust(25), ' '.join(cols[1:])
    print
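The pattern above can be exercised on a small inline document, with no network access needed. This is a sketch using hypothetical markup that mimics the standings page's structure, written in Python 3 syntax:

```python
from lxml import etree

# Hypothetical snippet resembling the standings page: a <section> with an
# id starting with "section_", a heading, and a table with an explicit <tbody>.
html = '''
<section id="section_1">
  <h3>Atlantic Hockey</h3>
  <table><tbody>
    <tr><td>Mercyhurst</td><td>10</td></tr>
    <tr><td>Niagara</td><td>9</td></tr>
  </tbody></table>
</section>
'''
tree = etree.HTML(html)
for section in tree.xpath('//section[starts-with(@id, "section_")]'):
    # h3[1]/text() grabs the section heading.
    print(section.xpath('h3[1]/text()')[0])
    for row in section.xpath('table/tbody/tr'):
        # td//text() collects all text nodes under each cell.
        cols = row.xpath('td//text()')
        print(' ', cols[0].ljust(25), ' '.join(cols[1:]))
```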
My question is that I would like to use this code as a reference to parse this page: http://www.uscho.com/rankings/d-i-mens-poll/. With a few modifications, however, I can only get the h1 and h3 content to print.
Input
url = 'http://www.uscho.com/rankings/d-i-mens-poll/'
tree = etree.HTML(urlopen(url).read())
for section in tree.xpath('//section[starts-with(@id, "rankings")]'):
    print section.xpath('h1[1]/text()')[0]
    print section.xpath('h3[1]/text()')[0]
    for row in section.xpath('table/tbody/tr'):
        cols = row.xpath('td/b/text()')
        print ' ', cols[0].ljust(25), ' '.join(cols[1:])
    print
Output
USCHO.com Division I Men's Poll
December 12, 2011
The table structure looks the same, so I don't understand why similar code doesn't work here. I'm just a mechanical engineer and feel out of my depth. Any help would be appreciated.
4 Answers
0
Replace 'table/tbody/tr' with 'table/tr'.
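The likely reason this helps: browsers insert an implicit tbody when rendering a table, but lxml's HTML parser does not synthesize one, so if the page source omits tbody the original XPath matches nothing. A minimal sketch with inline markup:

```python
from lxml import etree

# Source markup without an explicit <tbody>, as many pages are written.
tree = etree.HTML('<table><tr><td>Minnesota</td></tr></table>')

# lxml does not add a <tbody> node, so this path finds no rows ...
print(tree.xpath('//table/tbody/tr'))
# ... while dropping tbody from the path does find the row.
print(tree.xpath('//table/tr'))
```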
2
This table's structure is slightly different, and it contains some blank columns. You can handle that with lxml:
from urllib2 import urlopen
from lxml import etree
url = 'http://www.uscho.com/rankings/d-i-mens-poll/'
tree = etree.HTML(urlopen(url).read())
for section in tree.xpath('//section[@id="rankings"]'):
    print section.xpath('h1[1]/text()')[0],
    print section.xpath('h3[1]/text()')[0]
    print
    for row in section.xpath('table/tr[@class="even" or @class="odd"]'):
        print '%-3s %-20s %10s %10s %10s %10s' % tuple(
            ''.join(col.xpath('.//text()')) for col in row.xpath('td'))
    print
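The two key differences here are the row filter on the class attribute and joining all descendant text of each cell, which copes with values wrapped in nested tags and with blank cells. A small illustration with hypothetical markup, in Python 3 syntax:

```python
from lxml import etree

# Hypothetical rows resembling the poll table: a header row without an
# even/odd class, values wrapped in <b>, and one empty cell.
html = ('<table>'
        '<tr class="head"><th>Rk</th></tr>'
        '<tr class="even"><td><b>1</b></td>'
        '<td>Minnesota-Duluth</td><td></td></tr>'
        '</table>')
tree = etree.HTML(html)

# The class filter skips the header row.
rows = tree.xpath('//table/tr[@class="even" or @class="odd"]')
for row in rows:
    # ''.join(...) flattens nested text (the <b> value) and turns an empty
    # cell into '' instead of raising an IndexError.
    cells = [''.join(col.xpath('.//text()')) for col in row.xpath('td')]
    print(cells)
```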
Output:
USCHO.com Division I Men's Poll December 12, 2011
1 Minnesota-Duluth (49) 12-3-3 999 1
2 Minnesota 14-5-1 901 2
3 Boston College 12-6-0 875 3
4 Ohio State ( 1) 13-4-1 848 4
5 Merrimack 10-2-2 844 5
6 Notre Dame 11-6-3 667 7
7 Colorado College 9-5-0 650 6
8 Western Michigan 9-4-5 647 8
9 Boston University 10-5-1 581 11
10 Ferris State 11-6-1 521 9
11 Union 8-3-5 510 10
12 Colgate 11-4-2 495 12
13 Cornell 7-3-1 347 16
14 Denver 7-6-3 329 13
15 Michigan State 10-6-2 306 14
16 Lake Superior 11-7-2 258 15
17 Massachusetts-Lowell 10-5-0 251 18
18 North Dakota 9-8-1 88 19
19 Yale 6-5-1 69 17
20 Michigan 9-8-3 62 NR
4
lxml is a great tool, but if you are not comfortable with xpath, I would suggest BeautifulSoup:
from urllib2 import urlopen
from BeautifulSoup import BeautifulSoup
url = 'http://www.uscho.com/rankings/d-i-mens-poll/'
soup = BeautifulSoup(urlopen(url).read())
section = soup.find('section', id='rankings')
h1 = section.find('h1')
print h1.text
h3 = section.find('h3')
print h3.text
print
rows = section.find('table').findAll('tr')[1:-1]
for row in rows:
    columns = [data.text for data in row.findAll('td')[1:]]
    print '{0:20} {1:4} {2:>6} {3:>4}'.format(*columns)
The output of this script is:
USCHO.com Division I Men's Poll
December 12, 2011
Minnesota-Duluth (49) 12-3-3 999
Minnesota 14-5-1 901
Boston College 12-6-0 875
Ohio State ( 1) 13-4-1 848
Merrimack 10-2-2 844
Notre Dame 11-6-3 667
Colorado College 9-5-0 650
Western Michigan 9-4-5 647
Boston University 10-5-1 581
Ferris State 11-6-1 521
Union 8-3-5 510
Colgate 11-4-2 495
Cornell 7-3-1 347
Denver 7-6-3 329
Michigan State 10-6-2 306
Lake Superior 11-7-2 258
Massachusetts-Lowell 10-5-0 251
North Dakota 9-8-1 88
Yale 6-5-1 69
Michigan 9-8-3 62
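As an aside, the format spec in the print line above is what does the column layout: a bare width such as {0:20} left-aligns and pads strings to 20 characters, while {2:>6} right-aligns within 6. A quick check with sample values taken from the output, in Python 3 syntax:

```python
# Sample row values from the poll output above.
columns = ['Minnesota-Duluth (49)', '12-3-3', '999', '1']

# Width alone left-aligns strings; '>' right-aligns within the width.
# A value longer than its width (the 21-char team name) is not truncated.
line = '{0:20} {1:4} {2:>6} {3:>4}'.format(*columns)
print(line)
```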