Cleaning an HTML table with Pandas
I want to read a table from a website and extract the values in it. To do so, I did the following:
url = 'http://www.astro.keele.ac.uk/jkt/debcat/'
df = pd.read_html(url, header=0)
Even though I set header=0, df[0] still came back as a header table, so I then did:
df1 = df[1]
df1.shape
(161, 11)
df1.columns
Index([u' System ', u' Period (days) ', u' V B-V ', u' Spectral type ', u' Mass (Msun )', u' Radius (Rsun) ', u' Surface gravity (cgs) ', u' log Teff (K) ', u' log (L/Lsun) ', u' [M/H] (dex) ', u' References and notes '], dtype='object')
However, I ran into a problem:
df1.Period
AttributeError: 'DataFrame' object has no attribute 'Period'
I also can't do this:
df1.to_csv('junk.csv')
So how can I access the columns and clean up this table? Thanks!
2 Answers
0
The column name was parsed as u' Period (days) ', so to access that column you can do:
>>> df1[ u' Period (days) ' ]
That said, you may want to use an HTML parsing library for this job; BeautifulSoup, for instance, handles it nicely:
>>> from bs4 import BeautifulSoup
>>> from urllib2 import urlopen
>>> url = 'http://www.astro.keele.ac.uk/jkt/debcat/'
>>> html = urlopen(url).read()
>>> soup = BeautifulSoup(html)
>>> # catch the target table by its attributes
>>> table = soup.find('table', attrs={'frame':'BOX', 'rules':'ALL'})
>>> # parse the table as a list of lists; each row as a single list
>>> tbl = [[td.getText() for td in tr.findAll(['td', 'th'])] for tr in table.findAll('tr')]
The final tbl is the target table as a list of lists; that is, each row is a list of that row's cell values. For example, tbl[0] is the header:
>>> tbl[0]
[u' System ', u' Period (days) ', u' V B-V ', u' Spectral type ', u' Mass (Msun )', u' Radius (Rsun) ', u' Surface gravity (cgs) ', u' log Teff (K) ', u' log (L/Lsun) ', u' [M/H] (dex) ', u' References and notes ']
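If you do go the BeautifulSoup route, the list of lists converts straight back into a DataFrame. A minimal sketch, using a hardcoded two-row sample in place of the full tbl (the column names and values below are hand-copied from the table, not fetched):

```python
import pandas as pd

# A tiny hardcoded sample standing in for the parsed `tbl`:
# the header row plus one data row, with the original stray whitespace
tbl = [
    [' System ', ' Period (days) ', ' References and notes '],
    ['V3903 Sgr', '1.744', 'Vaz et al. (1997A&A...327.1094V)'],
]

# The first row is the header; strip the surrounding spaces while we're at it
df = pd.DataFrame(tbl[1:], columns=[c.strip() for c in tbl[0]])
print(df['Period (days)'].iloc[0])  # → 1.744
```

Note the values are still strings at this point; numeric columns would need a pd.to_numeric pass afterwards.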
1
It looks like the data is in pretty good shape already:
>>> url = 'http://www.astro.keele.ac.uk/jkt/debcat/'
>>> df = pd.read_html(url, header=0)
>>> df1 = df[1]
>>> df1.head()
System Period (days) V B-V Spectral type \
0 V3903 Sgr 1.744 NaT NaT
1 V467 Vel 2.753 NaT NaT
2 EM Car 3.414 NaT NaT
3 Y Cyg 2.996 NaT NaT
4 V478 Cyg 2.881 NaT NaT
Mass (Msun ) Radius (Rsun) \
0 27.27 ± 0.55 19.01 ± 0.44 8.088 ± 0.086 6.125 ± 0.060
1 25.3 ± 0.7 8.25 ± 0.17 9.99 ± 0.09 3.49 ± 0.03
2 22.89 ± 0.32 21.43 ± 0.33 9.35 ± 0.17 8.34 ± 0.14
3 17.57 ± 0.27 17.04 ± 0.26 5.93 ± 0.07 5.78 ± 0.07
4 16.67 ± 0.45 16.31 ± 0.35 7.423 ± 0.079 7.423 ± 0.079
Surface gravity (cgs) log Teff (K) \
0 4.058 ± 0.016 4.143 ± 0.013 4.580 ± 0.021 4.531 ± 0.021
1 3.842 ± 0.016 4.268 ± 0.017 4.559 ± 0.031 4.402 ± 0.046
2 3.856 ± 0.017 3.926 ± 0.016 4.531 ± 0.026 4.531 ± 0.026
3 4.16 ± 0.10 4.18 ± 0.10 4.545 ± 0.007 4.534 ± 0.007
4 3.919 ± 0.015 3.909 ± 0.013 4.484 ± 0.015 4.485 ± 0.015
log (L/Lsun) [M/H] (dex) \
0 5.087 ± 0.029 4.658 ± 0.032 NaN
1 5.187 ± 0.126 3.649 ± 0.110 NaN
2 5.02 ± 0.10 4.92 ± 0.10 NaN
3 NaN 0.00 ± 0.00
4 4.63 ± 0.06 4.63 ± 0.06 NaN
References and notes
0 Vaz et al. (1997A&A...327.1094V)
1 Michalska et al. (2013MNRAS.429.1354M)
2 Andersen & Clausen (1989A&A...213..183A)
3 Simon, Sturm & Fiedler (1994A&A...292..507S)
4 Popper & Hill (1991AJ....101..600P) Popper & E...
[5 rows x 11 columns]
Since you know how to look at the columns:
>>> df1.columns
Index([u' System ', u' Period (days) ', u' V B-V ', u' Spectral type ', u' Mass (Msun )', u' Radius (Rsun) ', u' Surface gravity (cgs) ', u' log Teff (K) ', u' log (L/Lsun) ', u' [M/H] (dex) ', u' References and notes '], dtype='object')
it should be no surprise that df1.Period doesn't work: there is no column named Period, and Pandas won't guess which column comes closest. If you want to tidy up the column names, you can try the following:
>>> df1.columns = [x.strip() for x in df1.columns] # get rid of the leading/trailing spaces
>>> df1 = df1.rename(columns={"Period (days)": "Period"})
After that, both df1["Period"] (recommended) and df1.Period (the shorthand) work:
>>> df1["Period"].describe()
count 161.000000
mean 32.035019
std 98.392634
min 0.452000
25% 2.293000
50% 3.895000
75% 9.945000
max 771.781000
Name: Period, dtype: float64
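Period parsed as floats because its cells are plain numbers; columns like Mass (Msun) hold "value ± error" strings and need an extra split before they become numeric. A minimal sketch on a hand-copied sample (the values are from the table above; note the real cells actually hold two such pairs, one per star, which would need a further split first):

```python
import pandas as pd

# Hand-copied sample of a "value ± error" column from the table above
s = pd.Series(['27.27 ± 0.55', '25.3 ± 0.7', '22.89 ± 0.32'])

# Split each string on the ± sign, then coerce both halves to floats
parts = s.str.split('±', expand=True)
value = pd.to_numeric(parts[0].str.strip())
error = pd.to_numeric(parts[1].str.strip())
print(value.mean())
```

With the values and errors in separate numeric Series, describe(), mean() and friends work as expected.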
Your "I also can't do df1.to_csv('junk.csv')" isn't a proper bug report, since you don't say why not, or what happens when you run it. My guess is that you hit an encoding error:
>>> df1.to_csv("out.csv")
Traceback (most recent call last):
[...]
File "lib.pyx", line 845, in pandas.lib.write_csv_rows (pandas/lib.c:14261)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xb1' in position 6: ordinal not in range(128)
That can be avoided by specifying a suitable encoding:
>>> df1.to_csv("out.csv", encoding="utf8")
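A quick way to check the fix is a round trip through a small frame containing the ± character that broke the default encoder (the file name is arbitrary; on Python 3, pandas writes UTF-8 by default, so the explicit argument mainly matters on Python 2):

```python
import pandas as pd

# A one-cell frame containing the non-ASCII ± character
df = pd.DataFrame({'Mass (Msun)': ['27.27 ± 0.55']})
df.to_csv('out.csv', encoding='utf8', index=False)

# Reading it back shows the character survived the round trip
back = pd.read_csv('out.csv', encoding='utf8')
print(back['Mass (Msun)'].iloc[0])  # → 27.27 ± 0.55
```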