Parsing HTML: lxml error in Python
I'm writing a simple script to pull the big gray table from this link.
Here's my code:
import urllib2
from lxml import etree
html = urllib2.urlopen("http://www.afi.com/100years/movies10.aspx").read()
root = etree.XML(html)
But the last line raises an error:
Traceback (most recent call last):
File "D:\Workspace\afi100\afi100.py", line 13, in <module>
root = etree.XML(html)
File "lxml.etree.pyx", line 2720, in lxml.etree.XML (src/lxml/lxml.etree.c:52577)
File "parser.pxi", line 1556, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:79602)
File "parser.pxi", line 1435, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:78449)
File "parser.pxi", line 943, in lxml.etree._BaseParser._parseDoc (src/lxml/lxml.etree.c:75099)
File "parser.pxi", line 547, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:71467)
File "parser.pxi", line 628, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:72340)
File "parser.pxi", line 568, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:71683)
XMLSyntaxError: Space required after the Public Identifier, line 3, column 59
Is there a way to fix this?
Thanks.
2 Answers
1
The document at that link is not valid XHTML, so you can't load it with an XML parser.
You need an HTML parser such as Beautiful Soup to handle it.
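The distinction can be shown without hitting the network. This is a minimal sketch (written for Python 3, unlike the question's Python 2 code, and using the standard library rather than Beautiful Soup): a strict XML parser rejects markup that is legal HTML, while a lenient HTML parser accepts it.

```python
# Strict XML parsing fails on typical real-world HTML, while a
# lenient HTML parser copes with the same input.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# An unclosed <br> is legal HTML but malformed XML.
snippet = "<html><body>Top 10 movies<br></body></html>"

try:
    ET.fromstring(snippet)
    strict_ok = True
except ET.ParseError:
    strict_ok = False

class TagCollector(HTMLParser):
    """Collects every start tag the tolerant parser encounters."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(snippet)  # tolerant: no exception

print(strict_ok)       # False -- the XML parser rejected the snippet
print(collector.tags)  # ['html', 'body', 'br']
```

The `Space required after the Public Identifier` message in the traceback is the same failure mode: libxml2's XML parser choking on a DOCTYPE line that an HTML parser would shrug off.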
10
You are parsing HTML with an XML parser; use lxml's HTML parser instead.
import urllib2
from lxml import etree

ufile = urllib2.urlopen("http://www.afi.com/100years/movies10.aspx")
# etree.HTMLParser() is lenient: it repairs the malformed markup
# instead of raising XMLSyntaxError.
root = etree.parse(ufile, etree.HTMLParser())
print etree.tostring(root)
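To see why this works, here is a self-contained sketch with an inline snippet instead of the live URL (the movie title below is illustrative, not from the real page): lxml's HTML parser recovers from markup that `etree.XML()` rejects, and the repaired tree can then be queried with XPath.

```python
# lxml's HTMLParser recovers from broken markup that the strict XML
# parser rejects, so the document parses and supports XPath queries.
from lxml import etree

# Missing </td> and </tr>: etree.XML() would raise XMLSyntaxError here.
broken = "<html><body><table><tr><td>Citizen Kane</table></body></html>"

# The HTML parser silently closes the open elements instead.
root = etree.fromstring(broken, etree.HTMLParser())
cells = root.xpath("//td/text()")
print(cells)  # ['Citizen Kane']
```

The same `xpath()` call on the parsed page would let you pull the table's rows directly instead of printing the whole serialized document.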