How do I fix parsing of an HTML file that contains Cyrillic characters?


I have an HTML file containing these elements:

<html>
<body>
<span class="one">Text</span>some text</br>
<span class="two">Привет</span>Текст на русском</br>
</body>
</html>

To get "some text" I use:

# -*- coding:cp1251 -*-
import lxml
from lxml import html

filename = "t.html"
fread = open(filename, 'r')
source = fread.read()

tree = html.fromstring(source)
fread.close()


tags = tree.xpath('//span[@class="one" and text()="Text"]') #This OK
print "name: ",tags[0].text
print "value: ",tags[0].tail

tags = tree.xpath('//span[@class="two" and text()="Привет"]') #This False

print "name: ",tags[0].text
print "value: ",tags[0].tail

The output is:

name: Text
value: some text

Traceback: ... in line `tags = tree.xpath('//span[@class="two" and text()="Привет"]')`
    ValueError: All strings must be XML compatible: Unicode or ASCII, no NULL bytes

How can I fix this?

4 Answers

1

I hit the same error while generating XML with lxml, and found the solution here: http://lethain.com/stripping-illegal-characters-from-xml-in-python/

All I did was:

import re

remove_re = re.compile(u'[\x00-\x08\x0B-\x0C\x0E-\x1F\x7F]')  # control chars XML 1.0 forbids
etree_sub_el.text = remove_re.sub('', text)
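
Here etree_sub_el and text come from the answerer's own XML-building code, which isn't shown. A minimal self-contained sketch of the same idea (root, item and the dirty string are made up for illustration):

# -*- coding: utf-8 -*-
import re
from lxml import etree

remove_re = re.compile(u'[\x00-\x08\x0B-\x0C\x0E-\x1F\x7F]')

root = etree.Element('root')
item = etree.SubElement(root, 'item')
dirty = u'Hello\x00world'              # assigning this directly raises ValueError
item.text = remove_re.sub('', dirty)   # strip the illegal characters first
print etree.tostring(root)             # <root><item>Helloworld</item></root>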
4

Try this:

tree = html.fromstring(source.decode('utf-8'))

and also this:

tags = tree.xpath('//span[@class="two" and text()="%s"]' % u'Привет' )
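
Putting both lines together, a minimal sketch (assuming t.html really is saved as UTF-8, which the decode call requires):

# -*- coding: utf-8 -*-
from lxml import html

fread = open("t.html", 'r')
source = fread.read()
fread.close()

# Build the tree from unicode, and interpolate the search text as unicode too.
tree = html.fromstring(source.decode('utf-8'))
tags = tree.xpath('//span[@class="two" and text()="%s"]' % u'Привет')
print tags[0].text   # Привет
print tags[0].tail   # Текст на русском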
4

lxml

(Observed: this is somewhat shaky where system encodings are concerned; it apparently misbehaves on Windows XP, but works on Linux.)

I got it working by decoding the source string: tree = html.fromstring(source.decode('utf-8'))

# -*- coding:cp1251 -*-
import lxml
from lxml import html

filename = "t.html"
fread = open(filename, 'r')
source = fread.read()

tree = html.fromstring(source.decode('utf-8'))
fread.close()


tags = tree.xpath('//span[@class="one" and text()="Text"]') #This OK
print "name: ",tags[0].text
print "value: ",tags[0].tail

tags = tree.xpath('//span[@class="two" and text()="Привет"]') #This is now OK too

print "name: ",tags[0].text
print "value: ",tags[0].tail

This means the actual tree is all unicode objects. If you pass the xpath parameter in directly as unicode, it finds no matches.
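
One quick way to confirm the unicode point on your own machine (a sketch, assuming the same t.html and the utf-8 decode used above):

# -*- coding: utf-8 -*-
from lxml import html

source = open("t.html", 'r').read()
tree = html.fromstring(source.decode('utf-8'))

# Non-ASCII text comes back as a unicode object, not a byte string.
span = tree.xpath('//span[@class="two"]')[0]
print repr(span.text)   # u'\u041f\u0440\u0438\u0432\u0435\u0442'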

BeautifulSoup

I prefer BeautifulSoup for this sort of thing anyway. Here is my interactive session; I had saved the file in cp1251.

>>> from BeautifulSoup import BeautifulSoup
>>> filename = '/tmp/cyrillic'
>>> fread = open(filename, 'r')
>>> source = fread.read()
>>> source  # Scary
'<html>\n<body>\n<span class="one">Text</span>some text</br>\n<span class="two">\xcf\xf0\xe8\xe2\xe5\xf2</span>\xd2\xe5\xea\xf1\xf2 \xed\xe0 \xf0\xf3\xf1\xf1\xea\xee\xec</br>\n</body>\n</html>\n'
>>> source = source.decode('cp1251')  # Let's try getting this right.
>>> source
u'<html>\n<body>\n<span class="one">Text</span>some text</br>\n<span class="two">\u041f\u0440\u0438\u0432\u0435\u0442</span>\u0422\u0435\u043a\u0441\u0442 \u043d\u0430 \u0440\u0443\u0441\u0441\u043a\u043e\u043c</br>\n</body>\n</html>\n'
>>> soup = BeautifulSoup(source)
>>> soup  # OK, that's looking right now. Note the </br> was dropped as that's bad HTML with no meaning.
<html>
<body>
<span class="one">Text</span>some text
<span class="two">Привет</span>Текст на русском
</body>
</html>

>>> soup.find('span', 'one').findNextSibling(text=True)
u'some text'
>>> soup.find('span', 'two').findNextSibling(text=True)  # This looks a bit daunting ...
u'\u0422\u0435\u043a\u0441\u0442 \u043d\u0430 \u0440\u0443\u0441\u0441\u043a\u043e\u043c'
>>> print _  # ... but it's not, really. Just Unicode chars.
Текст на русском
>>> # Then you may also wish to get things by text:
>>> print soup.find(text=u'Привет').findParent().findNextSibling(text=True)
Текст на русском
>>> # You can't get things by attributes and the contained NavigableString at the same time, though. That may be a limitation.

After that, it might be worth trying source.decode('cp1251') rather than source.decode('utf-8') when reading from the filesystem; lxml may then work correctly, as sketched below.
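
For reference, that variant would look like the following; a sketch, assuming the file on disk really is cp1251-encoded like the interactive session above.

# -*- coding: utf-8 -*-
from lxml import html

fread = open("t.html", 'r')
source = fread.read()
fread.close()

# Decode with the encoding the file was actually saved in (cp1251 here).
tree = html.fromstring(source.decode('cp1251'))

tags = tree.xpath('//span[@class="two" and text()="%s"]' % u'Привет')
print tags[0].tail   # Текст на русском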
