How do I remove special characters when extracting data from a web page?

1 vote
2 answers
1290 views
Asked 2025-04-18 18:33

I am extracting data from a website and found one record that contains a special character, e.g. Comfort Inn And Suites�? Blazing Stump. When I try to extract that record, the program raises an error:

Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 824, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "C:\Python27\lib\site-packages\twisted\internet\task.py", line 638, in _tick
    taskObj._oneWorkUnit()
  File "C:\Python27\lib\site-packages\twisted\internet\task.py", line 484, in _oneWorkUnit
    result = next(self._iterator)
  File "C:\Python27\lib\site-packages\scrapy\utils\defer.py", line 57, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
  File "C:\Python27\lib\site-packages\scrapy\utils\defer.py", line 96, in iter_errback
    yield it.next()
  File "C:\Python27\lib\site-packages\scrapy\contrib\spidermiddleware\offsite.py", line 24, in process_spider_output
    for x in result:
  File "C:\Python27\lib\site-packages\scrapy\contrib\spidermiddleware\referer.py", line 14, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Python27\lib\site-packages\scrapy\contrib\spidermiddleware\urllength.py", line 32, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Python27\lib\site-packages\scrapy\contrib\spidermiddleware\depth.py", line 48, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\Scrapy projects\emedia\emedia\spiders\test_spider.py", line 46, in parse
    print repr(business.select('a[@class="name"]/text()').extract()[0])
  File "C:\Python27\lib\site-packages\scrapy\selector\lxmlsel.py", line 51, in select
    result = self.xpathev(xpath)
  File "xpath.pxi", line 318, in lxml.etree.XPathElementEvaluator.__call__ (src\lxml\lxml.etree.c:145954)
  File "xpath.pxi", line 241, in lxml.etree._XPathEvaluatorBase._handle_result (src\lxml\lxml.etree.c:144987)
  File "extensions.pxi", line 621, in lxml.etree._unwrapXPathObject (src\lxml\lxml.etree.c:139973)
  File "extensions.pxi", line 655, in lxml.etree._createNodeSetResult (src\lxml\lxml.etree.c:140328)
  File "extensions.pxi", line 676, in lxml.etree._unpackNodeSetEntry (src\lxml\lxml.etree.c:140524)
  File "extensions.pxi", line 784, in lxml.etree._buildElementStringResult (src\lxml\lxml.etree.c:141695)
  File "apihelpers.pxi", line 1373, in lxml.etree.funicode (src\lxml\lxml.etree.c:26255)
exceptions.UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 22: invalid continuation byte

I have searched online and tried various approaches, such as decode('utf-8') and unicodedata.normalize('NFC', business.select('a[@class="name"]/text()').extract()[0]), but the problem persists.

The data comes from "http://www.truelocal.com.au/find/hotels/97/"; the record I mean is the fourth entry on that page.
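For context, here is a minimal Python 3 reproduction of why a plain decode('utf-8') cannot succeed here. It is a sketch: the four bytes are a hypothetical stand-in for the garbled run in the page source, not bytes taken from the scraper itself.

```python
# A hypothetical broken byte run: 0xc3 must be followed by a
# continuation byte (0x80-0xBF), but 0x3f ('?') follows instead.
bad = b'\xc3\x3f\xc2\xa0'

try:
    bad.decode('utf-8')
except UnicodeDecodeError as e:
    print(e.reason)  # invalid continuation byte

# errors='replace' at least yields a usable string instead of crashing,
# substituting U+FFFD for the undecodable byte.
print(bad.decode('utf-8', errors='replace'))
```

This shows the input itself is malformed UTF-8, which is why no amount of decoding or normalizing on the extracted text can help; the bytes have to be repaired first.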

2 Answers

0

Do not "fix" Mojibake by replacing characters; fix the database and the code that produced it.

First, determine whether you have plain Mojibake or "double encoding". Use SELECT col, HEX(col) ... to check whether each character has turned into 2-4 bytes (Mojibake) or 4-6 bytes (double encoding). For example:

`é` (as utf8) should come back `C3A9`, but instead shows `C383C2A9`
The Emoji `👽` should come back `F09F91BD`, but comes back `C3B0C5B8E28098C2BD`
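In Python 3 terms (a sketch, not part of the database fix itself), the round trip behind those hex values looks like this: correct UTF-8 bytes get misread as Latin-1 and then encoded to UTF-8 a second time.

```python
# Double encoding demonstrated on é (U+00E9).
s = '\u00e9'                                    # é
once = s.encode('utf-8')                        # correct UTF-8
twice = once.decode('latin-1').encode('utf-8')  # misread, then re-encoded

print(once.hex().upper())    # C3A9
print(twice.hex().upper())   # C383C2A9
```

The 2-byte form is what HEX(col) should show; the 4-byte form is the double-encoded symptom described above.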

You can find more information about Mojibake and double encoding here.

Details on repairing the database can be found here.

  • If the column's character set is latin1 but it contains utf8 bytes, fix the character set while leaving the bytes unchanged:

First, suppose you have a column declared like this:

col VARCHAR(111) CHARACTER SET latin1 NOT NULL

Then convert the column with this two-step ALTER, without changing the bytes:

ALTER TABLE tbl MODIFY COLUMN col VARBINARY(111) NOT NULL;
ALTER TABLE tbl MODIFY COLUMN col VARCHAR(111) CHARACTER SET utf8mb4 NOT NULL;

Note: if you start with a TEXT column, use BLOB as the intermediate definition. (This is the "two-step ALTER" referred to below.) Keep the other attributes unchanged, such as the VARCHAR length and NOT NULL.

  • Double encoding in a CHARACTER SET utf8mb4 column: UPDATE tbl SET col = CONVERT(BINARY(CONVERT(col USING latin1)) USING utf8mb4);

  • Double encoding in a CHARACTER SET latin1 column: do the two-step ALTER first, then fix the double encoding.
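If double-encoded text has already made it into Python, the equivalent of that CONVERT chain is a sketch like the following (the 'Ã©' input is a hypothetical double-encoded é, not data from the question):

```python
# Python equivalent of CONVERT(BINARY(CONVERT(col USING latin1)) USING utf8mb4):
# re-encode the text as Latin-1 bytes, then reinterpret those bytes as UTF-8.
double_encoded = '\u00c3\u00a9'   # 'Ã©', a double-encoded é
fixed = double_encoded.encode('latin-1').decode('utf-8')
print(fixed)                      # é
```

As with the SQL fix, this only works for true double encoding; if bytes were lost or replaced along the way (as in the question's C3 3F sequence), the round trip will fail.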

4

You have hit a nasty piece of Mojibake on the source page, most likely caused by something mishandling Unicode somewhere along the line. The actual UTF-8 bytes in the page source are C3 3F C2 A0, in hex.

My guess is that this was once a U+00A0 NO-BREAK SPACE. Encoded as UTF-8 it becomes C2 A0; if that is then interpreted as Latin-1 and encoded to UTF-8 again, you get C3 82 C2 A0. But 82, when interpreted as Latin-1 once more, is a control character, so it was replaced by a ? question mark, which is 3F in hex.

When you follow the link to the venue's detail page, you see a different Mojibake of the same name: Comfort Inn And Suites Blazing Stump, which gives us the Unicode characters U+00C3, U+201A, U+00C2 and a &nbsp; HTML entity, or once again the Unicode character U+00A0. Encode that to Windows-1252 (a superset of Latin-1) and you get C3 82 C2 A0 again.
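That chain of events can be reproduced step by step in Python 3. This is a sketch of the hypothesised corruption, not code from the site:

```python
# Hypothesised corruption chain for the NO-BREAK SPACE (U+00A0).
nbsp = '\u00a0'
step1 = nbsp.encode('utf-8')                     # b'\xc2\xa0'
step2 = step1.decode('latin-1').encode('utf-8')  # b'\xc3\x82\xc2\xa0'

# 0x82 is a control character in Latin-1, so something along the way
# replaced it with a literal '?' (0x3f):
broken = step2.replace(b'\x82', b'?')
print(broken)                                    # b'\xc3?\xc2\xa0'
```

The final value matches the C3 3F C2 A0 bytes observed in the page source, which supports the no-break-space theory.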

To work around this, you will have to patch it directly in the page source:

pagesource.replace('\xc3?\xc2\xa0', '\xc2\xa0')

This "repairs" the data by replacing the broken bytes with the UTF-8 bytes that should have been there in the first place.

If you have a Scrapy Response object, you can replace its body:

body = response.body.replace('\xc3?\xc2\xa0', '\xc2\xa0')
response = response.replace(body=body)
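A standalone demonstration of that replacement (the surrounding text is a hypothetical snippet of the page body, written as Python 3 bytes):

```python
# Patch the broken C3 3F C2 A0 run back to a UTF-8 no-break space,
# then decoding succeeds again.
raw = b'Comfort Inn And Suites\xc3?\xc2\xa0Blazing Stump'
patched = raw.replace(b'\xc3?\xc2\xa0', b'\xc2\xa0')
print(patched.decode('utf-8'))
```

After the patch the name decodes cleanly, with a U+00A0 no-break space where the garbage used to be, so the selector no longer raises UnicodeDecodeError.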
