I am using Sphinx for my documentation and I would like to enable spell checking for French. So far I have done the following:
sudo pip install sphinxcontrib-spelling
sudo apt-get install myspell-fr-fr
extensions = ["sphinxcontrib.spelling"]
spelling_lang = 'fr'
builder = ["html", "pdf", "spelling"]
Here is the traceback I get when I run Sphinx:
Exception occurred:
File "/usr/lib/python2.7/dist-packages/sphinx/cmdline.py", line 188, in main
warningiserror, tags)
File "/usr/lib/python2.7/dist-packages/sphinx/application.py", line 134, in __init__
self._init_builder(buildername)
File "/usr/lib/python2.7/dist-packages/sphinx/application.py", line 194, in _init_builder
self.builder = builderclass(self)
File "/usr/lib/python2.7/dist-packages/sphinx/builders/__init__.py", line 57, in __init__
self.init()
File "/usr/lib/pymodules/python2.7/sphinxcontrib/spelling.py", line 253, in init
filters=filters,
File "/usr/lib/pymodules/python2.7/sphinxcontrib/spelling.py", line 181, in __init__
self.tokenizer = get_tokenizer(lang, filters)
File "/usr/lib/python2.7/dist-packages/enchant/tokenize/__init__.py", line 186, in get_tokenizer
raise TokenizerNotFoundError(msg)
TokenizerNotFoundError: No tokenizer found for language 'fr'
Any help is welcome :-)
I hit the same error, and it turns out it has nothing to do with a missing dictionary. PyEnchant simply does not ship a French tokenizer; only an English one is included. As described in the Extending enchant.tokenize documentation, a tokenizer for a given language has to be added to PyEnchant itself.
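The failure can be understood without Sphinx at all. The following is a simplified illustration of the lookup that enchant.tokenize.get_tokenizer performs, not PyEnchant's actual code: the library resolves the language code to a tokenizer module, and since only an English one is bundled, every other code fails the same way as in the traceback above.

```python
# Simplified illustration of the lookup done by
# enchant.tokenize.get_tokenizer (NOT PyEnchant's real code):
# only an English tokenizer module ships with the library, so
# asking for any other language code raises an error.

SHIPPED_TOKENIZERS = {"en"}  # the only tokenizer bundled with PyEnchant

def find_tokenizer(lang):
    """Stand-in for get_tokenizer: resolve a language code to a tokenizer."""
    base = lang.lower().split("_")[0]  # "fr_FR" -> "fr"
    if base not in SHIPPED_TOKENIZERS:
        raise LookupError("No tokenizer found for language %r" % lang)
    return base

print(find_tokenizer("en"))  # resolves: the English tokenizer exists
try:
    find_tokenizer("fr")     # fails, just like spelling_lang='fr' above
except LookupError as exc:
    print(exc)
```

Installing myspell-fr-fr only adds a dictionary; it does not add a tokenizer, which is why the apt-get step above does not help here.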
As a quick and dirty workaround:

- Clone the pyenchant repo and cd into it.
- Go to the directory where the tokenizers are defined.
- Copy the existing en.py tokenizer to the language code you want to use (I was missing cs; you can try fr).
- Install the package from the modified code.
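The steps above can be sketched as follows; the repository URL and directory layout are assumptions about the pyenchant checkout, so verify them against your copy:

```shell
# Quick-and-dirty sketch of the steps above.
# URL and paths are assumptions; verify against your checkout.
git clone https://github.com/pyenchant/pyenchant.git
cd pyenchant
# the per-language tokenizers live alongside en.py
cd enchant/tokenize
# reuse the English tokenizer under the language code you need
cp en.py fr.py
cd ../..
# install the package with the modified code
sudo python setup.py install
```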
Now it will work.
A better solution would be to review the copied tokenizer, adapt the parts that do not fit your language, and contribute the result back to pyenchant.