When I train on a corpus of 40K sentences, there is no problem. But when I train on 86K sentences, I get this error:
ERROR:root:
Traceback (most recent call last):
File "CLC_POS_train.py", line 95, in main
train(sys.argv[10], encoding, flag_tagger, k, percent, eval_flag)
File "CLC_POS_train.py", line 49, in train
CLC_POS.process('TBL', train_data, test_data, flag_evaluate[1], flag_dump[1], 'pos_tbl.model' + postfix)
File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 184, in process
tagger = CLC_POS.train_tbl(train_data)
File "d:\WORKing\VCL\TEST\CongToan_POS\Source\CLC_POS.py", line 71, in train_tbl
tbl_tagger = brill_trainer.BrillTaggerTrainer.train(trainer, train_data, max_rules=1000, min_score=3)
File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 274, in train
self._init_mappings(test_sents, train_sents)
File "C:\Python34\lib\site-packages\nltk-3.1-py3.4.egg\nltk\tag\brill_trainer.py", line 341, in _init_mappings
self._tag_positions[tag].append((sentnum, wordnum))
MemoryError
INFO:root:
I have already tried Python 3.5 on 64-bit Windows, but I still get this error. Here is the code used for training:
t0 = RegexpTagger(MyRegexp.create_regexp_tagger())
t1 = nltk.UnigramTagger(train_data, backoff=t0)
t2 = nltk.BigramTagger(train_data, backoff=t1)
trainer = brill_trainer.BrillTaggerTrainer(t2, brill.fntbl37())
tbl_tagger = trainer.train(train_data, max_rules=1000, min_score=3)
This happens because your machine does not have enough memory. Training on a large corpus requires a lot of RAM: as the traceback shows, `_init_mappings` builds an index of every (sentence, word) position for every tag, so memory use grows with the size of the corpus. Install more RAM and the training should complete.
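If adding RAM is not an option, one possible workaround (a sketch of an idea, not something from the answer above) is to induce the Brill rules on a random subsample of the corpus while the backoff n-gram taggers still see the full training data. The names `full_corpus` and `brill_train_data` below are illustrative stand-ins for the poster's `train_data`:

```python
import random

# Stand-in for the real tagged corpus: 86K sentences of (word, tag) pairs.
# Brill rule induction on all of them exhausts memory, so sample down to
# the 40K size that was reported to train successfully.
random.seed(0)
full_corpus = [[("word%d" % i, "N")] for i in range(86000)]

sample_size = 40000
brill_train_data = random.sample(full_corpus, sample_size)

# Pass brill_train_data (instead of the full corpus) to
# BrillTaggerTrainer.train(); the Unigram/Bigram backoff taggers can
# still be trained on full_corpus, since they are far less memory-hungry.
print(len(brill_train_data))
```

Because Brill training only refines an already-trained backoff tagger, learning the transformation rules from a representative subsample often costs little accuracy; whether 40K sentences is enough for your tagset is something you would need to verify empirically.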