Stanford CoreNLP TokensRegex / Error parsing a .rules file from Python

Published 2024-05-15 09:53:35


I tried to solve this problem in this {a1}, but it is not possible with regexner from the Stanford NLP library.

(Note: I am using the stanfordnlp library version 0.2.0, Stanford CoreNLP version 3.9.2, and Python 3.7.3.)

So I wanted to try a solution using TokensRegex. As a first attempt, I used the TokensRegex rules file tokenrgxrules.rules from this solution:

ner = { type: "CLASS", value: "edu.stanford.nlp.ling.CoreAnnotations$NamedEntityTagAnnotation" }

$ORGANIZATION_TITLES = "/inc\.|corp\./"

$COMPANY_INDICATOR_WORDS = "/company|corporation/"

ENV.defaults["stage"] = 1

{ pattern: (/works/ /for/ ([{pos: NNP}]+ $ORGANIZATION_TITLES)), action: (Annotate($1, ner, "RULE_FOUND_ORG") ) }

ENV.defaults["stage"] = 2

{ pattern: (([{pos: NNP}]+) /works/ /for/ [{ner: "RULE_FOUND_ORG"}]), action: (Annotate($1, ner, "RULE_FOUND_PERS") ) }

Here is my Python code:

import stanfordnlp


from stanfordnlp.server import CoreNLPClient
# example text
print('---')
print('input text')
print('')
text = "The analysis of shotgun sequencing data from metagenomic mixtures raises complex computational challenges. Part of the difficulty stems from the read length limitation of existing deep DNA sequencing technologies, an issue compounded by the extensive level of homology across viral and bacterial species. Another complication is the divergence of the microbial DNA sequences from the publicly available references. As a consequence, the assignment of a sequencing read to a database organism is often unclear. Lastly, the number of reads originating from a disease causing pathogen can be low (Barzon et al., 2013). The pathogen contribution to the mixture depends on the biological context, the timing of sample extraction and the type of pathogen considered. Therefore, highly sensitive computational approaches are required."
# NOTE: the next assignment overwrites the text above; only the second sentence is analyzed
text = "In practice, its scope is broad and includes the analysis of a diverse set of samples such as gut microbiome (Qin et al., 2010), (Minot et al., 2011), environmental (Mizuno et al., 2013) or clinical (Willner et al., 2009), (Negredo et al., 2011), (McMullan et al., 2012) samples."
print(text)
# set up the client
print('---')
print('starting up Java Stanford CoreNLP Server...')
# I am not sure if I can add the tokensregex rules here
prop={'regexner.mapping': 'rgxrules.txt', "tokensregex.rules": "tokenrgxrules.rules", 'annotators': 'tokenize,ssplit,pos,lemma,ner,regexner,tokensregex'}


with CoreNLPClient(properties=prop, timeout=100000, memory='16G', be_quiet=False) as client:
    # submit the request to the server
    ann = client.annotate(text)
    # get the first sentence
    sentence = ann.sentence[0]

Here is the output I get:

    Starting server with command: java -Xmx16G -cp /Users/stanford-corenlp-full-2018-10-05//* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 100000 -threads 5 -maxCharLength 100000 -quiet False -serverProperties corenlp_server-f8a9bab3cb0b44da.props -preload tokenize,ssplit,pos,lemma,ner,tokensregex
[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
[main] INFO CoreNLP - setting default constituency parser
[main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz
[main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead
[main] INFO CoreNLP - to use shift reduce parser download English models jar from:
[main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html
[main] INFO CoreNLP -     Threads: 5
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[main] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.6 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.8 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.1 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
[main] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 580704 unique entries out of 581863 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_caseless.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 4869 unique entries out of 4869 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_cased.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 585573 unique entries from 2 files

Everything works fine until it starts parsing the tokenrgxrules.rules file:

[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokensregex
[main] ERROR CoreNLP - Could not pre-load annotators in server; encountered exception:
java.lang.RuntimeException: Error parsing file: Users/Documents/utils/tokenrgxrules.rules
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:293)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:275)
    at edu.stanford.nlp.pipeline.TokensRegexAnnotator.<init>(TokensRegexAnnotator.java:77)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.tokensregex(AnnotatorImplementations.java:78)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$6(StanfordCoreNLP.java:524)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$30(StanfordCoreNLP.java:602)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:251)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:192)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:188)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.main(StanfordCoreNLPServer.java:1505)
Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:480)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:617)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:287)
    ... 12 more
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[pool-1-thread-3] INFO CoreNLP - [/0:0:0:0:0:0:0:1:49907] API call w/annotators tokenize,ssplit,pos,lemma,ner,tokensregex
In practice, its scope is broad and includes the analysis of a diverse set of samples such as gut microbiome (Qin et al., 2010), (Minot et al., 2011), environmental (Mizuno et al., 2013) or clinical (Willner et al., 2009), (Negredo et al., 2011), (McMullan et al., 2012) samples.
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokensregex
java.lang.RuntimeException: Error parsing file: Users/Documents/utils/tokenrgxrules.rules
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:293)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:275)
    at edu.stanford.nlp.pipeline.TokensRegexAnnotator.<init>(TokensRegexAnnotator.java:77)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.tokensregex(AnnotatorImplementations.java:78)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$6(StanfordCoreNLP.java:524)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$30(StanfordCoreNLP.java:602)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:251)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:192)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:188)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.mkStanfordCoreNLP(StanfordCoreNLPServer.java:368)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.access$800(StanfordCoreNLPServer.java:50)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer$CoreNLPHandler.handle(StanfordCoreNLPServer.java:855)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
    at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
    at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
    at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:480)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:617)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:287)
    ... 23 more
Traceback (most recent call last):
  File "/Users/anaconda3/lib/python3.7/site-packages/stanfordnlp/server/client.py", line 330, in _request
    r.raise_for_status()
  File "/Users/anaconda3/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:9000/?properties=%7B%27outputFormat%27%3A+%27serialized%27%7D

I have spent hours looking for a solution to this problem without finding one. I also tried the basic example from the official NLP page, but it gives me the same error. I write my code in Python, so I cannot test their Java code.

Any help or links would be very useful.

Thanks in advance.


2 Answers

We really did split our help resources in half, didn't we?

Look at this line:

Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL

I suspect it is trying to resolve a relative path and failing. What happens if you replace tokenrgxrules.rules with the absolute path to the file?
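As a minimal sketch of that suggestion (assuming the rules file sits in the script's working directory; adjust the filename argument to wherever yours actually lives), you can build the absolute path in Python before constructing the properties dict:

```python
import os

# Hypothetical location of the rules file -- adjust to your actual path.
rules_path = os.path.abspath("tokenrgxrules.rules")

# With an absolute path, CoreNLP can open the file directly instead of
# resolving it relative to the Java server's working directory.
prop = {
    "annotators": "tokenize,ssplit,pos,lemma,ner,tokensregex",
    "tokensregex.rules": rules_path,
}
print(prop["tokensregex.rules"])
```

This removes any dependence on which directory the Java server process happens to start in.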

This issue seems to occur with stanza 1.0.0 and Stanford CoreNLP 3.9.2, and the team is looking into it. I believe it means that someone else is trying to use the same port and the server is failing silently. You should first verify that you can get the server running outside of the Python client. Since you are seeing this error, I suspect the server is not even starting.
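One quick way to check for a port clash from Python (a sketch; `port_in_use` is a helper written here for illustration, and 9000 is the default CoreNLP server port):

```python
import socket

def port_in_use(port, host="localhost"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when some process is already bound to the port.
        return s.connect_ex((host, port)) == 0

# If this prints True before you launch CoreNLPClient, another process
# holds the port and the server may fail to start silently.
print(port_in_use(9000))
```

If the port is free and the error persists, try launching the server manually with the `java ... StanfordCoreNLPServer` command shown in the log above to see its output directly.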
