Coreference resolution in Python NLTK using Stanford CoreNLP


Stanford CoreNLP provides coreference resolution as mentioned here, and this thread and this one offer some insight into its implementation in Java.

However, I am using Python and NLTK, and I am not sure how to use CoreNLP's coreference resolution in my Python code. I have been able to set up StanfordParser in NLTK; this is my code so far.

from nltk.parse.stanford import StanfordDependencyParser
stanford_parser_dir = 'stanford-parser/'
eng_model_path = stanford_parser_dir  + "stanford-parser-models/edu/stanford/nlp/models/lexparser/englishRNN.ser.gz"
my_path_to_models_jar = stanford_parser_dir  + "stanford-parser-3.5.2-models.jar"
my_path_to_jar = stanford_parser_dir  + "stanford-parser.jar"
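
For reference, a minimal sketch of how these paths can be wired into NLTK's StanfordDependencyParser (a sketch only; this interface yields dependency parses, not coreference chains):

# Sketch: build the parser from the paths above and parse one sentence.
# This gives dependency parses only; it does not expose coreference.
parser = StanfordDependencyParser(
    path_to_jar=my_path_to_jar,
    path_to_models_jar=my_path_to_models_jar,
    model_path=eng_model_path,
)

parse = next(parser.raw_parse('Barack Obama was born in Hawaii.'))
for governor, relation, dependent in parse.triples():
    print(governor, relation, dependent)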

How can I use CoreNLP's coreference resolution in Python?


3 Answers

The relatively new wrapper stanfordcorenlp may work for you.

Suppose the text is "Barack Obama was born in Hawaii. He is the president. Obama was elected in 2008."


Code:

# coding=utf-8

import json
from stanfordcorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP(r'G:\JavaLibraries\stanford-corenlp-full-2017-06-09', quiet=False)
props = {'annotators': 'coref', 'pipelineLanguage': 'en'}

text = 'Barack Obama was born in Hawaii.  He is the president. Obama was elected in 2008.'
result = json.loads(nlp.annotate(text, properties=props))

# dict.items() returns a view in Python 3, so convert it to a list before indexing
num, mentions = list(result['corefs'].items())[0]
for mention in mentions:
    print(mention)

Each "mention" above is a Python dictionary like this:

{
  "id": 0,
  "text": "Barack Obama",
  "type": "PROPER",
  "number": "SINGULAR",
  "gender": "MALE",
  "animacy": "ANIMATE",
  "startIndex": 1,
  "endIndex": 3,
  "headIndex": 2,
  "sentNum": 1,
  "position": [
    1,
    1
  ],
  "isRepresentativeMention": true
}
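
A small follow-up sketch (relying only on the "text" and "isRepresentativeMention" fields shown above) collapses each chain into its representative mention plus the expressions that refer back to it:

# Sketch: group every chain around its representative mention,
# using only fields present in the mention dictionaries above.
for chain_id, mentions in result['corefs'].items():
    representative = next(m['text'] for m in mentions if m['isRepresentativeMention'])
    others = [m['text'] for m in mentions if not m['isRepresentativeMention']]
    print(representative, '<-', others)  # e.g. Barack Obama <- ['He', 'Obama']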

As @Igor mentioned, you can try the Python wrapper implemented in this GitHub repo: https://github.com/dasmith/stanford-corenlp-python

This repo contains two main files: corenlp.py and client.py

Make the following changes to get CoreNLP working:

  1. In corenlp.py, change the path to the corenlp folder. Set it to the path where the corenlp folder lives on your local machine, and add that path at line 144 of corenlp.py:

    if not corenlp_path: corenlp_path = <path to the corenlp file>

  2. The jar file version numbers in corenlp.py may differ. Set them according to the corenlp version you have by editing line 135 of corenlp.py:

    jars = ["stanford-corenlp-3.4.1.jar", "stanford-corenlp-3.4.1-models.jar", "joda-time.jar", "xom.jar", "jollyday.jar"]

Here, replace 3.4.1 with the version of the jars you downloaded.

  3. Run the command:

    python corenlp.py

This will start the server.

  4. Now run the main client program:

    python client.py

This returns a dictionary from which you can access the coreferences using "coref" as the key:

John is a Computer Scientist. He likes coding.

{
     "coref": [[[["a Computer Scientist", 0, 4, 2, 5], ["John", 0, 0, 0, 1]], [["He", 1, 0, 0, 1], ["John", 0, 0, 0, 1]]]]
}
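
To make the shape of that structure concrete, here is an illustrative sketch (an assumption about the layout, using only the surface text in position 0 of each entry) that prints every mention together with its antecedent:

# Sketch: walk the nested "coref" output from client.py.
# Assumed layout: each chain is a list of [mention, antecedent] pairs, where each
# entry looks like [text, sentence_index, head_index, start_token, end_token].
output = {
    "coref": [[[["a Computer Scientist", 0, 4, 2, 5], ["John", 0, 0, 0, 1]],
               [["He", 1, 0, 0, 1], ["John", 0, 0, 0, 1]]]]
}

for chain in output["coref"]:
    for mention, antecedent in chain:
        print(mention[0], '->', antecedent[0])  # e.g. He -> John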

I tried this on Ubuntu 16.04, using Java version 7 or 8.

Stanford CoreNLP now has an official Python binding called StanfordNLP, which you can read about on the StanfordNLP website.

The native API doesn't seem to support the coref processor yet, but you can use the CoreNLPClient interface to call the "standard" CoreNLP (the original Java software) from Python.

So, after setting up the Python wrapper following the instructions here, you can get coreference chains like this:

from stanfordnlp.server import CoreNLPClient

text = 'Barack was born in Hawaii. His wife Michelle was born in Milan. He says that she is very smart.'
print(f"Input text: {text}")

# set up the client
client = CoreNLPClient(properties={'annotators': 'coref', 'coref.algorithm' : 'statistical'}, timeout=60000, memory='16G')

# submit the request to the server
ann = client.annotate(text)    

mychains = list()
chains = ann.corefChain
for chain in chains:
    mychain = list()
    # Loop through every mention of this chain
    for mention in chain.mention:
        # Get the sentence in which this mention is located, and get the words which are part of this mention
        # (we can have more than one word, for example, a mention can be a pronoun like "he", but also a compound noun like "His wife Michelle")
        words_list = ann.sentence[mention.sentenceIndex].token[mention.beginIndex:mention.endIndex]
        #build a string out of the words of this mention
        ment_word = ' '.join([x.word for x in words_list])
        mychain.append(ment_word)
    mychains.append(mychain)

for chain in mychains:
    print(' <-> '.join(chain))
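
As a variation on the snippet above (a sketch, not from the original answer), CoreNLPClient can also be used as a context manager so that the background Java server is shut down automatically; it locates CoreNLP through the CORENLP_HOME environment variable (the path below is hypothetical):

import os
from stanfordnlp.server import CoreNLPClient

# CORENLP_HOME must point at the unpacked CoreNLP distribution (hypothetical path)
os.environ.setdefault('CORENLP_HOME', '/path/to/stanford-corenlp-full-2018-10-05')

with CoreNLPClient(properties={'annotators': 'coref', 'coref.algorithm': 'statistical'},
                   timeout=60000, memory='16G') as client:
    ann = client.annotate('Barack was born in Hawaii. His wife Michelle was born in Milan.')
    for chain in ann.corefChain:
        mentions = [' '.join(tok.word for tok in
                             ann.sentence[m.sentenceIndex].token[m.beginIndex:m.endIndex])
                    for m in chain.mention]
        print(' <-> '.join(mentions))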
