Python regex: getting the position and value of matches in Unicode text

Posted 2024-04-25 22:02:20


I need to match multiple tokens occurring in a document and get the value and position of each matched token.

For non-Unicode text I use the regex r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)" together with finditer, and it works.
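
For example, here is a minimal sketch of that usage (the sample document and word below are just placeholder values):

import re

# sample values, just for illustration
document = "These are oranges and apples and pears"
word = "and"

# whole-word match: \b boundaries plus lookarounds on \w
pattern = re.compile(r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)")

for m in pattern.finditer(document):
    # report the matched value and its character offsets (end is inclusive)
    print(m.group(), m.start(), m.end() - 1)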

For Unicode text I have to use a word-boundary-like workaround such as u"(\s|^)%s(\s|$)" % word. This works in most cases, but fails when the same word appears twice in a row, as in "तुम मुझे दोस्त कहते कहते हो".

Here is code that reproduces the problem:

import re
import json

# an input document of sentences
document="These are oranges and apples and and pears, but not pinapples\nThese are oranges and apples and pears, but not pinapples"


# uncomment to test UNICODE
document="तुम मुझे दोस्त कहते कहते हो"

sentences=[] # sentences
seen = {} # maps a token to the offset where it was last seen

# split into sentences
lines=document.splitlines()

for index,line in enumerate(lines):

  print("Line:%d %s" % (index,line))

  # split into tokens that are words
  # LP: (for Simon ;P) we do not care about punctuation at all!
  rgx = re.compile(r"([\w][\w']*\w)")
  tokens=rgx.findall(line)

  # uncomment to test UNICODE
  tokens=["तुम","मुझे","दोस्त","कहते","कहते","हो"]

  print("Tokens:",tokens)

  sentence={} # a sentence
  items=[] # word tokens

  # for each token word
  for index_word,word in enumerate(tokens):

    # uncomment to test UNICODE
    my_regex = r"(\s|^)%s(\s|$)" % word
    #my_regex = r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)"
    r = re.compile(my_regex, flags=re.I | re.X | re.UNICODE)

    item = {}
    # for each matched token in sentence
    for m in r.finditer(document):

      token=m.group()
      characterOffsetBegin=m.start()
      characterOffsetEnd=characterOffsetBegin+len(m.group()) - 1 # LP: offsets start from 0, end is inclusive

      print ("word:%s characterOffsetBegin:%d characterOffsetEnd:%d" % (token, characterOffsetBegin, characterOffsetEnd) )

      found=-1
      if word in seen:
        found=seen[word]

      if characterOffsetBegin > found:
        # store last word has been seen
        seen[word] = characterOffsetBegin
        item['index']=index_word+1 # word index starts from 1
        item['word']=token
        item['characterOffsetBegin'] = characterOffsetBegin
        item['characterOffsetEnd'] = characterOffsetEnd
        items.append(item)
        break

  sentence['text']=line
  sentence['tokens']=items
  sentences.append(sentence)

print(json.dumps(sentences, indent=4, sort_keys=True))

print("------ testing ------")
text=''
for sentence in sentences:
  for token in sentence['tokens']:
    # LP: we get the token from a slice in original text
    text = text + document[token['characterOffsetBegin']:token['characterOffsetEnd']+1] + " "
  text = text + '\n'
print(text)

In particular, for the token कहते I get the same match reported twice instead of the next occurrence:

word: कहते  characterOffsetBegin:20 characterOffsetEnd:25
word: कहते  characterOffsetBegin:20 characterOffsetEnd:25

1 Answer

#1 · Posted 2024-04-25 22:02:20

For non-Unicode text you can use a better regex, such as

my_regex = r"(?<!\w){}(?!\w)".format(re.escape(word))

Yours does not work if word starts with a non-word character. In the pattern above, the (?<!\w) negative lookbehind fails the match if there is a word character immediately to the left of the current position, and the (?!\w) negative lookahead fails the match if there is a word character immediately to the right of the current position.
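
For example (my own illustration, with a made-up search word "$var"), a word that starts with a non-word character is missed by the \b-based pattern but found by the lookaround-based one:

import re

# hypothetical example: the search word starts with a non-word character
document = "set $var to 1"
word = "$var"

with_boundaries = re.compile(r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)")
with_lookarounds = re.compile(r"(?<!\w){}(?!\w)".format(re.escape(word)))

print(with_boundaries.findall(document))   # [] - no word boundary before '$'
print(with_lookarounds.findall(document))  # ['$var']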

The second problem, with your Unicode-text regex, is that the second group consumes the whitespace after the word, so that whitespace is no longer available for the next match. Lookarounds come in handy here:

my_regex = r"(?<!\S){}(?!\S)".format(re.escape(word))

See the Python demo online.

The (?<!\S) negative lookbehind fails the match if there is a non-whitespace character immediately to the left of the current position, and the (?!\S) negative lookahead fails the match if there is a non-whitespace character immediately to the right of the current position.
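
As a quick check (a sketch along the same lines, not the linked demo itself), running the lookaround-based pattern over the Hindi sentence reports each of the two consecutive कहते occurrences at its own offset, because the surrounding spaces are not consumed:

import re

document = "तुम मुझे दोस्त कहते कहते हो"
word = "कहते"

# lookarounds only assert on the neighbouring characters and consume nothing,
# so the space between the two occurrences stays available for the second match
pattern = re.compile(r"(?<!\S){}(?!\S)".format(re.escape(word)), flags=re.UNICODE)

for m in pattern.finditer(document):
    # m.group() is exactly the word, with no leading or trailing whitespace
    print(m.group(), m.start(), m.end() - 1)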
