python(nltk?)


I have a set of concatenated words and I want to split each one into an array of its component words.

For example:

split_word("acquirecustomerdata")
=> ['acquire', 'customer', 'data']

I found pyenchant, but it doesn't work on 64-bit Windows.

I then tried splitting each string into substrings and comparing them against WordNet to find a matching word (a rough sketch of the idea is below). But that solution is unreliable and takes far too long, so I'm asking for your help.
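A minimal sketch of the brute-force idea, reconstructed here since my original snippet is not shown (the function name and the recursive backtracking are illustrative; it needs the WordNet corpus, i.e. nltk.download('wordnet')):

from nltk.corpus import wordnet

def split_word(text):
    "Try every split point, keeping prefixes WordNet knows; backtrack on failure."
    if not text:
        return []
    for i in range(len(text), 0, -1):        # prefer longer prefixes first
        prefix = text[:i]
        if wordnet.synsets(prefix):          # prefix is a recognised word
            rest = split_word(text[i:])
            if rest is not None:
                return [prefix] + rest
    return None                              # no valid segmentation found

print(split_word("acquirecustomerdata"))     # expected: ['acquire', 'customer', 'data']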

Thank you.


Tags: string, data, windows, customer, array, word, wordnet
2 Answers

If you have a list of all possible words, you can use something like this:

import re

word_list = ["go", "walk", "run", "jump"]  # list of all possible words
# Put longer words first so they are not shadowed by a shorter prefix in the alternation.
pattern = re.compile("|".join(sorted(word_list, key=len, reverse=True)))

s = "gowalkrunjump"
result = pattern.findall(s)
print(result)  # ['go', 'walk', 'run', 'jump']

Check out the Word Segmentation Task from Norvig's work.

from __future__ import division
from collections import Counter
import re, nltk
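# Note: this assumes the Brown corpus is installed; run nltk.download('brown') once if it is not.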

WORDS = nltk.corpus.brown.words()
COUNTS = Counter(WORDS)

def pdist(counter):
    "Make a probability distribution, given evidence from a Counter."
    N = sum(counter.values())
    return lambda x: counter[x]/N

P = pdist(COUNTS)

def Pwords(words):
    "Probability of words, assuming each word is independent of others."
    return product(P(w) for w in words)

def product(nums):
    "Multiply the numbers together.  (Like `sum`, but with multiplication.)"
    result = 1
    for x in nums:
        result *= x
    return result

def splits(text, start=0, L=20):
    "Return a list of all (first, rest) pairs; start <= len(first) <= L."
    return [(text[:i], text[i:]) 
            for i in range(start, min(len(text), L)+1)]

def segment(text):
    "Return a list of words that is the most probable segmentation of text."
    if not text: 
        return []
    else:
        candidates = ([first] + segment(rest) 
                      for (first, rest) in splits(text, 1))
        return max(candidates, key=Pwords)

print(segment('acquirecustomerdata'))
#['acquire', 'customer', 'data']

For a better solution, you can use bigrams/trigrams.
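A minimal sketch of what a bigram-based scorer could look like, building on the WORDS, COUNTS and P defined above (the pair counts, the '<S>' start token and the crude unigram fallback are my assumptions, not part of the original answer):

COUNTS2 = Counter(zip(WORDS, WORDS[1:]))   # counts of adjacent word pairs

def cPword(word, prev):
    "Conditional probability of word given the previous word, with a unigram fallback."
    if COUNTS2[(prev, word)] > 0 and COUNTS[prev] > 0:
        return COUNTS2[(prev, word)] / COUNTS[prev]
    return P(word)                          # back off to the unigram probability

def Pwords2(words, prev='<S>'):
    "Probability of a word sequence under the bigram model."
    result = 1
    for w in words:
        result *= cPword(w, prev)
        prev = w
    return result

# To use it, score candidates in segment() with key=Pwords2 instead of key=Pwords.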

More examples: Word Segmentation Task
