Using PySpark and NLTK, I want to get the lengths of all "NP" words and sort them in descending order. I am currently stuck on navigating the subtrees.
Example subtree output:
#>>>[(Tree('NP', [Tree('NBAR', [('WASHINGTON', 'NN')])]), 1)
I am trying to get the lengths of all NP words, then sort those lengths in descending order.
The first element is the word length 1 together with the number of words of that length, and so on.
example:
#[(1, 6157),  # 6157 words of length 1
# (2, 1833),  # 1833 words of length 2
# (3, 654),
# (4, 204),
# (5, 65)]
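For reference, a target output shaped like the list above can be produced from a plain word list with `collections.Counter`; this is a minimal pure-Python sketch with no Spark involved, and the word list is made up:

```python
from collections import Counter

words = ["a", "to", "cat", "dog", "bird", "plane", "I"]
# Count how many words there are of each length
counts = Counter(len(w) for w in words)
# Sort (length, count) pairs by count, descending, like the example above
pairs = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(pairs)  # → [(1, 2), (3, 2), (2, 1), (4, 1), (5, 1)]
```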
import nltk
import re
textstring = """This is just a bunch of words to use for this example.
John gave them to me last night but Kim took them to work.
Hi Stacy. URL:http://example.com. Jessica, Mark, Tiger, Book, Crow, Airplane, SpaceShip"""
TOKEN_RE = re.compile(r"\b[\w']+\b")
grammar = r"""
NBAR:
{<NN.*|JJS>*<NN.*>}
NP:
{<NBAR>}
{<NBAR><IN><NBAR>}
"""
chunker = nltk.RegexpParser(grammar)
text = sc.parallelize(textstring.split(' '))
dropURL=text.filter(lambda x: "URL" not in x)
words = dropURL.flatMap(lambda dropURL: dropURL.split(" "))
tree = words.flatMap(lambda words: chunker.parse(nltk.tag.pos_tag(nltk.regexp_tokenize(words, TOKEN_RE))))
#data=tree.map(lambda word: (word,len(word))).filter(lambda t : t.label() =='NBAR') -- error
#data=tree.map(lambda x: (x,len(x)))##.filter(lambda t : t[0] =='NBAR')
#>>>[(Tree('NP', [Tree('NBAR', [('WASHINGTON', 'NN')])]), 1) Trying to get the length of all NP's and in descending order.
#data=tree.map(lambda x: (x,len(x))).reduceByKey(lambda x: x=='NBAR') ##this is an error but I am getting close I think
data=tree.map(lambda x: (x[0][0],len(x[0][0][0])))#.reduceByKey(lambda x : x[1] =='NP') ##Long run time.
things = data.collect()
things
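Since the sticking point is navigating the parse subtrees, here is a minimal pure-Python sketch of the whole pipeline. `MiniTree` is an invented stand-in that mimics the parts of `nltk.Tree`'s interface used here (`label()` and `leaves()`; its `subtrees` takes a required predicate, unlike NLTK's optional `filter`), and the parse data is made up:

```python
from collections import Counter

class MiniTree:
    """Stand-in for nltk.Tree: a labeled node whose children are
    either MiniTree nodes or (word, tag) leaf tuples."""
    def __init__(self, label, children):
        self._label, self.children = label, children
    def label(self):
        return self._label
    def subtrees(self, matching):
        # Yield this node and every descendant whose label matches
        if matching(self):
            yield self
        for c in self.children:
            if isinstance(c, MiniTree):
                yield from c.subtrees(matching)
    def leaves(self):
        out = []
        for c in self.children:
            out.extend(c.leaves() if isinstance(c, MiniTree) else [c])
        return out

# A parse shaped like the example in the question:
# Tree('NP', [Tree('NBAR', [('WASHINGTON', 'NN')])]) ...
parse = MiniTree('S', [
    MiniTree('NP', [MiniTree('NBAR', [('WASHINGTON', 'NN')])]),
    ('gave', 'VBD'),
    MiniTree('NP', [MiniTree('NBAR', [('Kim', 'NNP'), ('work', 'NN')])]),
])

# Step 1: collect every word sitting under an NP subtree
np_words = [word
            for np in parse.subtrees(lambda t: t.label() == 'NP')
            for word, tag in np.leaves()]

# Step 2: count words per length (the reduceByKey-style aggregation)
length_counts = Counter(len(w) for w in np_words)

# Step 3: sort (length, count) pairs by count, descending
result = sorted(length_counts.items(), key=lambda kv: kv[1], reverse=True)
```

In Spark terms, the three steps would correspond roughly to a `flatMap` over the NP subtrees' leaves, a `map` to `(len(word), 1)` pairs, a `reduceByKey` with addition, and a final `sortBy` on the count with `ascending=False`.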