Custom analyzers / inheritance in PyLucene with JCC?
I want to write a custom analyzer in PyLucene. Normally, in Java Lucene, when you write an analyzer class, your class inherits from Lucene's Analyzer class.
But PyLucene uses JCC, a compiler that generates C++ and Python wrappers for Java code.
So how do you make a Python class inherit from a Java class? In particular, how do you write a custom PyLucene analyzer?
Thanks.
2 Answers
1
In PyLucene you can inherit from any class, but classes whose names start with Python also extend the underlying Java class; that is, their methods become "virtual" when called from Java code. So if you want to create a custom analyzer, inherit from PythonAnalyzer and implement the tokenStream method.
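For example, a minimal sketch (the class name is just for illustration, assuming the flat lucene module of the older JCC-built PyLucene releases used elsewhere on this page):

from lucene import PythonAnalyzer, LowerCaseTokenizer, Version

class MyLowerCaseAnalyzer(PythonAnalyzer):
    # tokenStream() is the method Java calls back into; return any
    # TokenStream built from the supplied reader.
    def tokenStream(self, fieldName, reader):
        return LowerCaseTokenizer(Version.LUCENE_CURRENT, reader)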
3
Here is an example of an analyzer that wraps the EdgeNGram filter.
import lucene

class EdgeNGramAnalyzer(lucene.PythonAnalyzer):
    '''
    This is an example of a custom Analyzer (in this case an edge-n-gram analyzer).
    EdgeNGram analyzers are good for type-ahead.
    '''

    def __init__(self, side, minlength, maxlength):
        '''
        Args:
            side[enum]      one of lucene.EdgeNGramTokenFilter.Side.FRONT
                            or lucene.EdgeNGramTokenFilter.Side.BACK
            minlength[int]  minimum n-gram length
            maxlength[int]  maximum n-gram length
        '''
        lucene.PythonAnalyzer.__init__(self)
        self.side = side
        self.minlength = minlength
        self.maxlength = maxlength

    def tokenStream(self, fieldName, reader):
        # Build the chain: tokenizer first, then each filter wraps the previous stream.
        result = lucene.LowerCaseTokenizer(lucene.Version.LUCENE_CURRENT, reader)
        result = lucene.StandardFilter(result)
        result = lucene.StopFilter(True, result, lucene.StopAnalyzer.ENGLISH_STOP_WORDS_SET)
        result = lucene.ASCIIFoldingFilter(result)
        result = lucene.EdgeNGramTokenFilter(result, self.side, self.minlength, self.maxlength)
        return result
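To use it, pass an instance to an IndexWriter just like a built-in analyzer. A rough sketch, assuming an in-memory RAMDirectory and the same three-argument IndexWriter constructor used in the PorterStemmer example below:

import lucene
lucene.initVM()

store = lucene.RAMDirectory()
analyzer = EdgeNGramAnalyzer(lucene.EdgeNGramTokenFilter.Side.FRONT, 1, 20)
writer = lucene.IndexWriter(store, analyzer, True)  # create=True; analyzer is applied at index time

doc = lucene.Document()
doc.add(lucene.Field("title", "pylucene custom analyzers",
                     lucene.Field.Store.YES, lucene.Field.Index.ANALYZED))
writer.addDocument(doc)
writer.close()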
Here is another example, showing how to re-implement the PorterStemmer.
# This sample illustrates how to write an Analyzer 'extension' in Python.
#
# What is happening behind the scenes ?
#
# The PorterStemmerAnalyzer python class does not in fact extend Analyzer,
# it merely provides an implementation for Analyzer's abstract tokenStream()
# method. When an instance of PorterStemmerAnalyzer is passed to PyLucene,
# with a call to IndexWriter(store, PorterStemmerAnalyzer(), True) for
# example, the PyLucene SWIG-based glue code wraps it into an instance of
# PythonAnalyzer, a proper java extension of Analyzer which implements a
# native tokenStream() method whose job is to call the tokenStream() method
# on the python instance it wraps. The PythonAnalyzer instance is the
# Analyzer extension bridge to PorterStemmerAnalyzer.
'''
More explanation...

Analyzers split up a chunk of text into tokens...
Analyzers are applied to an index globally (unless you use a perFieldAnalyzer).
Analyzers are built from Tokenizers and TokenFilters.
Tokenizers break up a string into tokens. TokenFilters break Tokens into more Tokens or filter
Tokens out.
'''
import sys, os
from datetime import datetime
from lucene import *
from IndexFiles import IndexFiles

class PorterStemmerAnalyzer(PythonAnalyzer):

    def tokenStream(self, fieldName, reader):
        # There can only be one tokenizer in each Analyzer.
        result = StandardTokenizer(Version.LUCENE_CURRENT, reader)
        result = StandardFilter(result)
        result = LowerCaseFilter(result)
        result = PorterStemFilter(result)
        result = StopFilter(True, result, StopAnalyzer.ENGLISH_STOP_WORDS_SET)
        return result

if __name__ == '__main__':
    if len(sys.argv) < 2:
        sys.exit("requires at least one argument: lucene-index-path")
    initVM()
    start = datetime.now()
    try:
        IndexFiles(sys.argv[1], "index", PorterStemmerAnalyzer())
        end = datetime.now()
        print end - start
    except Exception, e:
        print "Failed: ", e
You can also look at perFieldAnalyzerWrapper.java and KeywordAnalyzerTest.py:
analyzer = PerFieldAnalyzerWrapper(SimpleAnalyzer())
analyzer.addAnalyzer("partnum", KeywordAnalyzer())
query = QueryParser(Version.LUCENE_CURRENT, "description",
                    analyzer).parse("partnum:Q36 AND SPACE")
scoreDocs = self.searcher.search(query, 50).scoreDocs
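Note that the same wrapped analyzer has to be used at index time too, otherwise the per-field treatment of "partnum" will not match at query time. A hedged sketch, reusing the three-argument IndexWriter constructor from the samples above and a directory called store (both assumptions, not part of KeywordAnalyzerTest.py):

analyzer = PerFieldAnalyzerWrapper(SimpleAnalyzer())
analyzer.addAnalyzer("partnum", KeywordAnalyzer())  # keep part numbers as single, un-tokenized terms

writer = IndexWriter(store, analyzer, True)  # store is a hypothetical Directory
doc = Document()
doc.add(Field("partnum", "Q36",
              Field.Store.YES, Field.Index.ANALYZED))
doc.add(Field("description", "Illuminated space shuttle",
              Field.Store.YES, Field.Index.ANALYZED))
writer.addDocument(doc)
writer.close()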