Java Lucene StopFilter not removing stopwords?

I built a custom Lucene analyzer for my domain, but after getting unexpected results, I decided to grab the analyzer's TokenStream and debug it by hand.

Doing so, I found that it doesn't seem to filter out my stopwords, and I can't understand why.

Here is the text I used as a test (a random Twitter item; sorry if it doesn't make any sense):

Ke #rosicata il #pareggio nel #recupero del #recupero di #vantaggiato del #Livorno a #Catania 1-1 #finale spegne i #sogni degli #etnei di #raggiungere i #playoff per la #promozione in #serieA #catanialivorno square squareformat iphoneography instagramapp uploaded:by=instagram

And here are my stopwords (stored in a file loaded by the analyzer):

square 
squareformat 
iphoneography 
instagramapp 
uploaded:by=instagram

Finally, here is the output (one line = one token):

rosicata
pareggio
recupero
recupero
vantaggiato
livorno
catania
finale
finale spegne
spegne
sogni
etnei
raggiungere
playoff
promozione
seriea
seriea catanialivorno
seriea catanialivorno square
catanialivorno
catanialivorno square
catanialivorno square squareformat
square
square squareformat
square squareformat iphoneography
squareformat
squareformat iphoneography
squareformat iphoneography instagramapp
iphoneography
iphoneography instagramapp
instagramapp
instagram

As you can see, the last lines contain exactly the content I wanted the filter to remove.

Here is the filtering code:

@Override
protected TokenStreamComponents createComponents(String string) {
    final StandardTokenizer src = new StandardTokenizer();
    src.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH);
    TokenStream tokenStream = new StandardFilter(src);
    // From StandardAnalyzer
    tokenStream = new LowerCaseFilter(tokenStream);
    // Custom filters
    // Filter emails, uris and numbers
    Set<String> stopTypes = new HashSet<>();
    stopTypes.add("<URL>");
    stopTypes.add("<NUM>");
    stopTypes.add("<EMAIL>");
    tokenStream = new TypeTokenFilter(tokenStream, stopTypes);
    // Non latin removal
    tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("\\P{InBasic_Latin}"), "", true);
    tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("^(?=.*\\d).+$"), "", true);
    // Remove words containing www
    tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile(".*(www).*"), "", true);
    // Remove special tags like uploaded:by=instagram
    tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile(".+:.+(=.+)?"), "", true);
    // Remove words shorter than 3 characters
    tokenStream = new LengthFilter(tokenStream, 3, 25);
    // Stopwords
    tokenStream = new StopFilter(tokenStream, stopwordsCollection);
    // N-Grams
    tokenStream = new ShingleFilter(tokenStream, 3);
    // HACK - ShingleFilter uses fillers like _ for some reasons and there's no way to disable it now, so we replace all the _ with empty strings
    tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile(".*_.*"), "", true);
    tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("\\b(\\w+)\\s+\\1\\b"), "", true);
    // Stopwords
    tokenStream = new StopFilter(tokenStream, stopwordsCollection);
    // Final trim
    tokenStream = new TrimFilter(tokenStream);
    // Set CharTerm attribute
    tokenStream.addAttribute(CharTermAttribute.class);
    tokenStream.addAttribute(FuzzyTermsEnum.LevenshteinAutomataAttribute.class);
    return new TokenStreamComponents(src, tokenStream) {
        @Override
        protected void setReader(final Reader reader) {
            src.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH);
            super.setReader(reader);
        }
    };        
}

To double-check, I set a breakpoint just before this method returns: stopwordsCollection is a CharArraySet and it contains the same words as my file, so they are being loaded correctly.

My first thought was that the ShingleFilter was interfering with stopword removal, but then I noticed that the output also contains square, which is a single-token stopword.

Can anyone help me fix this?

EDIT: for clarity, here is also the code I use to print the tokens:

TokenStream tokenStream = null;
try {
    tokenStream = new MyAnalyzer().tokenStream("text", item.toString());
    tokenStream.reset();
    // Iterate over the stream to process single words
    while (tokenStream.incrementToken()) {
        CharTermAttribute charTerm = tokenStream.getAttribute(CharTermAttribute.class);
        System.out.println(charTerm.toString());
    }
    // Perform end-of-stream operations, e.g. set the final offset.
    tokenStream.end();
} catch (IOException ex) {
} finally {
    try {
        // Close the stream to release resources
        if (tokenStream != null) {
            tokenStream.close();
        }
    } catch (IOException ex) {
    }
}

EDIT 2: it turns out I had stored the stopwords with trailing spaces. After fixing that, there is still one minor issue.
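In case it helps others: with trailing whitespace, the set stores the key "square " rather than "square", so the exact-match lookup against a clean token fails. A minimal sketch of trimming each line while loading (the `RAW_LINES` list and the `loadStopwords` name are illustrative, not my actual loading code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StopwordLoading {
    // Simulated file contents, stored with trailing spaces as in my original file
    static final List<String> RAW_LINES = Arrays.asList(
            "square ", "squareformat ", "iphoneography ",
            "instagramapp ", "uploaded:by=instagram");

    // Trim every line before adding it, so lookups against clean tokens succeed
    static Set<String> loadStopwords(List<String> lines) {
        Set<String> stopwords = new HashSet<>();
        for (String line : lines) {
            String word = line.trim();
            if (!word.isEmpty()) {
                stopwords.add(word);
            }
        }
        return stopwords;
    }

    public static void main(String[] args) {
        Set<String> untrimmed = new HashSet<>(RAW_LINES);
        Set<String> trimmed = loadStopwords(RAW_LINES);
        // "square " is not equal to "square", so the untrimmed set misses the token
        System.out.println(untrimmed.contains("square")); // false
        System.out.println(trimmed.contains("square"));   // true
    }
}
```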

The current output is:

rosicata
pareggio
recupero
recupero
vantaggiato
livorno
catania
finale
finale spegne
spegne
sogni
etnei
raggiungere
playoff
promozione
seriea
seriea catanialivorno
catanialivorno
instagram

As you can see, the last word is instagram, which comes from uploaded:by=instagram. Now, uploaded:by=instagram is also a stopword, and I'm still using the regex-based filter that is supposed to remove this kind of pattern:

tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile(".+:.+(=.+)?"), "", true);

I also tried moving it to be the first filter in the chain, but I still get instagram.
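One plausible explanation (an assumption on my part, not confirmed): StandardTokenizer splits at punctuation such as ':' and '=' before any TokenFilter runs, so by the time the PatternReplaceFilter sees the stream, uploaded:by=instagram has already been broken into separate tokens, and none of the fragments match the pattern. The pattern's behavior can be checked with plain java.util.regex:

```java
import java.util.regex.Pattern;

public class TagPatternDemo {
    // Same pattern as in the analyzer code above
    static final Pattern TAG = Pattern.compile(".+:.+(=.+)?");

    public static void main(String[] args) {
        // The pattern matches the whole, untokenized tag...
        System.out.println(TAG.matcher("uploaded:by=instagram").matches()); // true
        // ...but if the tokenizer has already split it apart, the filter
        // only ever sees individual fragments, none of which match
        System.out.println(TAG.matcher("uploaded").matches());  // false
        System.out.println(TAG.matcher("by").matches());        // false
        System.out.println(TAG.matcher("instagram").matches()); // false
    }
}
```

If that's right, the pattern would need to be applied before tokenization (e.g. via a CharFilter) rather than as a TokenFilter.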

