diff --git a/src/java/org/apache/lucene/analysis/package.html b/src/java/org/apache/lucene/analysis/package.html
index b4356e7eef9..8036b32ede8 100644
--- a/src/java/org/apache/lucene/analysis/package.html
+++ b/src/java/org/apache/lucene/analysis/package.html
@@ -72,10 +72,10 @@ There are many post tokenization steps that can be done, including (but not limi
 up incoming text into tokens. In most cases, an Analyzer will use a Tokenizer as
 the first step in the analysis process.
 • {@link org.apache.lucene.analysis.TokenFilter} – A TokenFilter is also a {@link org.apache.lucene.analysis.TokenStream} and is responsible
-   for modifying tokenss that have been created by the Tokenizer. Common modifications performed by a
+   for modifying tokens that have been created by the Tokenizer. Common modifications performed by a
   TokenFilter are: deletion, stemming, synonym injection, and down casing. Not all Analyzers require TokenFilters
 •
-  Since Lucene 2.9 the TokenStream API has changed. Please see section "New TokenStream API" below for details.
+  Lucene 2.9 introduces a new TokenStream API. Please see the section "New TokenStream API" below for more details.

    Hints, Tips and Traps

@@ -358,6 +358,9 @@ public class MyAnalyzer extends Analyzer {
     while (stream.incrementToken()) {
       System.out.println(termAtt.term());
     }
+
+    stream.end();
+    stream.close();
   }
 }
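The second hunk adds `end()` and `close()` to the example consumer, documenting the full call order: drain the stream with `incrementToken()` until it returns `false`, then call `end()`, then `close()`. A minimal sketch of that lifecycle is below; `MockTokenStream` is a hypothetical stand-in so the sketch compiles without lucene-core on the classpath — real code would obtain the stream from `Analyzer.tokenStream(...)` and read terms through an attribute, as in the patched example.

```java
import java.util.Arrays;
import java.util.Iterator;

public class ConsumerSketch {
    // Hypothetical stand-in for org.apache.lucene.analysis.TokenStream,
    // used only to illustrate the consumer call order the patch documents.
    static class MockTokenStream {
        private final Iterator<String> tokens;
        private String current;

        MockTokenStream(String... toks) {
            this.tokens = Arrays.asList(toks).iterator();
        }

        // Advance to the next token; false once the stream is exhausted.
        boolean incrementToken() {
            if (!tokens.hasNext()) {
                return false;
            }
            current = tokens.next();
            return true;
        }

        String term() {
            return current;
        }

        void end() {
            // The real TokenStream.end() records end-of-stream state here.
        }

        void close() {
            // The real close() releases underlying resources (e.g. the Reader).
        }
    }

    public static void main(String[] args) {
        MockTokenStream stream = new MockTokenStream("quick", "brown", "fox");
        while (stream.incrementToken()) {
            System.out.println(stream.term());
        }
        stream.end();   // the step the patch adds: signal end-of-stream,
        stream.close(); // then free resources.
    }
}
```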