diff --git a/src/java/org/apache/lucene/analysis/package.html b/src/java/org/apache/lucene/analysis/package.html
index 93e36764e62..28b8e65a510 100644
--- a/src/java/org/apache/lucene/analysis/package.html
+++ b/src/java/org/apache/lucene/analysis/package.html
@@ -12,26 +12,40 @@ Lucene, indexing and search library, accepts only plain text input.
-Applications that build their search capabilities upon Lucene may support documents in various formats - HTML, XML, PDF, Word - just to name a few.
+Applications that build their search capabilities upon Lucene may support documents in various formats – HTML, XML, PDF, Word – just to name a few.
 Lucene does not care about the Parsing of these and other document formats, and it is the responsibility of the
-application using Lucene to use an appropriate Parser to convert the original format into plain text, before passing that plain text to Lucene.
+application using Lucene to use an appropriate Parser to convert the original format into plain text before passing that plain text to Lucene.
-Plain text passed to Lucene for indexing goes through a process generally called tokenization - namely breaking of the
-input text into small indexing elements - Tokens. The way that the input text is broken into tokens very
-much dictates the further search capabilities of the index into which that text was added. Sentences
-beginnings and endings can be identified to provide for more accurate phrase and proximity searches
-(though sentence identification is not provided by Lucene).
+Plain text passed to Lucene for indexing goes through a process generally called tokenization – namely breaking of the
+input text into small indexing elements –
+{@link org.apache.lucene.analysis.Token Tokens}.
+The way input text is broken into tokens very
+much dictates further capabilities of search upon that text.
+For instance, sentence beginnings and endings can be identified to provide for more accurate phrase
+and proximity searches (though sentence identification is not provided by Lucene).
-In some cases simply breaking the input text into tokens is not enough - a deeper Analysis is needed,
+In some cases simply breaking the input text into tokens is not enough – a deeper Analysis is needed,
 providing for several functions, including (but not limited to):
-Lucene Java provides a number of analysis capabilities, the most commonly used one being the {@link
+
+  Lucene Java provides a number of analysis capabilities, the most commonly used one being the {@link
 org.apache.lucene.analysis.standard.StandardAnalyzer}. Many applications will have a long and industrious life with nothing more than the StandardAnalyzer. However, there are a few other classes/packages that are worth mentioning:
-Analysis is one of the main causes of performance degradation during indexing. Simply put, the more you analyze the slower the indexing (in most cases).
+
+  Analysis is one of the main causes of performance degradation during indexing. Simply put, the more you analyze the slower the indexing (in most cases).
 Perhaps your application would be just fine using the simple {@link org.apache.lucene.analysis.WhitespaceTokenizer} combined with a
-{@link org.apache.lucene.analysis.StopFilter}.
+  {@link org.apache.lucene.analysis.StopFilter}. The contrib/benchmark library can be useful for testing out the speed of the analysis process.
+
+
+  Applications usually do not invoke analysis – Lucene does it for them:
+
+      Analyzer analyzer = new StandardAnalyzer(); // or any other analyzer
+      TokenStream ts = analyzer.tokenStream("myfield",new StringReader("some text goes here"));
+      Token t = ts.next();
+      while (t!=null) {
+        System.out.println("token: "+t);
+        t = ts.next();
+      }
+
+
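+
+  As a sketch of the lighter-weight combination mentioned above, an analyzer that only splits on whitespace
+  and drops stop words can be assembled entirely from existing classes. The anonymous Analyzer below and the
+  choice of StopAnalyzer.ENGLISH_STOP_WORDS are illustrative only, not a recommendation:
+
+      Analyzer simpleAnalyzer = new Analyzer() {
+        public TokenStream tokenStream(String fieldName, Reader reader) {
+          // illustrative only: whitespace tokenization followed by stop word removal
+          return new StopFilter(new WhitespaceTokenizer(reader),
+                                StopAnalyzer.ENGLISH_STOP_WORDS);
+        }
+      };
+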
+  Selecting the "correct" analyzer is crucial
+  for search quality, and can also affect indexing and search performance.
+  The "correct" analyzer differs between applications.
+  Lucene Java's wiki page AnalysisParalysis provides some data on "analyzing your analyzer".
+  Here are some rules of thumb:
+
 Creating your own Analyzer is straightforward. It usually involves either wrapping an existing Tokenizer and set of TokenFilters to create a new Analyzer
 or creating both the Analyzer and a Tokenizer or TokenFilter. Before pursuing this approach, you may find it worthwhile to explore the contrib/analyzers library
 and/or ask on the java-user@lucene.apache.org mailing list first to see if what you need already exists.
 If you are still committed to creating your own Analyzer or TokenStream derivation (Tokenizer or TokenFilter) have a look at
-the source code of any one of the many samples located in this package.
+the source code of any one of the many samples located in this package.
+
+
+  The following sections discuss some aspects of implementing your own analyzer.
+
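+
+  Before turning to those aspects, here is a flavor of what a minimal TokenStream derivation can look like:
+  a TokenFilter that drops tokens shorter than a given length. The class is hypothetical and written only for
+  illustration, using the Token-returning next() API shown elsewhere on this page:
+
+      // hypothetical filter, for illustration only: drops tokens shorter than minLength
+      public final class MinLengthFilter extends TokenFilter {
+        private final int minLength;
+
+        public MinLengthFilter(TokenStream input, int minLength) {
+          super(input);
+          this.minLength = minLength;
+        }
+
+        public Token next() throws IOException {
+          // pull tokens from the wrapped stream, skipping the short ones
+          for (Token t = input.next(); t != null; t = input.next()) {
+            if (t.termText().length() >= minLength) {
+              return t;
+            }
+          }
+          return null; // end of stream
+        }
+      }
+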
+
+  When {@link org.apache.lucene.document.Document#add(org.apache.lucene.document.Fieldable) document.add(field)}
+  is called multiple times for the same field name, we could say that each such call creates a new
+  section for that field in that document.
+  In fact, a separate call to
+  {@link org.apache.lucene.analysis.Analyzer#tokenStream(java.lang.String, java.io.Reader) tokenStream(field,reader)}
+  would take place for each of these so-called "sections".
+  However, the default Analyzer behavior is to treat all these sections as one large section.
+  This allows phrase search and proximity search to seamlessly cross
+  boundaries between these "sections".
+  In other words, if a certain field "f" is added like this:
+
+      document.add(new Field("f","first ends",...));
+      document.add(new Field("f","starts two",...));
+      indexWriter.addDocument(document);
+
+  Then, a phrase search for "ends starts" would find that document.
+  Where desired, this behavior can be modified by introducing a "position gap" between consecutive field "sections",
+  simply by overriding
+  {@link org.apache.lucene.analysis.Analyzer#getPositionIncrementGap(java.lang.String) Analyzer.getPositionIncrementGap(fieldName)}:
+
+      Analyzer myAnalyzer = new StandardAnalyzer() {
+        public int getPositionIncrementGap(String fieldName) {
+          return 10;
+        }
+      };
+
+
+  By default, all tokens created by Analyzers and Tokenizers have a
+  {@link org.apache.lucene.analysis.Token#getPositionIncrement() position increment} of one.
+  This means that the position stored for that token in the index would be one more than
+  that of the previous token.
+  Recall that phrase and proximity searches rely on position info.
+
+
+  If the selected analyzer filters the stop words "is" and "the", then for a document
+  containing the string "blue is the sky", only the tokens "blue", "sky" are indexed,
+  with position("sky") = 1 + position("blue"). Now, a phrase query "blue is the sky"
+  would find that document, because the same analyzer filters the same stop words from
+  that query. But also the phrase query "blue sky" would find that document.
+
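+
+  One way to observe this is to print each token together with its position increment. The sketch below
+  uses an arbitrary whitespace-plus-stop-words chain purely for illustration; the increments actually
+  printed depend on the analyzer (and the StopFilter settings) in use:
+
+      // arbitrary sample chain, for inspecting what an analyzer emits
+      TokenStream ts = new StopFilter(
+          new WhitespaceTokenizer(new StringReader("blue is the sky")),
+          StopFilter.makeStopSet(new String[]{"is", "the"}));
+      for (Token t = ts.next(); t != null; t = ts.next()) {
+        System.out.println(t.termText() + " increment=" + t.getPositionIncrement());
+      }
+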
+
+  If this behavior does not fit the application's needs,
+  a modified analyzer can be used that further increments the positions of
+  tokens following a removed stop word, using
+  {@link org.apache.lucene.analysis.Token#setPositionIncrement(int)}.
+  This can be done with something like:
+
+      public TokenStream tokenStream(final String fieldName, Reader reader) {
+        final TokenStream ts = someAnalyzer.tokenStream(fieldName, reader);
+        TokenStream res = new TokenStream() {
+          public Token next() throws IOException {
+            int extraIncrement = 0;
+            while (true) {
+              Token t = ts.next();
+              if (t!=null) {
+                if (stopwords.contains(t.termText())) {
+                  extraIncrement++; // filter this word
+                  continue;
+                }
+                if (extraIncrement>0) {
+                  t.setPositionIncrement(t.getPositionIncrement()+extraIncrement);
+                }
+              }
+              return t;
+            }
+          }
+        };
+        return res;
+      }
+
+  Now, with this modified analyzer, the phrase query "blue sky" would find that document.
+  But note that this is not yet a perfect solution, because any phrase query "blue w1 w2 sky"
+  where both w1 and w2 are stop words would match that document.
+
+
+  A few more use cases for modifying position increments are:
+