LUCENE-1701, LUCENE-1687: Add NumericField, make plain text numeric parsers public in FieldCache, move trie parsers to FieldCache, merge ExtendedFieldCache and FieldCache

git-svn-id: https://svn.apache.org/repos/asf/lucene/java/trunk@787723 13f79535-47bb-0310-9956-ffa450edef68
This commit is contained in:
Uwe Schindler 2009-06-23 15:42:12 +00:00
parent 73b462a823
commit edfce675a5
28 changed files with 892 additions and 806 deletions

View File

@ -40,6 +40,19 @@ Changes in backwards compatibility policy
values internally in certain places, so if you have hits with such
scores it will cause problems. (Shai Erera via Mike McCandless)
2. LUCENE-1687: All methods and parsers from the interface ExtendedFieldCache
were moved into FieldCache. ExtendedFieldCache is now deprecated and
contains only a few declarations for binary backwards compatibility;
it will be removed in version 3.0. Users of FieldCache and
ExtendedFieldCache will therefore be able to plug in Lucene 2.9
without recompilation. The auto cache (FieldCache.getAuto) is now
deprecated. Due to the merge of ExtendedFieldCache and FieldCache,
this method can now return long[] arrays in addition to int[],
float[] and StringIndex. The interface changes only affect users
who implement these interfaces themselves, which is unlikely,
because there is no way to plug a custom FieldCache implementation
into Lucene. (Grant Ingersoll, Uwe Schindler)
Changes in runtime behavior
1. LUCENE-1424: QueryParser now by default uses constant score query
@ -415,12 +428,17 @@ Bug fixes
See the Javadocs for NGramDistance.java for a reference paper on why
this is helpful (Tom Morton via Grant Ingersoll)
27. LUCENE-1470, LUCENE-1582, LUCENE-1602, LUCENE-1673: Added
NumericRangeQuery and NumericRangeFilter, a fast alternative to
27. LUCENE-1470, LUCENE-1582, LUCENE-1602, LUCENE-1673, LUCENE-1701:
Added NumericRangeQuery and NumericRangeFilter, a fast alternative to
RangeQuery/RangeFilter for numeric searches. They depend on a specific
structure of terms in the index that can be created by indexing
using the new NumericTokenStream class. (Uwe Schindler,
Yonik Seeley, Mike McCandless)
using the new NumericField or NumericTokenStream classes. NumericField
can only be used for indexing; optionally it also stores the value in
its string representation in the doc store. Documents returned from
IndexReader/IndexSearcher therefore return only the String value
through the standard Fieldable interface. NumericFields can be sorted
on and loaded into the FieldCache. (Uwe Schindler, Yonik Seeley,
Mike McCandless)
28. LUCENE-1405: Added support for Ant resource collections in contrib/ant
<index> task. (Przemyslaw Sztoch via Erik Hatcher)
@ -429,6 +447,10 @@ Bug fixes
in conjunction with any other ways to specify stored field values,
currently binary or string values. (yonik)
30. LUCENE-1701: Made the standard FieldCache.Parsers public and added
parsers for fields generated using NumericField/NumericTokenStream.
All standard parsers now also implement Serializable and enforce
their singleton status. (Uwe Schindler, Mike McCandless)
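The two entries above fit together roughly as follows. This is a hypothetical sketch against the 2.9 API added by this commit (the field name "price", the precisionStep of 4 and the values are illustrative only):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;

public class NumericIndexingSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(),
        true, IndexWriter.MaxFieldLength.UNLIMITED);

    // Index a long value; NumericField trie-encodes it at index time.
    Document doc = new Document();
    doc.add(new NumericField("price", 4, Field.Store.YES, true).setLongValue(1500L));
    writer.addDocument(doc);
    writer.close();

    // Query the same field with the fast trie-based range query.
    IndexSearcher searcher = new IndexSearcher(dir);
    Query q = NumericRangeQuery.newLongRange("price", 4,
        new Long(1000L), new Long(2000L), true, true);
    TopDocs hits = searcher.search(q, 10);

    // The stored value comes back as a plain String via Fieldable.
    String stored = searcher.doc(hits.scoreDocs[0].doc).get("price");
  }
}
```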
Optimizations

View File

@ -42,7 +42,7 @@
<property name="Name" value="Lucene"/>
<property name="dev.version" value="2.9-dev"/>
<property name="version" value="${dev.version}"/>
<property name="compatibility.tag" value="lucene_2_4_back_compat_tests_20090614"/>
<property name="compatibility.tag" value="lucene_2_4_back_compat_tests_20090623"/>
<property name="spec.version" value="${version}"/>
<property name="year" value="2000-${current.year}"/>
<property name="final.name" value="lucene-${name}-${version}"/>

View File

@ -18,19 +18,26 @@ package org.apache.lucene.analysis;
*/
import org.apache.lucene.util.NumericUtils;
import org.apache.lucene.document.NumericField; // for javadocs
import org.apache.lucene.search.NumericRangeQuery; // for javadocs
import org.apache.lucene.search.NumericRangeFilter; // for javadocs
import org.apache.lucene.search.SortField; // for javadocs
import org.apache.lucene.search.FieldCache; // javadocs
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
/**
* This class provides a {@link TokenStream} for indexing numeric values
* <b>Expert:</b> This class provides a {@link TokenStream} for indexing numeric values
* that can be used by {@link NumericRangeQuery}/{@link NumericRangeFilter}.
* For more information about how to use this class and its configuration properties
* (<a href="../search/NumericRangeQuery.html#precisionStepDesc"><code>precisionStep</code></a>),
* read the docs of {@link NumericRangeQuery}.
*
* <p><b>For easy usage during indexing, there is {@link NumericField}, which uses the optimal
* indexing settings (no norms, no term freqs). {@link NumericField} is a wrapper around this
* expert token stream.</b>
*
* <p>This stream is not intended to be used in analyzers; it is more for iterating over the
* different precisions when indexing a specific numeric value.
* A numeric value is indexed as multiple string encoded terms, each reduced
@ -64,12 +71,16 @@ import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
* writer.addDocument(document);
* ...
* </pre>
*
* <p><em>Please note:</em> Token streams are read when the document is added to the index.
* If you index more than one numeric field, use a separate instance for each field.
*
* <p>Values indexed by this stream can be sorted on or loaded into the field cache.
* For that factories like {@link NumericUtils#getLongSortField} are available,
* as well as parsers for filling the field cache (e.g., {@link NumericUtils#FIELD_CACHE_LONG_PARSER})
* <p>Values indexed by this stream can be loaded into the {@link FieldCache}
* and can be sorted (use {@link SortField}{@code .TYPE} to specify the correct
* type; {@link SortField#AUTO} does not work with this type of field)
*
* <p><font color="red"><b>NOTE:</b> This API is experimental and
* might change in incompatible ways in the next release.</font>
*
* @since 2.9
*/
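The sort note in the javadoc above can be sketched like this (a minimal sketch; the field name "price" is hypothetical, and it assumes the SortField(String, FieldCache.Parser) constructor and the NUMERIC_UTILS parsers introduced by this commit):

```java
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class NumericSortSketch {
  // SortField.AUTO cannot detect trie-encoded terms, so pass the matching
  // NumericUtils parser explicitly for a long-valued numeric field:
  static final Sort BY_PRICE =
      new Sort(new SortField("price", FieldCache.NUMERIC_UTILS_LONG_PARSER));
}
```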

View File

@ -19,7 +19,6 @@ package org.apache.lucene.document;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.RangeQuery;
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import org.apache.lucene.search.NumericRangeQuery; // for javadocs
import org.apache.lucene.util.NumericUtils; // for javadocs
@ -50,11 +49,11 @@ import java.util.Calendar; // for javadoc
* date/time are.
* For indexing a {@link Date} or {@link Calendar}, just get the unix timestamp as
* <code>long</code> using {@link Date#getTime} or {@link Calendar#getTimeInMillis} and
* index this as a numeric value with {@link NumericTokenStream}
* index this as a numeric value with {@link NumericField}
* and use {@link NumericRangeQuery} to query it.
*
* @deprecated If you build a new index, use {@link DateTools} or
* {@link NumericTokenStream} instead.
* {@link NumericField} instead.
* This class is included for use with existing
* indices and will be removed in a future release.
*/

View File

@ -22,7 +22,6 @@ import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import org.apache.lucene.search.NumericRangeQuery; // for javadocs
import org.apache.lucene.util.NumericUtils; // for javadocs
@ -46,7 +45,7 @@ import org.apache.lucene.util.NumericUtils; // for javadocs
* date/time are.
* For indexing a {@link Date} or {@link Calendar}, just get the unix timestamp as
* <code>long</code> using {@link Date#getTime} or {@link Calendar#getTimeInMillis} and
* index this as a numeric value with {@link NumericTokenStream}
* index this as a numeric value with {@link NumericField}
* and use {@link NumericRangeQuery} to query it.
*/
public class DateTools {

View File

@ -17,7 +17,7 @@ package org.apache.lucene.document;
* limitations under the License.
*/
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import org.apache.lucene.document.NumericField; // for javadocs
import org.apache.lucene.search.NumericRangeQuery; // for javadocs
import org.apache.lucene.util.NumericUtils; // for javadocs
@ -39,7 +39,7 @@ import org.apache.lucene.util.NumericUtils; // for javadocs
* @deprecated For new indexes use {@link NumericUtils} instead, which
* provides a sortable binary representation (prefix encoded) of numeric
* values.
* To index and efficiently query numeric values use {@link NumericTokenStream}
* To index and efficiently query numeric values use {@link NumericField}
* and {@link NumericRangeQuery}.
* This class is included for use with existing
* indices and will be removed in a future release.

View File

@ -0,0 +1,192 @@
package org.apache.lucene.document;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.io.Reader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.NumericTokenStream;
import org.apache.lucene.search.NumericRangeQuery; // javadocs
import org.apache.lucene.search.NumericRangeFilter; // javadocs
import org.apache.lucene.search.SortField; // javadocs
import org.apache.lucene.search.FieldCache; // javadocs
/**
* This class provides a {@link Field} for indexing numeric values
* that can be used by {@link NumericRangeQuery}/{@link NumericRangeFilter}.
* For more information about how to use this class and its configuration properties
* (<a href="../search/NumericRangeQuery.html#precisionStepDesc"><code>precisionStep</code></a>),
* read the docs of {@link NumericRangeQuery}.
*
* <p>A numeric value is indexed as multiple string encoded terms, each reduced
* by zeroing bits from the right. Each value is also prefixed (in the first char) by the
* <code>shift</code> value (number of bits removed) used during encoding.
* The number of bits removed from the right for each trie entry is called
* <code>precisionStep</code> in this API.
*
* <p>The usage pattern is:
* <pre>
* document.add(
* new NumericField(name, precisionStep, Field.Store.XXX, true).set<em>???</em>Value(value)
* );
* </pre>
* <p>For optimal performance, re-use the NumericField and {@link Document} instance
* for more than one document:
* <pre>
* <em>// init</em>
* NumericField field = new NumericField(name, precisionStep, Field.Store.XXX, true);
* Document doc = new Document();
* doc.add(field);
* <em>// use this code to index many documents:</em>
* field.set<em>???</em>Value(value1);
* writer.addDocument(doc);
* field.set<em>???</em>Value(value2);
* writer.addDocument(doc);
* ...
* </pre>
*
* <p>More advanced users can instead use {@link NumericTokenStream} directly when
* indexing numbers. This class is a wrapper around that token stream for easier,
* more intuitive usage.
*
* <p><b>Please note:</b> This class is only used during indexing. You can also create
* numeric stored fields with it, but when retrieving the stored field value
* from a {@link Document} instance after search, you will get a conventional
* {@link Fieldable} instance where the numeric values are returned as {@link String}s
* (according to <code>toString(value)</code> of the used data type).
*
* <p>Values indexed by this field can be loaded into the {@link FieldCache}
* and can be sorted (use {@link SortField}{@code .TYPE} to specify the correct
* type; {@link SortField#AUTO} does not work with this type of field)
*
* <p><font color="red"><b>NOTE:</b> This API is experimental and
* might change in incompatible ways in the next release.</font>
*
* @since 2.9
*/
public final class NumericField extends AbstractField {
private final NumericTokenStream tokenStream;
/**
* Creates a field for numeric values. The instance is not yet initialized with
* a numeric value; before indexing a document containing this field,
* set a value using the various set<em>???</em>Value() methods.
* This constructor creates an indexed, but not stored field.
* @param name the field name
* @param precisionStep the used <a href="../search/NumericRangeQuery.html#precisionStepDesc">precision step</a>
*/
public NumericField(String name, int precisionStep) {
this(name, precisionStep, Field.Store.NO, true);
}
/**
* Creates a field for numeric values. The instance is not yet initialized with
* a numeric value; before indexing a document containing this field,
* set a value using the various set<em>???</em>Value() methods.
* @param name the field name
* @param precisionStep the used <a href="../search/NumericRangeQuery.html#precisionStepDesc">precision step</a>
* @param store if the field should be stored in plain text form
* (according to <code>toString(value)</code> of the used data type)
* @param index if the field should be indexed using {@link NumericTokenStream}
*/
public NumericField(String name, int precisionStep, Field.Store store, boolean index) {
super(name, store, index ? Field.Index.ANALYZED_NO_NORMS : Field.Index.NO, Field.TermVector.NO);
setOmitTermFreqAndPositions(true);
tokenStream = new NumericTokenStream(precisionStep);
}
/** Returns a {@link NumericTokenStream} for indexing the numeric value. */
public TokenStream tokenStreamValue() {
return isIndexed() ? tokenStream : null;
}
/** Always returns <code>null</code> for numeric fields */
public byte[] binaryValue() {
return null;
}
/** Always returns <code>null</code> for numeric fields */
public byte[] getBinaryValue(byte[] result){
return null;
}
/** Always returns <code>null</code> for numeric fields */
public Reader readerValue() {
return null;
}
/** Returns the numeric value as a string (how it is stored when {@link Field.Store#YES} is chosen). */
public String stringValue() {
return (fieldsData == null) ? null : fieldsData.toString();
}
/** Returns the current numeric value as a subclass of {@link Number}, or <code>null</code> if not yet initialized. */
public Number getNumericValue() {
return (Number) fieldsData;
}
/**
* Initializes the field with the supplied <code>long</code> value.
* @param value the numeric value
* @return this instance, so that you can use it like this:
* <code>document.add(new NumericField(name, precisionStep).setLongValue(value))</code>
*/
public NumericField setLongValue(final long value) {
tokenStream.setLongValue(value);
fieldsData = new Long(value);
return this;
}
/**
* Initializes the field with the supplied <code>int</code> value.
* @param value the numeric value
* @return this instance, so that you can use it like this:
* <code>document.add(new NumericField(name, precisionStep).setIntValue(value))</code>
*/
public NumericField setIntValue(final int value) {
tokenStream.setIntValue(value);
fieldsData = new Integer(value);
return this;
}
/**
* Initializes the field with the supplied <code>double</code> value.
* @param value the numeric value
* @return this instance, so that you can use it like this:
* <code>document.add(new NumericField(name, precisionStep).setDoubleValue(value))</code>
*/
public NumericField setDoubleValue(final double value) {
tokenStream.setDoubleValue(value);
fieldsData = new Double(value);
return this;
}
/**
* Initializes the field with the supplied <code>float</code> value.
* @param value the numeric value
* @return this instance, so that you can use it like this:
* <code>document.add(new NumericField(name, precisionStep).setFloatValue(value))</code>
*/
public NumericField setFloatValue(final float value) {
tokenStream.setFloatValue(value);
fieldsData = new Float(value);
return this;
}
}
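The reuse pattern documented in the class javadoc above, written out as a hedged sketch (the writer, the field name "timestamp", the precisionStep of 4 and the values are all hypothetical):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.IndexWriter;

public class NumericFieldReuse {
  static void indexValues(IndexWriter writer, long[] values) throws Exception {
    // init once: one NumericField and one Document, reused for every value
    NumericField field = new NumericField("timestamp", 4, Field.Store.NO, true);
    Document doc = new Document();
    doc.add(field);
    for (int i = 0; i < values.length; i++) {
      // the chainable setter re-initializes the underlying token stream
      field.setLongValue(values[i]);
      writer.addDocument(doc);
    }
  }
}
```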

View File

@ -21,84 +21,29 @@ import org.apache.lucene.index.IndexReader;
import java.io.IOException;
/**
*
*
* This interface is obsolete; use {@link FieldCache} instead.
* @deprecated Will be removed in Lucene 3.0
**/
public interface ExtendedFieldCache extends FieldCache {
public interface LongParser extends Parser {
/**
* Return an long representation of this field's value.
*/
public long parseLong(String string);
/** @deprecated Use {@link FieldCache#DEFAULT}; this will be removed in Lucene 3.0 */
public static ExtendedFieldCache EXT_DEFAULT = (ExtendedFieldCache) FieldCache.DEFAULT;
/** @deprecated Use {@link FieldCache.LongParser}, this will be removed in Lucene 3.0 */
public interface LongParser extends FieldCache.LongParser {
}
public interface DoubleParser extends Parser {
/**
* Return an long representation of this field's value.
*/
public double parseDouble(String string);
/** @deprecated Use {@link FieldCache.DoubleParser}, this will be removed in Lucene 3.0 */
public interface DoubleParser extends FieldCache.DoubleParser {
}
public static ExtendedFieldCache EXT_DEFAULT = (ExtendedFieldCache)FieldCache.DEFAULT;
/** @deprecated Will be removed in 3.0, this is for binary compatibility only */
public long[] getLongs(IndexReader reader, String field, ExtendedFieldCache.LongParser parser)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is
* found, reads the terms in <code>field</code> as longs and returns an array
* of size <code>reader.maxDoc()</code> of the value each document
* has in the given field.
*
* @param reader Used to get field values.
* @param field Which field contains the longs.
* @return The values in the given field for each document.
* @throws java.io.IOException If any error occurs.
*/
public long[] getLongs(IndexReader reader, String field)
throws IOException;
/** @deprecated Will be removed in 3.0, this is for binary compatibility only */
public double[] getDoubles(IndexReader reader, String field, ExtendedFieldCache.DoubleParser parser)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is found,
* reads the terms in <code>field</code> as longs and returns an array of
* size <code>reader.maxDoc()</code> of the value each document has in the
* given field.
*
* @param reader Used to get field values.
* @param field Which field contains the longs.
* @param parser Computes integer for string values.
* @return The values in the given field for each document.
* @throws IOException If any error occurs.
*/
public long[] getLongs(IndexReader reader, String field, LongParser parser)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is
* found, reads the terms in <code>field</code> as integers and returns an array
* of size <code>reader.maxDoc()</code> of the value each document
* has in the given field.
*
* @param reader Used to get field values.
* @param field Which field contains the doubles.
* @return The values in the given field for each document.
* @throws IOException If any error occurs.
*/
public double[] getDoubles(IndexReader reader, String field)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is found,
* reads the terms in <code>field</code> as doubles and returns an array of
* size <code>reader.maxDoc()</code> of the value each document has in the
* given field.
*
* @param reader Used to get field values.
* @param field Which field contains the doubles.
* @param parser Computes integer for string values.
* @return The values in the given field for each document.
* @throws IOException If any error occurs.
*/
public double[] getDoubles(IndexReader reader, String field, DoubleParser parser)
throws IOException;
}
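A minimal migration sketch for the deprecation above (the reader and the field name "price" are hypothetical; EXT_DEFAULT keeps compiling against the stub, but FieldCache.DEFAULT is the replacement):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.ExtendedFieldCache;
import org.apache.lucene.search.FieldCache;

public class FieldCacheMigration {
  static long[] oldStyle(IndexReader reader) throws IOException {
    // 2.4 style: still works via the deprecated binary-compatibility stub
    return ExtendedFieldCache.EXT_DEFAULT.getLongs(reader, "price");
  }

  static long[] newStyle(IndexReader reader) throws IOException {
    // 2.9 style: everything now lives in FieldCache
    return FieldCache.DEFAULT.getLongs(reader, "price");
  }
}
```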

View File

@ -1,184 +0,0 @@
package org.apache.lucene.search;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;
import java.io.IOException;
/**
*
*
**/
class ExtendedFieldCacheImpl extends FieldCacheImpl implements ExtendedFieldCache {
private static final LongParser LONG_PARSER = new LongParser() {
public long parseLong(String value) {
return Long.parseLong(value);
}
};
private static final DoubleParser DOUBLE_PARSER = new DoubleParser() {
public double parseDouble(String value) {
return Double.parseDouble(value);
}
};
public long[] getLongs(IndexReader reader, String field) throws IOException {
return getLongs(reader, field, LONG_PARSER);
}
// inherit javadocs
public long[] getLongs(IndexReader reader, String field, LongParser parser)
throws IOException {
return (long[]) longsCache.get(reader, new Entry(field, parser));
}
Cache longsCache = new Cache() {
protected Object createValue(IndexReader reader, Object entryKey)
throws IOException {
Entry entry = (Entry) entryKey;
String field = entry.field;
LongParser parser = (LongParser) entry.custom;
final long[] retArray = new long[reader.maxDoc()];
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term(field));
try {
do {
Term term = termEnum.term();
if (term==null || term.field() != field) break;
long termval = parser.parseLong(term.text());
termDocs.seek (termEnum);
while (termDocs.next()) {
retArray[termDocs.doc()] = termval;
}
} while (termEnum.next());
} catch (StopFillCacheException stop) {
} finally {
termDocs.close();
termEnum.close();
}
return retArray;
}
};
// inherit javadocs
public double[] getDoubles(IndexReader reader, String field)
throws IOException {
return getDoubles(reader, field, DOUBLE_PARSER);
}
// inherit javadocs
public double[] getDoubles(IndexReader reader, String field, DoubleParser parser)
throws IOException {
return (double[]) doublesCache.get(reader, new Entry(field, parser));
}
Cache doublesCache = new Cache() {
protected Object createValue(IndexReader reader, Object entryKey)
throws IOException {
Entry entry = (Entry) entryKey;
String field = entry.field;
DoubleParser parser = (DoubleParser) entry.custom;
final double[] retArray = new double[reader.maxDoc()];
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term (field));
try {
do {
Term term = termEnum.term();
if (term==null || term.field() != field) break;
double termval = parser.parseDouble(term.text());
termDocs.seek (termEnum);
while (termDocs.next()) {
retArray[termDocs.doc()] = termval;
}
} while (termEnum.next());
} catch (StopFillCacheException stop) {
} finally {
termDocs.close();
termEnum.close();
}
return retArray;
}
};
// inherit javadocs
public Object getAuto(IndexReader reader, String field) throws IOException {
return autoCache.get(reader, field);
}
Cache autoCache = new Cache() {
protected Object createValue(IndexReader reader, Object fieldKey)
throws IOException {
String field = ((String)fieldKey).intern();
TermEnum enumerator = reader.terms (new Term (field));
try {
Term term = enumerator.term();
if (term == null) {
throw new RuntimeException ("no terms in field " + field + " - cannot determine sort type");
}
Object ret = null;
if (term.field() == field) {
String termtext = term.text().trim();
/**
* Java 1.4 level code:
if (pIntegers.matcher(termtext).matches())
return IntegerSortedHitQueue.comparator (reader, enumerator, field);
else if (pFloats.matcher(termtext).matches())
return FloatSortedHitQueue.comparator (reader, enumerator, field);
*/
// Java 1.3 level code:
try {
Integer.parseInt (termtext);
ret = getInts (reader, field);
} catch (NumberFormatException nfe1) {
try {
Long.parseLong(termtext);
ret = getLongs (reader, field);
} catch (NumberFormatException nfe2) {
try {
Float.parseFloat (termtext);
ret = getFloats (reader, field);
} catch (NumberFormatException nfe3) {
ret = getStringIndex (reader, field);
}
}
}
} else {
throw new RuntimeException ("field \"" + field + "\" does not appear to be indexed");
}
return ret;
} finally {
enumerator.close();
}
}
};
}

View File

@ -18,7 +18,12 @@ package org.apache.lucene.search;
*/
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.util.NumericUtils;
import org.apache.lucene.document.NumericField; // for javadocs
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import java.io.IOException;
import java.io.Serializable;
/**
* Expert: Maintains caches of term values.
@ -79,19 +84,7 @@ public interface FieldCache {
* is used to specify a custom parser to {@link
* SortField#SortField(String, FieldCache.Parser)}.
*/
public interface Parser {
}
/**
* Expert: when thrown from a custom Parser, this stops
* processing terms and returns the current FieldCache
* array.
*
* <p><b>NOTE</b>: This API is experimental and likely to
* change in incompatible ways, or be removed entirely, in
* the next release.
*/
public static class StopFillCacheException extends RuntimeException {
public interface Parser extends Serializable {
}
/** Interface to parse bytes from document fields.
@ -126,9 +119,149 @@ public interface FieldCache {
public float parseFloat(String string);
}
/** Expert: The cache used internally by sorting and range query classes. */
public static FieldCache DEFAULT = new ExtendedFieldCacheImpl();
/** Interface to parse long from document fields.
* @see FieldCache#getLongs(IndexReader, String, FieldCache.LongParser)
*/
public interface LongParser extends Parser {
/** Return a long representation of this field's value. */
public long parseLong(String string);
}
/** Interface to parse doubles from document fields.
* @see FieldCache#getDoubles(IndexReader, String, FieldCache.DoubleParser)
*/
public interface DoubleParser extends Parser {
/** Return a double representation of this field's value. */
public double parseDouble(String string);
}
/** Expert: The cache used internally by sorting and range query classes. */
public static FieldCache DEFAULT = new FieldCacheImpl();
/** The default parser for byte values, which are encoded by {@link Byte#toString(byte)} */
public static final ByteParser DEFAULT_BYTE_PARSER = new ByteParser() {
public byte parseByte(String value) {
return Byte.parseByte(value);
}
protected Object readResolve() {
return DEFAULT_BYTE_PARSER;
}
};
/** The default parser for short values, which are encoded by {@link Short#toString(short)} */
public static final ShortParser DEFAULT_SHORT_PARSER = new ShortParser() {
public short parseShort(String value) {
return Short.parseShort(value);
}
protected Object readResolve() {
return DEFAULT_SHORT_PARSER;
}
};
/** The default parser for int values, which are encoded by {@link Integer#toString(int)} */
public static final IntParser DEFAULT_INT_PARSER = new IntParser() {
public int parseInt(String value) {
return Integer.parseInt(value);
}
protected Object readResolve() {
return DEFAULT_INT_PARSER;
}
};
/** The default parser for float values, which are encoded by {@link Float#toString(float)} */
public static final FloatParser DEFAULT_FLOAT_PARSER = new FloatParser() {
public float parseFloat(String value) {
return Float.parseFloat(value);
}
protected Object readResolve() {
return DEFAULT_FLOAT_PARSER;
}
};
/** The default parser for long values, which are encoded by {@link Long#toString(long)} */
public static final LongParser DEFAULT_LONG_PARSER = new LongParser() {
public long parseLong(String value) {
return Long.parseLong(value);
}
protected Object readResolve() {
return DEFAULT_LONG_PARSER;
}
};
/** The default parser for double values, which are encoded by {@link Double#toString(double)} */
public static final DoubleParser DEFAULT_DOUBLE_PARSER = new DoubleParser() {
public double parseDouble(String value) {
return Double.parseDouble(value);
}
protected Object readResolve() {
return DEFAULT_DOUBLE_PARSER;
}
};
/**
* A parser instance for int values encoded by {@link NumericUtils#intToPrefixCoded(int)}, e.g. when indexed
* via {@link NumericField}/{@link NumericTokenStream}.
*/
public static final IntParser NUMERIC_UTILS_INT_PARSER=new IntParser(){
public int parseInt(String val) {
final int shift = val.charAt(0)-NumericUtils.SHIFT_START_INT;
if (shift>0 && shift<=31)
throw new FieldCacheImpl.StopFillCacheException();
return NumericUtils.prefixCodedToInt(val);
}
protected Object readResolve() {
return NUMERIC_UTILS_INT_PARSER;
}
};
/**
* A parser instance for float values encoded with {@link NumericUtils}, e.g. when indexed
* via {@link NumericField}/{@link NumericTokenStream}.
*/
public static final FloatParser NUMERIC_UTILS_FLOAT_PARSER=new FloatParser(){
public float parseFloat(String val) {
final int shift = val.charAt(0)-NumericUtils.SHIFT_START_INT;
if (shift>0 && shift<=31)
throw new FieldCacheImpl.StopFillCacheException();
return NumericUtils.sortableIntToFloat(NumericUtils.prefixCodedToInt(val));
}
protected Object readResolve() {
return NUMERIC_UTILS_FLOAT_PARSER;
}
};
/**
* A parser instance for long values encoded by {@link NumericUtils#longToPrefixCoded(long)}, e.g. when indexed
* via {@link NumericField}/{@link NumericTokenStream}.
*/
public static final LongParser NUMERIC_UTILS_LONG_PARSER = new LongParser(){
public long parseLong(String val) {
final int shift = val.charAt(0)-NumericUtils.SHIFT_START_LONG;
if (shift>0 && shift<=63)
throw new FieldCacheImpl.StopFillCacheException();
return NumericUtils.prefixCodedToLong(val);
}
protected Object readResolve() {
return NUMERIC_UTILS_LONG_PARSER;
}
};
/**
* A parser instance for double values encoded with {@link NumericUtils}, e.g. when indexed
* via {@link NumericField}/{@link NumericTokenStream}.
*/
public static final DoubleParser NUMERIC_UTILS_DOUBLE_PARSER = new DoubleParser(){
public double parseDouble(String val) {
final int shift = val.charAt(0)-NumericUtils.SHIFT_START_LONG;
if (shift>0 && shift<=63)
throw new FieldCacheImpl.StopFillCacheException();
return NumericUtils.sortableLongToDouble(NumericUtils.prefixCodedToLong(val));
}
protected Object readResolve() {
return NUMERIC_UTILS_DOUBLE_PARSER;
}
};
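The readResolve() idiom that enforces singleton status for all of the parser constants above can be shown in isolation with a plain-Java stand-in (the class and interface names here are hypothetical, not Lucene APIs): deserializing a copy hands back the shared instance.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SingletonParserDemo {
    public interface IntParser extends Serializable {
        int parseInt(String value);
    }

    /** Shared singleton, analogous to DEFAULT_INT_PARSER above. */
    public static final IntParser DEFAULT_INT_PARSER = new Impl();

    private static class Impl implements IntParser {
        public int parseInt(String value) { return Integer.parseInt(value); }
        /** Called by ObjectInputStream: replaces the fresh copy with the singleton. */
        protected Object readResolve() { return DEFAULT_INT_PARSER; }
    }

    /** Serializes the singleton, deserializes it, and checks instance identity. */
    public static boolean roundTripIsSameInstance() throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(DEFAULT_INT_PARSER);
        Object copy = new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();
        return copy == DEFAULT_INT_PARSER;  // readResolve enforced the singleton
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTripIsSameInstance());  // prints "true"
    }
}
```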
/** Checks the internal cache for an appropriate entry, and if none is
* found, reads the terms in <code>field</code> as a single byte and returns an array
* of size <code>reader.maxDoc()</code> of the value each document
@ -228,6 +361,65 @@ public interface FieldCache {
*/
public float[] getFloats (IndexReader reader, String field,
FloatParser parser) throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is
* found, reads the terms in <code>field</code> as longs and returns an array
* of size <code>reader.maxDoc()</code> of the value each document
* has in the given field.
*
* @param reader Used to get field values.
* @param field Which field contains the longs.
* @return The values in the given field for each document.
* @throws java.io.IOException If any error occurs.
*/
public long[] getLongs(IndexReader reader, String field)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is found,
* reads the terms in <code>field</code> as longs and returns an array of
* size <code>reader.maxDoc()</code> of the value each document has in the
* given field.
*
* @param reader Used to get field values.
* @param field Which field contains the longs.
* @param parser Computes long for string values.
* @return The values in the given field for each document.
* @throws IOException If any error occurs.
*/
public long[] getLongs(IndexReader reader, String field, LongParser parser)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is
* found, reads the terms in <code>field</code> as doubles and returns an array
* of size <code>reader.maxDoc()</code> of the value each document
* has in the given field.
*
* @param reader Used to get field values.
* @param field Which field contains the doubles.
* @return The values in the given field for each document.
* @throws IOException If any error occurs.
*/
public double[] getDoubles(IndexReader reader, String field)
throws IOException;
/**
* Checks the internal cache for an appropriate entry, and if none is found,
* reads the terms in <code>field</code> as doubles and returns an array of
* size <code>reader.maxDoc()</code> of the value each document has in the
* given field.
*
* @param reader Used to get field values.
* @param field Which field contains the doubles.
* @param parser Computes double for string values.
* @return The values in the given field for each document.
* @throws IOException If any error occurs.
*/
public double[] getDoubles(IndexReader reader, String field, DoubleParser parser)
throws IOException;
/** Checks the internal cache for an appropriate entry, and if none
* is found, reads the term values in <code>field</code> and returns an array
@ -254,15 +446,18 @@ public interface FieldCache {
throws IOException;
/** Checks the internal cache for an appropriate entry, and if
* none is found reads <code>field</code> to see if it contains integers, floats
* none is found reads <code>field</code> to see if it contains integers, longs, floats
* or strings, and then calls one of the other methods in this class to get the
* values. For string values, a StringIndex is returned. After
* calling this method, there is an entry in the cache for both
* type <code>AUTO</code> and the actual found type.
* @param reader Used to get field values.
* @param field Which field contains the values.
* @return int[], float[] or StringIndex.
* @return int[], long[], float[] or StringIndex.
* @throws IOException If any error occurs.
* @deprecated Please specify the exact type, instead.
* Especially, guessing does <b>not</b> work with the new
* {@link NumericField} type.
*/
public Object getAuto (IndexReader reader, String field)
throws IOException;


@ -37,9 +37,17 @@ import java.util.WeakHashMap;
* @since lucene 1.4
* @version $Id$
*/
class FieldCacheImpl
implements FieldCache {
// TODO: change interface to FieldCache in 3.0 when removed
class FieldCacheImpl implements ExtendedFieldCache {
/**
* Hack: When thrown from one of the NUMERIC_UTILS_* parsers, this stops
* term enumeration and returns the FieldCache array filled so far.
*/
static final class StopFillCacheException extends RuntimeException {
}
/** Expert: Internal cache. */
abstract static class Cache {
private final Map readerCache = new WeakHashMap();
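The Cache base class keys cached arrays by IndexReader in a WeakHashMap, so entries disappear automatically when a reader is garbage collected and each value is computed at most once per reader. A stripped-down sketch of that pattern (the `Reader` type and `createValue` body are hypothetical stand-ins):

```java
import java.util.Map;
import java.util.WeakHashMap;

// Sketch of a per-reader value cache backed by a WeakHashMap, as in FieldCacheImpl.Cache.
// "Reader" stands in for IndexReader; createValue stands in for the per-type loaders.
public class ReaderCacheDemo {
    static class Reader {}  // hypothetical stand-in for IndexReader

    private final Map<Reader, Object> readerCache = new WeakHashMap<Reader, Object>();
    private int computations = 0;

    Object createValue(Reader r) { computations++; return new int[4]; }

    synchronized Object get(Reader r) {
        Object v = readerCache.get(r);
        if (v == null) {                 // compute once per reader
            v = createValue(r);
            readerCache.put(r, v);
        }
        return v;
    }

    int computations() { return computations; }

    public static void main(String[] args) {
        ReaderCacheDemo cache = new ReaderCacheDemo();
        Reader r = new Reader();
        System.out.println(cache.get(r) == cache.get(r));  // same array both times
        System.out.println(cache.computations());          // value built only once
    }
}
```

The weak keys matter: holding the arrays in a normal HashMap would pin closed readers (and their large value arrays) in memory for the life of the cache.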
@ -140,34 +148,9 @@ implements FieldCache {
}
}
private static final ByteParser BYTE_PARSER = new ByteParser() {
public byte parseByte(String value) {
return Byte.parseByte(value);
}
};
private static final ShortParser SHORT_PARSER = new ShortParser() {
public short parseShort(String value) {
return Short.parseShort(value);
}
};
private static final IntParser INT_PARSER = new IntParser() {
public int parseInt(String value) {
return Integer.parseInt(value);
}
};
private static final FloatParser FLOAT_PARSER = new FloatParser() {
public float parseFloat(String value) {
return Float.parseFloat(value);
}
};
// inherit javadocs
public byte[] getBytes (IndexReader reader, String field) throws IOException {
return getBytes(reader, field, BYTE_PARSER);
return getBytes(reader, field, null);
}
// inherit javadocs
@ -183,6 +166,9 @@ implements FieldCache {
Entry entry = (Entry) entryKey;
String field = entry.field;
ByteParser parser = (ByteParser) entry.custom;
if (parser == null) {
return getBytes(reader, field, FieldCache.DEFAULT_BYTE_PARSER);
}
final byte[] retArray = new byte[reader.maxDoc()];
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term (field));
@ -207,7 +193,7 @@ implements FieldCache {
// inherit javadocs
public short[] getShorts (IndexReader reader, String field) throws IOException {
return getShorts(reader, field, SHORT_PARSER);
return getShorts(reader, field, null);
}
// inherit javadocs
@ -223,6 +209,9 @@ implements FieldCache {
Entry entry = (Entry) entryKey;
String field = entry.field;
ShortParser parser = (ShortParser) entry.custom;
if (parser == null) {
return getShorts(reader, field, FieldCache.DEFAULT_SHORT_PARSER);
}
final short[] retArray = new short[reader.maxDoc()];
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term (field));
@ -247,7 +236,7 @@ implements FieldCache {
// inherit javadocs
public int[] getInts (IndexReader reader, String field) throws IOException {
return getInts(reader, field, INT_PARSER);
return getInts(reader, field, null);
}
// inherit javadocs
@ -263,7 +252,14 @@ implements FieldCache {
Entry entry = (Entry) entryKey;
String field = entry.field;
IntParser parser = (IntParser) entry.custom;
final int[] retArray = new int[reader.maxDoc()];
if (parser == null) {
try {
return getInts(reader, field, DEFAULT_INT_PARSER);
} catch (NumberFormatException ne) {
return getInts(reader, field, NUMERIC_UTILS_INT_PARSER);
}
}
int[] retArray = null;
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term (field));
try {
@ -271,6 +267,8 @@ implements FieldCache {
Term term = termEnum.term();
if (term==null || term.field() != field) break;
int termval = parser.parseInt(term.text());
if (retArray == null) // late init
retArray = new int[reader.maxDoc()];
termDocs.seek (termEnum);
while (termDocs.next()) {
retArray[termDocs.doc()] = termval;
@ -281,6 +279,8 @@ implements FieldCache {
termDocs.close();
termEnum.close();
}
if (retArray == null) // no values
retArray = new int[reader.maxDoc()];
return retArray;
}
};
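The `retArray == null` checks introduced above implement a late-initialization pattern: the maxDoc-sized array is allocated only once the first term is actually parsed (so a parser that throws StopFillCacheException immediately costs nothing), with a fallback allocation after the loop for fields that have no values at all. A self-contained sketch of the same pattern (the doc/value pairs are invented data):

```java
// Sketch of the late-init pattern from the int/float/long/double caches:
// allocate the result array lazily, and fall back to an empty allocation
// when the field contains no terms at all.
public class LateInitDemo {
    /** values[i] belongs to doc docs[i]; maxDoc is the array size to return. */
    static int[] fill(int maxDoc, int[] docs, int[] values) {
        int[] retArray = null;
        for (int i = 0; i < docs.length; i++) {
            if (retArray == null)            // late init: first value seen
                retArray = new int[maxDoc];
            retArray[docs[i]] = values[i];
        }
        if (retArray == null)                // no values: still return a full array
            retArray = new int[maxDoc];
        return retArray;
    }

    public static void main(String[] args) {
        int[] filled = fill(4, new int[]{1, 3}, new int[]{7, 9});
        System.out.println(filled[1] + " " + filled[3]);
        System.out.println(fill(4, new int[0], new int[0]).length);
    }
}
```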
@ -289,7 +289,7 @@ implements FieldCache {
// inherit javadocs
public float[] getFloats (IndexReader reader, String field)
throws IOException {
return getFloats(reader, field, FLOAT_PARSER);
return getFloats(reader, field, null);
}
// inherit javadocs
@ -305,7 +305,14 @@ implements FieldCache {
Entry entry = (Entry) entryKey;
String field = entry.field;
FloatParser parser = (FloatParser) entry.custom;
final float[] retArray = new float[reader.maxDoc()];
if (parser == null) {
try {
return getFloats(reader, field, DEFAULT_FLOAT_PARSER);
} catch (NumberFormatException ne) {
return getFloats(reader, field, NUMERIC_UTILS_FLOAT_PARSER);
}
}
float[] retArray = null;
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term (field));
try {
@ -313,6 +320,8 @@ implements FieldCache {
Term term = termEnum.term();
if (term==null || term.field() != field) break;
float termval = parser.parseFloat(term.text());
if (retArray == null) // late init
retArray = new float[reader.maxDoc()];
termDocs.seek (termEnum);
while (termDocs.next()) {
retArray[termDocs.doc()] = termval;
@ -323,6 +332,123 @@ implements FieldCache {
termDocs.close();
termEnum.close();
}
if (retArray == null) // no values
retArray = new float[reader.maxDoc()];
return retArray;
}
};
public long[] getLongs(IndexReader reader, String field) throws IOException {
return getLongs(reader, field, null);
}
// inherit javadocs
public long[] getLongs(IndexReader reader, String field, FieldCache.LongParser parser)
throws IOException {
return (long[]) longsCache.get(reader, new Entry(field, parser));
}
/** @deprecated Will be removed in 3.0, this is for binary compatibility only */
public long[] getLongs(IndexReader reader, String field, ExtendedFieldCache.LongParser parser)
throws IOException {
return (long[]) longsCache.get(reader, new Entry(field, parser));
}
Cache longsCache = new Cache() {
protected Object createValue(IndexReader reader, Object entryKey)
throws IOException {
Entry entry = (Entry) entryKey;
String field = entry.field;
FieldCache.LongParser parser = (FieldCache.LongParser) entry.custom;
if (parser == null) {
try {
return getLongs(reader, field, DEFAULT_LONG_PARSER);
} catch (NumberFormatException ne) {
return getLongs(reader, field, NUMERIC_UTILS_LONG_PARSER);
}
}
long[] retArray = null;
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term(field));
try {
do {
Term term = termEnum.term();
if (term==null || term.field() != field) break;
long termval = parser.parseLong(term.text());
if (retArray == null) // late init
retArray = new long[reader.maxDoc()];
termDocs.seek (termEnum);
while (termDocs.next()) {
retArray[termDocs.doc()] = termval;
}
} while (termEnum.next());
} catch (StopFillCacheException stop) {
} finally {
termDocs.close();
termEnum.close();
}
if (retArray == null) // no values
retArray = new long[reader.maxDoc()];
return retArray;
}
};
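When no parser is supplied, the caches above first try the plain-text DEFAULT_*_PARSER and, if that throws NumberFormatException, retry with the matching NUMERIC_UTILS_* parser for trie-encoded fields. The dispatch can be sketched like this (the two decoders are invented placeholders, not the real Lucene parsers):

```java
// Sketch of the null-parser fallback: try the plain-text parser first,
// and on NumberFormatException assume the field holds trie/prefix-encoded terms.
public class ParserFallbackDemo {
    interface LongDecoder { long decode(String term); }

    static final LongDecoder PLAIN = Long::parseLong;          // like DEFAULT_LONG_PARSER
    static final LongDecoder ENCODED = term ->                 // hypothetical stand-in for
        Long.parseLong(term.substring(1), 16);                 // NUMERIC_UTILS_LONG_PARSER

    static long decode(String firstTerm, LongDecoder parser) {
        if (parser == null) {
            try {
                return decode(firstTerm, PLAIN);               // plain-text field?
            } catch (NumberFormatException nfe) {
                return decode(firstTerm, ENCODED);             // fall back to encoded form
            }
        }
        return parser.decode(firstTerm);
    }

    public static void main(String[] args) {
        System.out.println(decode("42", null));    // parsed as plain text
        System.out.println(decode("x2a", null));   // plain parse fails, encoded path used
    }
}
```

In FieldCacheImpl the recursive call goes back through the cache, so whichever parser succeeds becomes the cached entry for the field.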
// inherit javadocs
public double[] getDoubles(IndexReader reader, String field)
throws IOException {
return getDoubles(reader, field, null);
}
// inherit javadocs
public double[] getDoubles(IndexReader reader, String field, FieldCache.DoubleParser parser)
throws IOException {
return (double[]) doublesCache.get(reader, new Entry(field, parser));
}
/** @deprecated Will be removed in 3.0, this is for binary compatibility only */
public double[] getDoubles(IndexReader reader, String field, ExtendedFieldCache.DoubleParser parser)
throws IOException {
return (double[]) doublesCache.get(reader, new Entry(field, parser));
}
Cache doublesCache = new Cache() {
protected Object createValue(IndexReader reader, Object entryKey)
throws IOException {
Entry entry = (Entry) entryKey;
String field = entry.field;
FieldCache.DoubleParser parser = (FieldCache.DoubleParser) entry.custom;
if (parser == null) {
try {
return getDoubles(reader, field, DEFAULT_DOUBLE_PARSER);
} catch (NumberFormatException ne) {
return getDoubles(reader, field, NUMERIC_UTILS_DOUBLE_PARSER);
}
}
double[] retArray = null;
TermDocs termDocs = reader.termDocs();
TermEnum termEnum = reader.terms (new Term (field));
try {
do {
Term term = termEnum.term();
if (term==null || term.field() != field) break;
double termval = parser.parseDouble(term.text());
if (retArray == null) // late init
retArray = new double[reader.maxDoc()];
termDocs.seek (termEnum);
while (termDocs.next()) {
retArray[termDocs.doc()] = termval;
}
} while (termEnum.next());
} catch (StopFillCacheException stop) {
} finally {
termDocs.close();
termEnum.close();
}
if (retArray == null) // no values
retArray = new double[reader.maxDoc()];
return retArray;
}
};
@ -439,6 +565,11 @@ implements FieldCache {
return autoCache.get(reader, field);
}
/**
* @deprecated Please specify the exact type, instead.
* Especially, guessing does <b>not</b> work with the new
* {@link NumericField} type.
*/
Cache autoCache = new Cache() {
protected Object createValue(IndexReader reader, Object fieldKey)
@ -448,33 +579,27 @@ implements FieldCache {
try {
Term term = enumerator.term();
if (term == null) {
throw new RuntimeException ("no terms in field " + field + " - cannot determine sort type");
throw new RuntimeException ("no terms in field " + field + " - cannot determine type");
}
Object ret = null;
if (term.field() == field) {
String termtext = term.text().trim();
/**
* Java 1.4 level code:
if (pIntegers.matcher(termtext).matches())
return IntegerSortedHitQueue.comparator (reader, enumerator, field);
else if (pFloats.matcher(termtext).matches())
return FloatSortedHitQueue.comparator (reader, enumerator, field);
*/
// Java 1.3 level code:
try {
Integer.parseInt (termtext);
ret = getInts (reader, field);
} catch (NumberFormatException nfe1) {
try {
Long.parseLong(termtext);
ret = getLongs (reader, field);
} catch (NumberFormatException nfe2) {
try {
Float.parseFloat (termtext);
ret = getFloats (reader, field);
} catch (NumberFormatException nfe3) {
ret = getStringIndex (reader, field);
}
}
}
} else {
throw new RuntimeException ("field \"" + field + "\" does not appear to be indexed");
@ -486,12 +611,13 @@ implements FieldCache {
}
};
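The autoCache above guesses a field's type by cascading through the wrapper parsers: if the first term parses as an int it loads ints, else longs, else floats, else it falls back to a string index. The decision chain alone can be sketched as:

```java
// Sketch of the getAuto type-guessing cascade. It inspects only the first
// term's text; with NumericField's prefix-coded terms this guess fails,
// which is one reason getAuto is deprecated.
public class AutoDetectDemo {
    static String detectType(String firstTermText) {
        String termtext = firstTermText.trim();
        try {
            Integer.parseInt(termtext);
            return "int";
        } catch (NumberFormatException nfe1) {
            try {
                Long.parseLong(termtext);          // new in this commit: longs before floats
                return "long";
            } catch (NumberFormatException nfe2) {
                try {
                    Float.parseFloat(termtext);
                    return "float";
                } catch (NumberFormatException nfe3) {
                    return "string";               // fall back to a string index
                }
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(detectType("123"));
        System.out.println(detectType("9999999999"));  // too big for int
        System.out.println(detectType("1.5"));
        System.out.println(detectType("hello"));
    }
}
```

Note the ordering: longs must be tried before floats, because any long-shaped term would also parse as a float and the long path would never be reached.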
// inherit javadocs
/** @deprecated */
public Comparable[] getCustom(IndexReader reader, String field,
SortComparator comparator) throws IOException {
return (Comparable[]) customCache.get(reader, new Entry(field, comparator));
}
/** @deprecated */
Cache customCache = new Cache() {
protected Object createValue(IndexReader reader, Object entryKey)


@ -22,8 +22,8 @@ import java.text.Collator;
import java.util.Locale;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.ExtendedFieldCache.DoubleParser;
import org.apache.lucene.search.ExtendedFieldCache.LongParser;
import org.apache.lucene.search.FieldCache.DoubleParser;
import org.apache.lucene.search.FieldCache.LongParser;
import org.apache.lucene.search.FieldCache.ByteParser;
import org.apache.lucene.search.FieldCache.FloatParser;
import org.apache.lucene.search.FieldCache.IntParser;
@ -71,9 +71,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = parser != null ? ExtendedFieldCache.EXT_DEFAULT
.getBytes(reader, field, parser) : ExtendedFieldCache.EXT_DEFAULT
.getBytes(reader, field);
currentReaderValues = FieldCache.DEFAULT.getBytes(reader, field, parser);
}
public void setBottom(final int bottom) {
@ -134,7 +132,7 @@ public abstract class FieldComparator {
}
/** Parses field's values as double (using {@link
* ExtendedFieldCache#getDoubles} and sorts by ascending value */
* FieldCache#getDoubles} and sorts by ascending value */
public static final class DoubleComparator extends FieldComparator {
private final double[] values;
private double[] currentReaderValues;
@ -176,9 +174,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = parser != null ? ExtendedFieldCache.EXT_DEFAULT
.getDoubles(reader, field, parser) : ExtendedFieldCache.EXT_DEFAULT
.getDoubles(reader, field);
currentReaderValues = FieldCache.DEFAULT.getDoubles(reader, field, parser);
}
public void setBottom(final int bottom) {
@ -241,8 +237,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = parser != null ? FieldCache.DEFAULT.getFloats(
reader, field, parser) : FieldCache.DEFAULT.getFloats(reader, field);
currentReaderValues = FieldCache.DEFAULT.getFloats(reader, field, parser);
}
public void setBottom(final int bottom) {
@ -309,8 +304,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = parser != null ? FieldCache.DEFAULT.getInts(reader,
field, parser) : FieldCache.DEFAULT.getInts(reader, field);
currentReaderValues = FieldCache.DEFAULT.getInts(reader, field, parser);
}
public void setBottom(final int bottom) {
@ -327,7 +321,7 @@ public abstract class FieldComparator {
}
/** Parses field's values as long (using {@link
* ExtendedFieldCache#getLongs} and sorts by ascending value */
* FieldCache#getLongs} and sorts by ascending value */
public static final class LongComparator extends FieldComparator {
private final long[] values;
private long[] currentReaderValues;
@ -373,9 +367,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = parser != null ? ExtendedFieldCache.EXT_DEFAULT
.getLongs(reader, field, parser) : ExtendedFieldCache.EXT_DEFAULT
.getLongs(reader, field);
currentReaderValues = FieldCache.DEFAULT.getLongs(reader, field, parser);
}
public void setBottom(final int bottom) {
@ -471,9 +463,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = parser != null ? ExtendedFieldCache.EXT_DEFAULT
.getShorts(reader, field, parser) : ExtendedFieldCache.EXT_DEFAULT
.getShorts(reader, field);
currentReaderValues = FieldCache.DEFAULT.getShorts(reader, field, parser);
}
public void setBottom(final int bottom) {
@ -537,8 +527,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = ExtendedFieldCache.EXT_DEFAULT.getStrings(reader,
field);
currentReaderValues = FieldCache.DEFAULT.getStrings(reader, field);
}
public void setBottom(final int bottom) {
@ -664,7 +653,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
StringIndex currentReaderValues = ExtendedFieldCache.EXT_DEFAULT.getStringIndex(reader, field);
StringIndex currentReaderValues = FieldCache.DEFAULT.getStringIndex(reader, field);
currentReaderGen++;
order = currentReaderValues.order;
lookup = currentReaderValues.lookup;
@ -756,8 +745,7 @@ public abstract class FieldComparator {
}
public void setNextReader(IndexReader reader, int docBase, int numSlotsFull) throws IOException {
currentReaderValues = ExtendedFieldCache.EXT_DEFAULT.getStrings(reader,
field);
currentReaderValues = FieldCache.DEFAULT.getStrings(reader, field);
}
public void setBottom(final int bottom) {


@ -205,10 +205,10 @@ extends PriorityQueue {
comparator = comparatorFloat (reader, fieldname, (FieldCache.FloatParser)parser);
break;
case SortField.LONG:
comparator = comparatorLong(reader, fieldname, (ExtendedFieldCache.LongParser)parser);
comparator = comparatorLong(reader, fieldname, (FieldCache.LongParser)parser);
break;
case SortField.DOUBLE:
comparator = comparatorDouble(reader, fieldname, (ExtendedFieldCache.DoubleParser)parser);
comparator = comparatorDouble(reader, fieldname, (FieldCache.DoubleParser)parser);
break;
case SortField.SHORT:
comparator = comparatorShort(reader, fieldname, (FieldCache.ShortParser)parser);
@ -240,9 +240,7 @@ extends PriorityQueue {
static ScoreDocComparator comparatorByte(final IndexReader reader, final String fieldname, final FieldCache.ByteParser parser)
throws IOException {
final String field = fieldname.intern();
final byte[] fieldOrder = (parser==null)
? FieldCache.DEFAULT.getBytes(reader, field)
: FieldCache.DEFAULT.getBytes(reader, field, parser);
final byte[] fieldOrder = FieldCache.DEFAULT.getBytes(reader, field, parser);
return new ScoreDocComparator() {
public final int compare (final ScoreDoc i, final ScoreDoc j) {
@ -273,9 +271,7 @@ extends PriorityQueue {
static ScoreDocComparator comparatorShort(final IndexReader reader, final String fieldname, final FieldCache.ShortParser parser)
throws IOException {
final String field = fieldname.intern();
final short[] fieldOrder = (parser==null)
? FieldCache.DEFAULT.getShorts(reader, field)
: FieldCache.DEFAULT.getShorts(reader, field, parser);
final short[] fieldOrder = FieldCache.DEFAULT.getShorts(reader, field, parser);
return new ScoreDocComparator() {
public final int compare (final ScoreDoc i, final ScoreDoc j) {
@ -306,9 +302,7 @@ extends PriorityQueue {
static ScoreDocComparator comparatorInt (final IndexReader reader, final String fieldname, final FieldCache.IntParser parser)
throws IOException {
final String field = fieldname.intern();
final int[] fieldOrder = (parser==null)
? FieldCache.DEFAULT.getInts(reader, field)
: FieldCache.DEFAULT.getInts(reader, field, parser);
final int[] fieldOrder = FieldCache.DEFAULT.getInts(reader, field, parser);
return new ScoreDocComparator() {
public final int compare (final ScoreDoc i, final ScoreDoc j) {
@ -336,12 +330,10 @@ extends PriorityQueue {
* @return Comparator for sorting hits.
* @throws IOException If an error occurs reading the index.
*/
static ScoreDocComparator comparatorLong (final IndexReader reader, final String fieldname, final ExtendedFieldCache.LongParser parser)
static ScoreDocComparator comparatorLong (final IndexReader reader, final String fieldname, final FieldCache.LongParser parser)
throws IOException {
final String field = fieldname.intern();
final long[] fieldOrder = (parser==null)
? ExtendedFieldCache.EXT_DEFAULT.getLongs (reader, field)
: ExtendedFieldCache.EXT_DEFAULT.getLongs (reader, field, parser);
final long[] fieldOrder = FieldCache.DEFAULT.getLongs (reader, field, parser);
return new ScoreDocComparator() {
public final int compare (final ScoreDoc i, final ScoreDoc j) {
@ -373,9 +365,7 @@ extends PriorityQueue {
static ScoreDocComparator comparatorFloat (final IndexReader reader, final String fieldname, final FieldCache.FloatParser parser)
throws IOException {
final String field = fieldname.intern();
final float[] fieldOrder = (parser==null)
? FieldCache.DEFAULT.getFloats (reader, field)
: FieldCache.DEFAULT.getFloats (reader, field, parser);
final float[] fieldOrder = FieldCache.DEFAULT.getFloats (reader, field, parser);
return new ScoreDocComparator () {
public final int compare (final ScoreDoc i, final ScoreDoc j) {
@ -403,12 +393,10 @@ extends PriorityQueue {
* @return Comparator for sorting hits.
* @throws IOException If an error occurs reading the index.
*/
static ScoreDocComparator comparatorDouble(final IndexReader reader, final String fieldname, final ExtendedFieldCache.DoubleParser parser)
static ScoreDocComparator comparatorDouble(final IndexReader reader, final String fieldname, final FieldCache.DoubleParser parser)
throws IOException {
final String field = fieldname.intern();
final double[] fieldOrder = (parser==null)
? ExtendedFieldCache.EXT_DEFAULT.getDoubles (reader, field)
: ExtendedFieldCache.EXT_DEFAULT.getDoubles (reader, field, parser);
final double[] fieldOrder = FieldCache.DEFAULT.getDoubles (reader, field, parser);
return new ScoreDocComparator () {
public final int compare (final ScoreDoc i, final ScoreDoc j) {
@ -511,7 +499,7 @@ extends PriorityQueue {
static ScoreDocComparator comparatorAuto (final IndexReader reader, final String fieldname)
throws IOException {
final String field = fieldname.intern();
Object lookupArray = ExtendedFieldCache.EXT_DEFAULT.getAuto (reader, field);
Object lookupArray = FieldCache.DEFAULT.getAuto (reader, field);
if (lookupArray instanceof FieldCache.StringIndex) {
return comparatorString (reader, field);
} else if (lookupArray instanceof int[]) {


@ -18,6 +18,7 @@ package org.apache.lucene.search;
*/
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import org.apache.lucene.document.NumericField; // for javadocs
/**
* Implementation of a {@link Filter} that implements <em>trie-based</em> range filtering
@ -25,7 +26,7 @@ import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
* {@link NumericRangeQuery}.
*
* <p>This filter depends on a specific structure of terms in the index that can only be created
* by indexing using {@link NumericTokenStream}.
* by indexing using {@link NumericField} (expert: {@link NumericTokenStream}).
*
* <p><b>Please note:</b> This class has no constructor; you can create filters depending on the data type
* by using the static factories {@linkplain #newLongRange NumericRangeFilter.newLongRange()},
@ -36,6 +37,10 @@ import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
* new Float(0.3f), new Float(0.10f),
* true, true);
* </pre>
*
* <p><font color="red"><b>NOTE:</b> This API is experimental and
* might change in incompatible ways in the next release.</font>
*
* @since 2.9
**/
public final class NumericRangeFilter extends MultiTermQueryWrapperFilter {


@ -21,6 +21,7 @@ import java.io.IOException;
import java.util.LinkedList;
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import org.apache.lucene.document.NumericField; // for javadocs
import org.apache.lucene.util.NumericUtils;
import org.apache.lucene.util.ToStringUtils;
import org.apache.lucene.index.IndexReader;
@ -33,12 +34,12 @@ import org.apache.lucene.index.Term;
* <h3>Usage</h3>
* <h4>Indexing</h4>
* Before numeric values can be queried, they must be indexed in a special way. You can do this
* by adding numeric fields to the index by specifying a {@link NumericTokenStream}.
* by adding numeric fields to the index by specifying a {@link NumericField} (expert: {@link NumericTokenStream}).
* An important setting is the <a href="#precisionStepDesc"><code>precisionStep</code></a>, which specifies
* how many different precisions per numeric value are indexed to speed up range queries.
* Lower values create more terms but speed up search; higher values create fewer terms but
* slow down search. Suitable values are 2, 4, or 8. A good starting point to test is 4.
* For code examples see {@link NumericTokenStream}.
* For code examples see {@link NumericField}.
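The precisionStep trade-off described in this javadoc is easy to quantify: a value is indexed once per precision level, so a 64-bit long produces roughly ceil(64 / precisionStep) terms. A quick back-of-the-envelope sketch:

```java
// Rough term-count arithmetic behind the precisionStep trade-off:
// each 64-bit value is indexed at every precision level, so smaller
// steps mean more terms in the index but fewer terms to visit per query.
public class PrecisionStepDemo {
    static int termsPerValue(int valueBits, int precisionStep) {
        return (valueBits + precisionStep - 1) / precisionStep;  // ceiling division
    }

    public static void main(String[] args) {
        for (int step : new int[]{2, 4, 8}) {
            System.out.println("precisionStep=" + step
                + " -> " + termsPerValue(64, step) + " terms per long value");
        }
    }
}
```

With step 4 that is 16 terms per long, which matches the guidance above: a moderate index-size cost in exchange for far fewer terms enumerated per range query.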
*
* <h4>Searching</h4>
* <p>This class has no constructor; you can create filters depending on the data type
@ -114,6 +115,10 @@ import org.apache.lucene.index.Term;
* <p>The query is in {@linkplain #setConstantScoreRewrite constant score mode} per default.
* With precision steps of &le;4, this query can be run in conventional {@link BooleanQuery}
* rewrite mode without changing the max clause count.
*
* <p><font color="red"><b>NOTE:</b> This API is experimental and
* might change in incompatible ways in the next release.</font>
*
* @since 2.9
**/
public final class NumericRangeQuery extends MultiTermQuery {


@ -21,6 +21,7 @@ import java.io.IOException;
import java.io.Serializable;
import java.util.Locale;
import org.apache.lucene.document.NumericField; // javadocs
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;
@ -50,7 +51,10 @@ implements Serializable {
* to look at the first term indexed for the field and determine if it
* represents an integer number, a floating point number, or just arbitrary
* string characters.
* @deprecated Please specify the exact type, instead.*/
* @deprecated Please specify the exact type, instead.
* Especially, guessing does <b>not</b> work with the new
* {@link NumericField} type.
*/
public static final int AUTO = 2;
/** Sort using term values as Strings. Sort values are String and lower
@ -161,9 +165,8 @@ implements Serializable {
* @param field Name of field to sort by. Must not be null.
* @param parser Instance of a {@link FieldCache.Parser},
* which must subclass one of the existing numeric
* parsers from {@link FieldCache} or {@link
* ExtendedFieldCache}. Sort type is inferred by testing
* which numeric parser the parser subclasses.
* parsers from {@link FieldCache}. Sort type is inferred
* by testing which numeric parser the parser subclasses.
* @throws IllegalArgumentException if the parser fails to
* subclass an existing numeric parser, or field is null
*/
@ -176,9 +179,8 @@ implements Serializable {
* @param field Name of field to sort by. Must not be null.
* @param parser Instance of a {@link FieldCache.Parser},
* which must subclass one of the existing numeric
* parsers from {@link FieldCache} or {@link
* ExtendedFieldCache}. Sort type is inferred by testing
* which numeric parser the parser subclasses.
* parsers from {@link FieldCache}. Sort type is inferred
* by testing which numeric parser the parser subclasses.
* @param reverse True if natural order should be reversed.
* @throws IllegalArgumentException if the parser fails to
* subclass an existing numeric parser, or field is null
@ -188,10 +190,10 @@ implements Serializable {
else if (parser instanceof FieldCache.FloatParser) initFieldType(field, FLOAT);
else if (parser instanceof FieldCache.ShortParser) initFieldType(field, SHORT);
else if (parser instanceof FieldCache.ByteParser) initFieldType(field, BYTE);
else if (parser instanceof ExtendedFieldCache.LongParser) initFieldType(field, LONG);
else if (parser instanceof ExtendedFieldCache.DoubleParser) initFieldType(field, DOUBLE);
else if (parser instanceof FieldCache.LongParser) initFieldType(field, LONG);
else if (parser instanceof FieldCache.DoubleParser) initFieldType(field, DOUBLE);
else
throw new IllegalArgumentException("Parser instance does not subclass existing numeric parser from FieldCache or ExtendedFieldCache (got " + parser + ")");
throw new IllegalArgumentException("Parser instance does not subclass existing numeric parser from FieldCache (got " + parser + ")");
this.reverse = reverse;
this.parser = parser;
@ -499,7 +501,10 @@ implements Serializable {
}
}
/** Attempts to detect the given field type for an IndexReader. */
/**
* Attempts to detect the given field type for an IndexReader.
* @deprecated
*/
static int detectFieldType(IndexReader reader, String fieldKey) throws IOException {
String field = fieldKey.intern();
TermEnum enumerator = reader.terms(new Term(field));
@ -512,17 +517,6 @@ implements Serializable {
if (term.field() == field) {
String termtext = term.text().trim();
/**
* Java 1.4 level code:
if (pIntegers.matcher(termtext).matches())
return IntegerSortedHitQueue.comparator (reader, enumerator, field);
else if (pFloats.matcher(termtext).matches())
return FloatSortedHitQueue.comparator (reader, enumerator, field);
*/
// Java 1.3 level code:
try {
Integer.parseInt (termtext);
ret = SortField.INT;


@ -62,9 +62,7 @@ public class ByteFieldSource extends FieldCacheSource {
/*(non-Javadoc) @see org.apache.lucene.search.function.FieldCacheSource#getCachedValues(org.apache.lucene.search.FieldCache, java.lang.String, org.apache.lucene.index.IndexReader) */
public DocValues getCachedFieldValues (FieldCache cache, String field, IndexReader reader) throws IOException {
final byte[] arr = (parser==null) ?
cache.getBytes(reader, field) :
cache.getBytes(reader, field, parser);
final byte[] arr = cache.getBytes(reader, field, parser);
return new DocValues() {
/*(non-Javadoc) @see org.apache.lucene.search.function.DocValues#floatVal(int) */
public float floatVal(int doc) {


@ -63,9 +63,7 @@ public class FloatFieldSource extends FieldCacheSource {
/*(non-Javadoc) @see org.apache.lucene.search.function.FieldCacheSource#getCachedValues(org.apache.lucene.search.FieldCache, java.lang.String, org.apache.lucene.index.IndexReader) */
public DocValues getCachedFieldValues (FieldCache cache, String field, IndexReader reader) throws IOException {
final float[] arr = (parser==null) ?
cache.getFloats(reader, field) :
cache.getFloats(reader, field, parser);
final float[] arr = cache.getFloats(reader, field, parser);
return new DocValues() {
/*(non-Javadoc) @see org.apache.lucene.search.function.DocValues#floatVal(int) */
public float floatVal(int doc) {


@ -64,9 +64,7 @@ public class IntFieldSource extends FieldCacheSource {
/*(non-Javadoc) @see org.apache.lucene.search.function.FieldCacheSource#getCachedValues(org.apache.lucene.search.FieldCache, java.lang.String, org.apache.lucene.index.IndexReader) */
public DocValues getCachedFieldValues (FieldCache cache, String field, IndexReader reader) throws IOException {
final int[] arr = (parser==null) ?
cache.getInts(reader, field) :
cache.getInts(reader, field, parser);
final int[] arr = cache.getInts(reader, field, parser);
return new DocValues() {
/*(non-Javadoc) @see org.apache.lucene.search.function.DocValues#floatVal(int) */
public float floatVal(int doc) {


@ -62,9 +62,7 @@ public class ShortFieldSource extends FieldCacheSource {
/*(non-Javadoc) @see org.apache.lucene.search.function.FieldCacheSource#getCachedValues(org.apache.lucene.search.FieldCache, java.lang.String, org.apache.lucene.index.IndexReader) */
public DocValues getCachedFieldValues (FieldCache cache, String field, IndexReader reader) throws IOException {
final short[] arr = (parser==null) ?
cache.getShorts(reader, field) :
cache.getShorts(reader, field, parser);
final short[] arr = cache.getShorts(reader, field, parser);
return new DocValues() {
/*(non-Javadoc) @see org.apache.lucene.search.function.DocValues#floatVal(int) */
public float floatVal(int doc) {

@ -166,8 +166,8 @@ org.apache.lucene.search.Searcher#search(Query,Filter)}.
<a href="NumericRangeQuery.html">NumericRangeQuery</a>
matches all documents that occur in a numeric range.
For NumericRangeQuery to work, you must index the values
using a special <a href="../analysis/NumericTokenStream.html">
NumericTokenStream</a>.
using a special <a href="../document/NumericField.html">
NumericField</a>.
</p>
<h4>
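The documentation change above points users at NumericField instead of the raw NumericTokenStream. A minimal indexing sketch, assuming the constructor signature used by the tests in this commit (`name, precisionStep, store, index`) and an already-open IndexWriter named `writer`:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;

// Sketch only: one NumericField instance can be reused across documents;
// set a new value before each addDocument call.
NumericField price = new NumericField("price", 4, Field.Store.YES, true);
Document doc = new Document();
doc.add(doc != null ? price : price); // add the field once to the document
price.setIntValue(42);               // update the value per document
writer.addDocument(doc);             // writer: an existing IndexWriter (assumed in scope)
```

The field name, precision step, and the `writer` variable are illustrative; only the NumericField API shown in this commit's tests is assumed.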

@ -20,9 +20,6 @@ package org.apache.lucene.util;
import org.apache.lucene.analysis.NumericTokenStream; // for javadocs
import org.apache.lucene.search.NumericRangeQuery; // for javadocs
import org.apache.lucene.search.NumericRangeFilter; // for javadocs
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.search.ExtendedFieldCache;
/**
* This is a helper class to generate prefix-encoded representations for numerical values
@ -57,9 +54,9 @@ import org.apache.lucene.search.ExtendedFieldCache;
* {@link String#compareTo(String)}) representations of numeric data types for other
* usages (e.g. sorting).
*
* <p>Prefix encoded fields can also be sorted using the {@link SortField} factories
* {@link #getLongSortField}, {@link #getIntSortField}, {@link #getDoubleSortField}
* or {@link #getFloatSortField}.
* <p><font color="red"><b>NOTE:</b> This API is experimental and
* might change in incompatible ways in the next release.</font>
*
* @since 2.9
*/
public final class NumericUtils {
@ -92,56 +89,6 @@ public final class NumericUtils {
*/
public static final int INT_BUF_SIZE = 31/7 + 2;
/**
* A parser instance for filling a {@link ExtendedFieldCache}, that parses prefix encoded fields as longs.
*/
public static final ExtendedFieldCache.LongParser FIELD_CACHE_LONG_PARSER=new ExtendedFieldCache.LongParser(){
public final long parseLong(final String val) {
final int shift = val.charAt(0)-SHIFT_START_LONG;
if (shift>0 && shift<=63)
throw new FieldCache.StopFillCacheException();
return prefixCodedToLong(val);
}
};
/**
* A parser instance for filling a {@link FieldCache}, that parses prefix encoded fields as ints.
*/
public static final FieldCache.IntParser FIELD_CACHE_INT_PARSER=new FieldCache.IntParser(){
public final int parseInt(final String val) {
final int shift = val.charAt(0)-SHIFT_START_INT;
if (shift>0 && shift<=31)
throw new FieldCache.StopFillCacheException();
return prefixCodedToInt(val);
}
};
/**
* A parser instance for filling a {@link ExtendedFieldCache}, that parses prefix encoded fields as doubles.
* This uses {@link #sortableLongToDouble} to convert the encoded long to a double.
*/
public static final ExtendedFieldCache.DoubleParser FIELD_CACHE_DOUBLE_PARSER=new ExtendedFieldCache.DoubleParser(){
public final double parseDouble(final String val) {
final int shift = val.charAt(0)-SHIFT_START_LONG;
if (shift>0 && shift<=63)
throw new FieldCache.StopFillCacheException();
return sortableLongToDouble(prefixCodedToLong(val));
}
};
/**
* A parser instance for filling a {@link FieldCache}, that parses prefix encoded fields as floats.
* This uses {@link #sortableIntToFloat} to convert the encoded int to a float.
*/
public static final FieldCache.FloatParser FIELD_CACHE_FLOAT_PARSER=new FieldCache.FloatParser(){
public final float parseFloat(final String val) {
final int shift = val.charAt(0)-SHIFT_START_INT;
if (shift>0 && shift<=31)
throw new FieldCache.StopFillCacheException();
return sortableIntToFloat(prefixCodedToInt(val));
}
};
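The parsers removed above decode via sortableIntToFloat/sortableLongToDouble. The core bit trick behind those conversions can be sketched in plain Java, independent of Lucene: positive IEEE-754 floats already compare like their int bit patterns, and flipping the non-sign bits of negative patterns makes plain signed-int ordering match float ordering. A self-contained sketch (not the Lucene class itself):

```java
// Sketch of the sortable-float encoding used by NumericUtils.
public class SortableFloatSketch {
    // float -> int whose natural signed ordering matches the float ordering
    public static int floatToSortableInt(float val) {
        int bits = Float.floatToIntBits(val);
        if (bits < 0) bits ^= 0x7fffffff; // negative floats: flip all non-sign bits
        return bits;
    }

    // inverse transform
    public static float sortableIntToFloat(int val) {
        if (val < 0) val ^= 0x7fffffff; // undo the flip for negative patterns
        return Float.intBitsToFloat(val);
    }

    public static void main(String[] args) {
        float[] samples = { -10.5f, -0.25f, 0f, 0.25f, 42f };
        for (int i = 1; i < samples.length; i++) {
            // int ordering of the encoded values matches float ordering
            if (!(floatToSortableInt(samples[i - 1]) < floatToSortableInt(samples[i])))
                throw new AssertionError("ordering broken at index " + i);
            // the transform round-trips exactly
            if (sortableIntToFloat(floatToSortableInt(samples[i])) != samples[i])
                throw new AssertionError("round trip broken at index " + i);
        }
        System.out.println("ok");
    }
}
```

The same idea, widened to 64 bits, underlies the double/long pair used by the removed FIELD_CACHE_DOUBLE_PARSER.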
/**
* Expert: Returns prefix coded bits after reducing the precision by <code>shift</code> bits.
* This is method is used by {@link NumericTokenStream}.
@ -152,6 +99,8 @@ public final class NumericUtils {
* @return number of chars written to buffer
*/
public static int longToPrefixCoded(final long val, final int shift, final char[] buffer) {
if (shift>63 || shift<0)
throw new IllegalArgumentException("Illegal shift value, must be 0..63");
int nChars = (63-shift)/7 + 1, len = nChars+1;
buffer[0] = (char)(SHIFT_START_LONG + shift);
long sortableBits = val ^ 0x8000000000000000L;
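The hunk above moves the shift-range check into the char-array overload. The encoding it guards can be sketched standalone: flip the sign bit so the long sorts as if unsigned, drop `shift` low bits, then emit the remaining bits as 7-bit characters, most significant group first, so that String.compareTo order equals numeric order. The `SHIFT_START_LONG` offset below is assumed to match NumericUtils:

```java
// Standalone sketch of NumericUtils-style prefix coding for longs.
public class PrefixCodedSketch {
    static final char SHIFT_START_LONG = 0x20; // assumed to match NumericUtils.SHIFT_START_LONG

    public static String longToPrefixCoded(long val, int shift) {
        if (shift > 63 || shift < 0)
            throw new IllegalArgumentException("Illegal shift value, must be 0..63");
        int nChars = (63 - shift) / 7 + 1;      // 7 payload bits per char
        char[] buffer = new char[nChars + 1];
        buffer[0] = (char) (SHIFT_START_LONG + shift); // precision marker char
        long sortableBits = (val ^ 0x8000000000000000L) >>> shift; // sign flip -> unsigned order
        for (int i = nChars; i >= 1; i--) {     // fill from the least significant group
            buffer[i] = (char) (sortableBits & 0x7f);
            sortableBits >>>= 7;
        }
        return new String(buffer);
    }

    public static void main(String[] args) {
        // lexicographic order of the encodings matches numeric order
        long[] vals = { Long.MIN_VALUE, -42L, -1L, 0L, 1L, 42L, Long.MAX_VALUE };
        for (int i = 1; i < vals.length; i++) {
            if (!(longToPrefixCoded(vals[i - 1], 0).compareTo(longToPrefixCoded(vals[i], 0)) < 0))
                throw new AssertionError("order broken at index " + i);
        }
        System.out.println("ok");
    }
}
```

A nonzero `shift` produces the coarser "prefix" terms that NumericRangeQuery matches against; shift 0 carries full precision.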
@ -173,8 +122,6 @@ public final class NumericUtils {
* @param shift how many bits to strip from the right
*/
public static String longToPrefixCoded(final long val, final int shift) {
if (shift>63 || shift<0)
throw new IllegalArgumentException("Illegal shift value, must be 0..63");
final char[] buffer = new char[LONG_BUF_SIZE];
final int len = longToPrefixCoded(val, shift, buffer);
return new String(buffer, 0, len);
@ -200,6 +147,8 @@ public final class NumericUtils {
* @return number of chars written to buffer
*/
public static int intToPrefixCoded(final int val, final int shift, final char[] buffer) {
if (shift>31 || shift<0)
throw new IllegalArgumentException("Illegal shift value, must be 0..31");
int nChars = (31-shift)/7 + 1, len = nChars+1;
buffer[0] = (char)(SHIFT_START_INT + shift);
int sortableBits = val ^ 0x80000000;
@ -221,8 +170,6 @@ public final class NumericUtils {
* @param shift how many bits to strip from the right
*/
public static String intToPrefixCoded(final int val, final int shift) {
if (shift>31 || shift<0)
throw new IllegalArgumentException("Illegal shift value, must be 0..31");
final char[] buffer = new char[INT_BUF_SIZE];
final int len = intToPrefixCoded(val, shift, buffer);
return new String(buffer, 0, len);
@ -336,26 +283,6 @@ public final class NumericUtils {
return Float.intBitsToFloat(val);
}
/** A factory method, that generates a {@link SortField} instance for sorting prefix encoded long values. */
public static SortField getLongSortField(final String field, final boolean reverse) {
return new SortField(field, FIELD_CACHE_LONG_PARSER, reverse);
}
/** A factory method, that generates a {@link SortField} instance for sorting prefix encoded int values. */
public static SortField getIntSortField(final String field, final boolean reverse) {
return new SortField(field, FIELD_CACHE_INT_PARSER, reverse);
}
/** A factory method, that generates a {@link SortField} instance for sorting prefix encoded double values. */
public static SortField getDoubleSortField(final String field, final boolean reverse) {
return new SortField(field, FIELD_CACHE_DOUBLE_PARSER, reverse);
}
/** A factory method, that generates a {@link SortField} instance for sorting prefix encoded float values. */
public static SortField getFloatSortField(final String field, final boolean reverse) {
return new SortField(field, FIELD_CACHE_FLOAT_PARSER, reverse);
}
/**
* Expert: Splits a long range recursively.
* You may implement a builder that adds clauses to a
@ -451,7 +378,7 @@ public final class NumericUtils {
/**
* Expert: Callback for {@link #splitLongRange}.
* You need to overwrite only one of the methods.
* <p><font color="red">WARNING: This is a very low-level interface,
* <p><font color="red"><b>NOTE:</b> This is a very low-level interface,
* the method signatures may change in later versions.</font>
*/
public static abstract class LongRangeBuilder {
@ -477,7 +404,7 @@ public final class NumericUtils {
/**
* Expert: Callback for {@link #splitIntRange}.
* You need to overwrite only one of the methods.
* <p><font color="red">WARNING: This is a very low-level interface,
* <p><font color="red"><b>NOTE:</b> This is a very low-level interface,
* the method signatures may change in later versions.</font>
*/
public static abstract class IntRangeBuilder {

@ -203,150 +203,7 @@ final class JustCompileSearch {
}
}
static final class JustCompileFieldCache implements FieldCache {
public Object getAuto(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public byte[] getBytes(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public byte[] getBytes(IndexReader reader, String field, ByteParser parser)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
/** @deprecated */
public Comparable[] getCustom(IndexReader reader, String field,
SortComparator comparator) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public float[] getFloats(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public float[] getFloats(IndexReader reader, String field,
FloatParser parser) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public int[] getInts(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public int[] getInts(IndexReader reader, String field, IntParser parser)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public short[] getShorts(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public short[] getShorts(IndexReader reader, String field,
ShortParser parser) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public StringIndex getStringIndex(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public String[] getStrings(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
}
static final class JustCompileExtendedFieldCache implements ExtendedFieldCache {
public double[] getDoubles(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public double[] getDoubles(IndexReader reader, String field,
DoubleParser parser) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public long[] getLongs(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public long[] getLongs(IndexReader reader, String field, LongParser parser)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public Object getAuto(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public byte[] getBytes(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public byte[] getBytes(IndexReader reader, String field, ByteParser parser)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
/** @deprecated */
public Comparable[] getCustom(IndexReader reader, String field,
SortComparator comparator) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public float[] getFloats(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public float[] getFloats(IndexReader reader, String field,
FloatParser parser) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public int[] getInts(IndexReader reader, String field) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public int[] getInts(IndexReader reader, String field, IntParser parser)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public short[] getShorts(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public short[] getShorts(IndexReader reader, String field,
ShortParser parser) throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public StringIndex getStringIndex(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
public String[] getStrings(IndexReader reader, String field)
throws IOException {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
}
}
static final class JustCompileExtendedFieldCacheLongParser implements ExtendedFieldCache.LongParser {
static final class JustCompileExtendedFieldCacheLongParser implements FieldCache.LongParser {
public long parseLong(String string) {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);
@ -354,7 +211,7 @@ final class JustCompileSearch {
}
static final class JustCompileExtendedFieldCacheDoubleParser implements ExtendedFieldCache.DoubleParser {
static final class JustCompileExtendedFieldCacheDoubleParser implements FieldCache.DoubleParser {
public double parseDouble(String string) {
throw new UnsupportedOperationException(UNSUPPORTED_MSG);

@ -1,71 +0,0 @@
package org.apache.lucene.search;
/**
* Copyright 2004 The Apache Software Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.English;
import org.apache.lucene.util.LuceneTestCase;
import java.io.IOException;
public class TestExtendedFieldCache extends LuceneTestCase {
protected IndexReader reader;
private static final int NUM_DOCS = 1000;
public TestExtendedFieldCache(String s) {
super(s);
}
protected void setUp() throws Exception {
super.setUp();
RAMDirectory directory = new RAMDirectory();
IndexWriter writer= new IndexWriter(directory, new WhitespaceAnalyzer(), true, IndexWriter.MaxFieldLength.LIMITED);
long theLong = Long.MAX_VALUE;
double theDouble = Double.MAX_VALUE;
for (int i = 0; i < NUM_DOCS; i++){
Document doc = new Document();
doc.add(new Field("theLong", String.valueOf(theLong--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("theDouble", String.valueOf(theDouble--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("text", English.intToEnglish(i), Field.Store.NO, Field.Index.ANALYZED));
writer.addDocument(doc);
}
writer.close();
reader = IndexReader.open(directory);
}
public void test() throws IOException {
ExtendedFieldCache cache = new ExtendedFieldCacheImpl();
double [] doubles = cache.getDoubles(reader, "theDouble");
assertTrue("doubles Size: " + doubles.length + " is not: " + NUM_DOCS, doubles.length == NUM_DOCS);
for (int i = 0; i < doubles.length; i++) {
assertTrue(doubles[i] + " does not equal: " + (Double.MAX_VALUE - i), doubles[i] == (Double.MAX_VALUE - i));
}
long [] longs = cache.getLongs(reader, "theLong");
assertTrue("longs Size: " + longs.length + " is not: " + NUM_DOCS, longs.length == NUM_DOCS);
for (int i = 0; i < longs.length; i++) {
assertTrue(longs[i] + " does not equal: " + (Long.MAX_VALUE - i), longs[i] == (Long.MAX_VALUE - i));
}
}
}

@ -0,0 +1,118 @@
package org.apache.lucene.search;
/**
* Copyright 2004 The Apache Software Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
import java.io.IOException;
public class TestFieldCache extends LuceneTestCase {
protected IndexReader reader;
private static final int NUM_DOCS = 1000;
public TestFieldCache(String s) {
super(s);
}
protected void setUp() throws Exception {
super.setUp();
RAMDirectory directory = new RAMDirectory();
IndexWriter writer= new IndexWriter(directory, new WhitespaceAnalyzer(), true, IndexWriter.MaxFieldLength.LIMITED);
long theLong = Long.MAX_VALUE;
double theDouble = Double.MAX_VALUE;
byte theByte = Byte.MAX_VALUE;
short theShort = Short.MAX_VALUE;
int theInt = Integer.MAX_VALUE;
float theFloat = Float.MAX_VALUE;
for (int i = 0; i < NUM_DOCS; i++){
Document doc = new Document();
doc.add(new Field("theLong", String.valueOf(theLong--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("theDouble", String.valueOf(theDouble--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("theByte", String.valueOf(theByte--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("theShort", String.valueOf(theShort--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("theInt", String.valueOf(theInt--), Field.Store.NO, Field.Index.NOT_ANALYZED));
doc.add(new Field("theFloat", String.valueOf(theFloat--), Field.Store.NO, Field.Index.NOT_ANALYZED));
writer.addDocument(doc);
}
writer.close();
reader = IndexReader.open(directory);
}
public void test() throws IOException {
FieldCache cache = FieldCache.DEFAULT;
double [] doubles = cache.getDoubles(reader, "theDouble");
assertSame("Second request to cache return same array", doubles, cache.getDoubles(reader, "theDouble"));
assertSame("Second request with explicit parser return same array", doubles, cache.getDoubles(reader, "theDouble", FieldCache.DEFAULT_DOUBLE_PARSER));
assertTrue("doubles Size: " + doubles.length + " is not: " + NUM_DOCS, doubles.length == NUM_DOCS);
for (int i = 0; i < doubles.length; i++) {
assertTrue(doubles[i] + " does not equal: " + (Double.MAX_VALUE - i), doubles[i] == (Double.MAX_VALUE - i));
}
long [] longs = cache.getLongs(reader, "theLong");
assertSame("Second request to cache return same array", longs, cache.getLongs(reader, "theLong"));
assertSame("Second request with explicit parser return same array", longs, cache.getLongs(reader, "theLong", FieldCache.DEFAULT_LONG_PARSER));
assertTrue("longs Size: " + longs.length + " is not: " + NUM_DOCS, longs.length == NUM_DOCS);
for (int i = 0; i < longs.length; i++) {
assertTrue(longs[i] + " does not equal: " + (Long.MAX_VALUE - i), longs[i] == (Long.MAX_VALUE - i));
}
byte [] bytes = cache.getBytes(reader, "theByte");
assertSame("Second request to cache return same array", bytes, cache.getBytes(reader, "theByte"));
assertSame("Second request with explicit parser return same array", bytes, cache.getBytes(reader, "theByte", FieldCache.DEFAULT_BYTE_PARSER));
assertTrue("bytes Size: " + bytes.length + " is not: " + NUM_DOCS, bytes.length == NUM_DOCS);
for (int i = 0; i < bytes.length; i++) {
assertTrue(bytes[i] + " does not equal: " + (Byte.MAX_VALUE - i), bytes[i] == (byte) (Byte.MAX_VALUE - i));
}
short [] shorts = cache.getShorts(reader, "theShort");
assertSame("Second request to cache return same array", shorts, cache.getShorts(reader, "theShort"));
assertSame("Second request with explicit parser return same array", shorts, cache.getShorts(reader, "theShort", FieldCache.DEFAULT_SHORT_PARSER));
assertTrue("shorts Size: " + shorts.length + " is not: " + NUM_DOCS, shorts.length == NUM_DOCS);
for (int i = 0; i < shorts.length; i++) {
assertTrue(shorts[i] + " does not equal: " + (Short.MAX_VALUE - i), shorts[i] == (short) (Short.MAX_VALUE - i));
}
int [] ints = cache.getInts(reader, "theInt");
assertSame("Second request to cache return same array", ints, cache.getInts(reader, "theInt"));
assertSame("Second request with explicit parser return same array", ints, cache.getInts(reader, "theInt", FieldCache.DEFAULT_INT_PARSER));
assertTrue("ints Size: " + ints.length + " is not: " + NUM_DOCS, ints.length == NUM_DOCS);
for (int i = 0; i < ints.length; i++) {
assertTrue(ints[i] + " does not equal: " + (Integer.MAX_VALUE - i), ints[i] == (Integer.MAX_VALUE - i));
}
float [] floats = cache.getFloats(reader, "theFloat");
assertSame("Second request to cache return same array", floats, cache.getFloats(reader, "theFloat"));
assertSame("Second request with explicit parser return same array", floats, cache.getFloats(reader, "theFloat", FieldCache.DEFAULT_FLOAT_PARSER));
assertTrue("floats Size: " + floats.length + " is not: " + NUM_DOCS, floats.length == NUM_DOCS);
for (int i = 0; i < floats.length; i++) {
assertTrue(floats[i] + " does not equal: " + (Float.MAX_VALUE - i), floats[i] == (Float.MAX_VALUE - i));
}
}
}

@ -19,12 +19,13 @@ package org.apache.lucene.search;
import java.util.Random;
import org.apache.lucene.analysis.NumericTokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriter.MaxFieldLength;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.NumericUtils;
@ -37,15 +38,6 @@ public class TestNumericRangeQuery32 extends LuceneTestCase {
// number of docs to generate for testing
private static final int noDocs = 10000;
private static Field newField(String name, int precisionStep) {
NumericTokenStream stream = new NumericTokenStream(precisionStep);
stream.setUseNewAPI(true);
Field f=new Field(name, stream);
f.setOmitTermFreqAndPositions(true);
f.setOmitNorms(true);
return f;
}
private static final RAMDirectory directory;
private static final IndexSearcher searcher;
static {
@ -57,34 +49,31 @@ public class TestNumericRangeQuery32 extends LuceneTestCase {
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(),
true, MaxFieldLength.UNLIMITED);
Field
field8 = newField("field8", 8),
field4 = newField("field4", 4),
field2 = newField("field2", 2),
ascfield8 = newField("ascfield8", 8),
ascfield4 = newField("ascfield4", 4),
ascfield2 = newField("ascfield2", 2);
NumericField
field8 = new NumericField("field8", 8, Field.Store.YES, true),
field4 = new NumericField("field4", 4, Field.Store.YES, true),
field2 = new NumericField("field2", 2, Field.Store.YES, true),
ascfield8 = new NumericField("ascfield8", 8, Field.Store.NO, true),
ascfield4 = new NumericField("ascfield4", 4, Field.Store.NO, true),
ascfield2 = new NumericField("ascfield2", 2, Field.Store.NO, true);
Document doc = new Document();
// add fields, that have a distance to test general functionality
doc.add(field8); doc.add(field4); doc.add(field2);
// add ascending fields with a distance of 1, beginning at -noDocs/2 to test the correct splitting of range and inclusive/exclusive
doc.add(ascfield8); doc.add(ascfield4); doc.add(ascfield2);
// Add a series of noDocs docs with increasing int values
for (int l=0; l<noDocs; l++) {
Document doc=new Document();
// add fields, that have a distance to test general functionality
int val=distance*l+startOffset;
doc.add(new Field("value", Integer.toString(val), Field.Store.YES, Field.Index.NO));
((NumericTokenStream)field8.tokenStreamValue()).setIntValue(val);
doc.add(field8);
((NumericTokenStream)field4.tokenStreamValue()).setIntValue(val);
doc.add(field4);
((NumericTokenStream)field2.tokenStreamValue()).setIntValue(val);
doc.add(field2);
// add ascending fields with a distance of 1, beginning at -noDocs/2 to test the correct splitting of range and inclusive/exclusive
field8.setIntValue(val);
field4.setIntValue(val);
field2.setIntValue(val);
val=l-(noDocs/2);
((NumericTokenStream)ascfield8.tokenStreamValue()).setIntValue(val);
doc.add(ascfield8);
((NumericTokenStream)ascfield4.tokenStreamValue()).setIntValue(val);
doc.add(ascfield4);
((NumericTokenStream)ascfield2.tokenStreamValue()).setIntValue(val);
doc.add(ascfield2);
ascfield8.setIntValue(val);
ascfield4.setIntValue(val);
ascfield2.setIntValue(val);
writer.addDocument(doc);
}
@ -136,9 +125,9 @@ public class TestNumericRangeQuery32 extends LuceneTestCase {
assertNotNull(sd);
assertEquals("Score doc count"+type, count, sd.length );
Document doc=searcher.doc(sd[0].doc);
assertEquals("First doc"+type, 2*distance+startOffset, Integer.parseInt(doc.get("value")) );
assertEquals("First doc"+type, 2*distance+startOffset, Integer.parseInt(doc.get(field)) );
doc=searcher.doc(sd[sd.length-1].doc);
assertEquals("Last doc"+type, (1+count)*distance+startOffset, Integer.parseInt(doc.get("value")) );
assertEquals("Last doc"+type, (1+count)*distance+startOffset, Integer.parseInt(doc.get(field)) );
if (i>0) {
assertEquals("Distinct term number is equal for all query types", lastTerms, terms);
}
@ -174,9 +163,9 @@ public class TestNumericRangeQuery32 extends LuceneTestCase {
assertNotNull(sd);
assertEquals("Score doc count", count, sd.length );
Document doc=searcher.doc(sd[0].doc);
assertEquals("First doc", startOffset, Integer.parseInt(doc.get("value")) );
assertEquals("First doc", startOffset, Integer.parseInt(doc.get(field)) );
doc=searcher.doc(sd[sd.length-1].doc);
assertEquals("Last doc", (count-1)*distance+startOffset, Integer.parseInt(doc.get("value")) );
assertEquals("Last doc", (count-1)*distance+startOffset, Integer.parseInt(doc.get(field)) );
}
public void testLeftOpenRange_8bit() throws Exception {
@ -202,9 +191,9 @@ public class TestNumericRangeQuery32 extends LuceneTestCase {
assertNotNull(sd);
assertEquals("Score doc count", noDocs-count, sd.length );
Document doc=searcher.doc(sd[0].doc);
assertEquals("First doc", count*distance+startOffset, Integer.parseInt(doc.get("value")) );
assertEquals("First doc", count*distance+startOffset, Integer.parseInt(doc.get(field)) );
doc=searcher.doc(sd[sd.length-1].doc);
assertEquals("Last doc", (noDocs-1)*distance+startOffset, Integer.parseInt(doc.get("value")) );
assertEquals("Last doc", (noDocs-1)*distance+startOffset, Integer.parseInt(doc.get(field)) );
}
public void testRightOpenRange_8bit() throws Exception {
@ -364,13 +353,13 @@ public class TestNumericRangeQuery32 extends LuceneTestCase {
int a=lower; lower=upper; upper=a;
}
Query tq=NumericRangeQuery.newIntRange(field, precisionStep, new Integer(lower), new Integer(upper), true, true);
TopDocs topDocs = searcher.search(tq, null, noDocs, new Sort(NumericUtils.getIntSortField(field, true)));
TopDocs topDocs = searcher.search(tq, null, noDocs, new Sort(new SortField(field, SortField.INT, true)));
if (topDocs.totalHits==0) continue;
ScoreDoc[] sd = topDocs.scoreDocs;
assertNotNull(sd);
int last=Integer.parseInt(searcher.doc(sd[0].doc).get("value"));
int last=Integer.parseInt(searcher.doc(sd[0].doc).get(field));
for (int j=1; j<sd.length; j++) {
int act=Integer.parseInt(searcher.doc(sd[j].doc).get("value"));
int act=Integer.parseInt(searcher.doc(sd[j].doc).get(field));
assertTrue("Docs should be sorted backwards", last>act );
last=act;
}

@ -19,12 +19,13 @@ package org.apache.lucene.search;
import java.util.Random;
import org.apache.lucene.analysis.NumericTokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriter.MaxFieldLength;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.NumericUtils;
@ -37,15 +38,6 @@ public class TestNumericRangeQuery64 extends LuceneTestCase {
// number of docs to generate for testing
private static final int noDocs = 10000;
private static Field newField(String name, int precisionStep) {
NumericTokenStream stream = new NumericTokenStream(precisionStep);
stream.setUseNewAPI(true);
Field f=new Field(name, stream);
f.setOmitTermFreqAndPositions(true);
f.setOmitNorms(true);
return f;
}
private static final RAMDirectory directory;
private static final IndexSearcher searcher;
static {
@ -57,34 +49,31 @@ public class TestNumericRangeQuery64 extends LuceneTestCase {
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(),
true, MaxFieldLength.UNLIMITED);
Field
field8 = newField("field8", 8),
field4 = newField("field4", 4),
field2 = newField("field2", 2),
ascfield8 = newField("ascfield8", 8),
ascfield4 = newField("ascfield4", 4),
ascfield2 = newField("ascfield2", 2);
NumericField
field8 = new NumericField("field8", 8, Field.Store.YES, true),
field4 = new NumericField("field4", 4, Field.Store.YES, true),
field2 = new NumericField("field2", 2, Field.Store.YES, true),
ascfield8 = new NumericField("ascfield8", 8, Field.Store.NO, true),
ascfield4 = new NumericField("ascfield4", 4, Field.Store.NO, true),
ascfield2 = new NumericField("ascfield2", 2, Field.Store.NO, true);
// Add a series of noDocs docs with increasing long values
Document doc = new Document();
// add fields, that have a distance to test general functionality
doc.add(field8); doc.add(field4); doc.add(field2);
// add ascending fields with a distance of 1, beginning at -noDocs/2 to test the correct splitting of range and inclusive/exclusive
doc.add(ascfield8); doc.add(ascfield4); doc.add(ascfield2);
// Add a series of noDocs docs with increasing long values, by updating the fields
for (int l=0; l<noDocs; l++) {
Document doc=new Document();
// add fields, that have a distance to test general functionality
long val=distance*l+startOffset;
doc.add(new Field("value", Long.toString(val), Field.Store.YES, Field.Index.NO));
((NumericTokenStream)field8.tokenStreamValue()).setLongValue(val);
doc.add(field8);
((NumericTokenStream)field4.tokenStreamValue()).setLongValue(val);
doc.add(field4);
((NumericTokenStream)field2.tokenStreamValue()).setLongValue(val);
doc.add(field2);
// add ascending fields with a distance of 1, beginning at -noDocs/2 to test the correct splitting of range and inclusive/exclusive
field8.setLongValue(val);
field4.setLongValue(val);
field2.setLongValue(val);
val=l-(noDocs/2);
((NumericTokenStream)ascfield8.tokenStreamValue()).setLongValue(val);
doc.add(ascfield8);
((NumericTokenStream)ascfield4.tokenStreamValue()).setLongValue(val);
doc.add(ascfield4);
((NumericTokenStream)ascfield2.tokenStreamValue()).setLongValue(val);
doc.add(ascfield2);
ascfield8.setLongValue(val);
ascfield4.setLongValue(val);
ascfield2.setLongValue(val);
writer.addDocument(doc);
}
@ -136,9 +125,9 @@ public class TestNumericRangeQuery64 extends LuceneTestCase {
assertNotNull(sd);
assertEquals("Score doc count"+type, count, sd.length );
Document doc=searcher.doc(sd[0].doc);
assertEquals("First doc"+type, 2*distance+startOffset, Long.parseLong(doc.get("value")) );
assertEquals("First doc"+type, 2*distance+startOffset, Long.parseLong(doc.get(field)) );
doc=searcher.doc(sd[sd.length-1].doc);
assertEquals("Last doc"+type, (1+count)*distance+startOffset, Long.parseLong(doc.get("value")) );
assertEquals("Last doc"+type, (1+count)*distance+startOffset, Long.parseLong(doc.get(field)) );
if (i>0) {
assertEquals("Distinct term number is equal for all query types", lastTerms, terms);
}
@ -174,9 +163,9 @@ public class TestNumericRangeQuery64 extends LuceneTestCase {
assertNotNull(sd);
assertEquals("Score doc count", count, sd.length );
Document doc=searcher.doc(sd[0].doc);
assertEquals("First doc", startOffset, Long.parseLong(doc.get("value")) );
assertEquals("First doc", startOffset, Long.parseLong(doc.get(field)) );
doc=searcher.doc(sd[sd.length-1].doc);
assertEquals("Last doc", (count-1)*distance+startOffset, Long.parseLong(doc.get("value")) );
assertEquals("Last doc", (count-1)*distance+startOffset, Long.parseLong(doc.get(field)) );
}
public void testLeftOpenRange_8bit() throws Exception {
@@ -202,9 +191,9 @@ public class TestNumericRangeQuery64 extends LuceneTestCase {
assertNotNull(sd);
assertEquals("Score doc count", noDocs-count, sd.length );
Document doc=searcher.doc(sd[0].doc);
assertEquals("First doc", count*distance+startOffset, Long.parseLong(doc.get("value")) );
assertEquals("First doc", count*distance+startOffset, Long.parseLong(doc.get(field)) );
doc=searcher.doc(sd[sd.length-1].doc);
assertEquals("Last doc", (noDocs-1)*distance+startOffset, Long.parseLong(doc.get("value")) );
assertEquals("Last doc", (noDocs-1)*distance+startOffset, Long.parseLong(doc.get(field)) );
}
public void testRightOpenRange_8bit() throws Exception {
@@ -364,13 +353,13 @@ public class TestNumericRangeQuery64 extends LuceneTestCase {
long a=lower; lower=upper; upper=a;
}
Query tq=NumericRangeQuery.newLongRange(field, precisionStep, new Long(lower), new Long(upper), true, true);
TopDocs topDocs = searcher.search(tq, null, noDocs, new Sort(NumericUtils.getLongSortField(field, true)));
TopDocs topDocs = searcher.search(tq, null, noDocs, new Sort(new SortField(field, SortField.LONG, true)));
if (topDocs.totalHits==0) continue;
ScoreDoc[] sd = topDocs.scoreDocs;
assertNotNull(sd);
long last=Long.parseLong(searcher.doc(sd[0].doc).get("value"));
long last=Long.parseLong(searcher.doc(sd[0].doc).get(field));
for (int j=1; j<sd.length; j++) {
long act=Long.parseLong(searcher.doc(sd[j].doc).get("value"));
long act=Long.parseLong(searcher.doc(sd[j].doc).get(field));
assertTrue("Docs should be sorted backwards", last>act );
last=act;
}
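The loop above verifies the reverse numeric sort by requiring each hit's stored value to be strictly smaller than its predecessor (`last > act`). That invariant check can be sketched as plain Java, independent of the searcher:

```java
// Minimal version of the descending-order check from the test: walk the
// values and require each one to be strictly smaller than its predecessor,
// as the test does with assertTrue("Docs should be sorted backwards", last > act).
public class DescendingCheck {
    static boolean isStrictlyDescending(long[] values) {
        long last = values[0];
        for (int j = 1; j < values.length; j++) {
            long act = values[j];
            if (last <= act) return false; // equal or increasing values fail
            last = act;
        }
        return true;
    }

    public static void main(String[] args) {
        if (!isStrictlyDescending(new long[]{30, 20, 10})) throw new AssertionError();
        if (isStrictlyDescending(new long[]{10, 20, 30})) throw new AssertionError();
        System.out.println("ok");
    }
}
```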


@@ -338,14 +338,14 @@ public class TestSort extends LuceneTestCase implements Serializable {
}), SortField.FIELD_DOC });
assertMatches (full, queryA, sort, "JIHGFEDCBA");
sort.setSort (new SortField[] { new SortField ("parser", new ExtendedFieldCache.LongParser(){
sort.setSort (new SortField[] { new SortField ("parser", new FieldCache.LongParser(){
public final long parseLong(final String val) {
return (val.charAt(0)-'A') * 1234567890L;
}
}), SortField.FIELD_DOC });
assertMatches (full, queryA, sort, "JIHGFEDCBA");
sort.setSort (new SortField[] { new SortField ("parser", new ExtendedFieldCache.DoubleParser(){
sort.setSort (new SortField[] { new SortField ("parser", new FieldCache.DoubleParser(){
public final double parseDouble(final String val) {
return Math.pow( val.charAt(0), (val.charAt(0)-'A') );
}
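The hunks above only change the parser's declared type from the deprecated `ExtendedFieldCache` to the merged `FieldCache` interface (LUCENE-1687); the key functions themselves are unchanged. A standalone sketch of those same parser bodies, outside Lucene, showing why they sort the test terms correctly: both keys grow strictly with the term's first character, so a reversed sort over "A".."J" yields the expected order "JIHGFEDCBA" (the class name here is illustrative, not from the test):

```java
// Standalone versions of the parser bodies used in TestSort: each maps a
// term's first character to a numeric sort key.
public class ParserSketch {
    // Mirrors the FieldCache.LongParser in the test.
    static long parseLong(String val) {
        return (val.charAt(0) - 'A') * 1234567890L;
    }

    // Mirrors the FieldCache.DoubleParser in the test.
    static double parseDouble(String val) {
        return Math.pow(val.charAt(0), val.charAt(0) - 'A');
    }

    public static void main(String[] args) {
        // Keys increase strictly from "A" to "J", so sorting by them in
        // reverse reproduces the asserted order "JIHGFEDCBA".
        for (char c = 'A'; c < 'J'; c++) {
            String lo = String.valueOf(c), hi = String.valueOf((char) (c + 1));
            if (parseLong(lo) >= parseLong(hi))
                throw new AssertionError("long keys not increasing at " + c);
            if (parseDouble(lo) >= parseDouble(hi))
                throw new AssertionError("double keys not increasing at " + c);
        }
        System.out.println("ok");
    }
}
```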