mirror of https://github.com/apache/lucene.git
Lucene Change Log

$Id$

1.2 RC6

1. Changed QueryParser.jj to have "?" be a special character which
   allowed it to be used as a wildcard term. Updated TestWildcard
   unit test also. (Ralf Hettesheimer via carlson)

1.2 RC5

1. Renamed build.properties to default.properties and updated
   the BUILD.txt document to describe how to override the
   default.properties settings without having to edit the file. This
   brings the build process closer to Scarab's build process. (jon)

2. Added MultiFieldQueryParser class. (Kelvin Tan, via otis)

3. Updated "powered by" links. (otis)

4. Fixed instructions for setting up JavaCC - Bug #7017 (otis)

5. Added throwing of an exception if FSDirectory could not create a
   directory - Bug #6914 (Eugene Gluzberg via otis)

6. Updated MultiSearcher, MultiFieldParse, Constants, DateFilter,
   LowerCaseTokenizer javadoc (otis)

7. Added fix to avoid NullPointerException in results.jsp
   (Mark Hayes via otis)

8. Changed wildcard search to match 0 or more characters instead of
   1 or more (Lee Mallobone, via otis)

9. Fixed an offset error in GermanStemFilter - Bug #7412
   (Rodrigo Reyes, via otis)

10. Added unit tests for wildcard search and DateFilter (otis)

11. Allow co-existence of indexed and non-indexed fields with the
    same name (cutting/casper, via otis)

12. Added escape character to query parser.
    (briangoetz)

13. Applied a patch that ensures that searches using DateFilter
    don't throw an exception when no matches are found. (David Smiley,
    via otis)

14. Fixed bugs in DateFilter and WildcardQuery unit tests. (cutting,
    otis, carlson)

1.2 RC4

1. Updated contributions section of website.
   Added XML Document #3 implementation to Document Section.
   Also added Term Highlighting to Misc Section. (carlson)

2. Fixed NullPointerException for phrase searches containing
   unindexed terms, introduced in 1.2 RC3. (cutting)

3. Changed document deletion code to obtain the index write lock,
   enforcing the fact that document addition and deletion cannot be
   performed concurrently. (cutting)

4. Various documentation cleanups. (otis, acoliver)

5. Updated "powered by" links. (cutting, jon)

6. Fixed a bug in the GermanStemmer. (Bernhard Messer, via otis)

7. Changed Term and Query to implement Serializable. (scottganyo)

8. Fixed to never delete indexes added with IndexWriter.addIndexes().
   (cutting)

9. Upgraded to JUnit 3.7. (otis)

1.2 RC3

1. IndexWriter: fixed a bug where adding an optimized index to an
   empty index failed. This was encountered when using addIndexes to
   copy a RAMDirectory index to an FSDirectory.

2. RAMDirectory: fixed a bug where RAMInputStream could not read
   across more than a single buffer boundary.

3. Fixed query parser so it accepts queries with unicode characters.
   (briangoetz)

4. Fixed query parser so that PrefixQuery is used in preference to
   WildcardQuery when there is only an asterisk at the end of the
   term. Previously PrefixQuery would never be used.

5. Fixed tests so they compile; fixed ant file so it compiles tests
   properly. Added test cases for Analyzers and PriorityQueue.

6. Updated demos, added Getting Started documentation. (acoliver)

7. Added 'contributions' section to website & docs. (carlson)

8. Removed JavaCC from source distribution for copyright reasons.
   Folks must now download this separately from Metamata in order to
   compile Lucene. (cutting)

9. Substantially improved the performance of DateFilter by adding the
   ability to reuse TermDocs objects. (cutting)

10. Added IndexReader methods:
      public static boolean indexExists(String directory);
      public static boolean indexExists(File directory);
      public static boolean indexExists(Directory directory);
      public static boolean isLocked(Directory directory);
      public static void unlock(Directory directory);
    (cutting, otis)

11. Fixed bugs in GermanAnalyzer. (gschwarz)

1.2 RC2, 19 October 2001:
 - added sources to distribution
 - removed broken build scripts and libraries from distribution
 - SegmentsReader: fixed potential race condition
 - FSDirectory: fixed so that getDirectory(xxx,true) correctly
   erases the directory contents, even when the directory
   has already been accessed in this JVM.
 - RangeQuery: fixed issue where an inclusive range query would
   include the nearest term in the index above a non-existent
   specified upper term.
 - SegmentTermEnum: fixed NullPointerException in clone() method
   when the Term is null.
 - JDK 1.1 compatibility fix: disabled lock files for JDK 1.1,
   since they rely on a feature added in JDK 1.2.

1.2 RC1 (first Apache release), 2 October 2001:
 - packages renamed from com.lucene to org.apache.lucene
 - license switched from LGPL to Apache
 - ant-only build -- no more makefiles
 - addition of lock files -- now fully thread & process safe
 - addition of German stemmer
 - MultiSearcher now supports low-level search API
 - added RangeQuery, for term-range searching
 - Analyzers can choose tokenizer based on field name
 - misc bug fixes.

1.01b (last Sourceforge release), 2 July 2001
 . a few bug fixes
 . new Query Parser
 . new prefix query (search for "foo*" matches "food")

1.0, 2000-10-04

This release fixes a few serious bugs and also includes some
performance optimizations, a stemmer, and a few other minor
enhancements.

0.04 2000-04-19

Lucene now includes a grammar-based tokenizer, StandardTokenizer.

The only tokenizer included in the previous release (LetterTokenizer)
identified terms consisting entirely of alphabetic characters. The
new tokenizer uses a regular-expression grammar to identify more
complex classes of terms, including numbers, acronyms, email
addresses, etc.

StandardTokenizer serves two purposes:

1. It is a much better, general-purpose tokenizer for use by
   applications as is.

   The easiest way for applications to start using
   StandardTokenizer is to use StandardAnalyzer.

2. It provides a good example of grammar-based tokenization.

   If an application has special tokenization requirements, it can
   implement a custom tokenizer by copying the directory containing
   the new tokenizer into the application and modifying it
   accordingly.

0.01, 2000-03-30

First open source release.

The code has been re-organized into a new package and directory
structure for this release. It builds OK, but has not been tested
beyond that since the re-organization.