Merged with trunk up to r1135758
git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/branches/solr2452@1135759 13f79535-47bb-0310-9956-ffa450edef68
@@ -9,6 +9,7 @@
<entry name="?*.brk" />
<entry name="?*.bz2" />
<entry name="?*.csv" />
<entry name="?*.docx"/>
<entry name="?*.dtd" />
<entry name="?*.ftl" />
<entry name="?*.gif" />

@@ -6,6 +6,7 @@
<exclude-output />
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/src/tools/java" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/src/test-framework" isTestSource="true" />
<sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
<excludeFolder url="file://$MODULE_DIR$/build" />

@@ -7,6 +7,7 @@
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
<sourceFolder url="file://$MODULE_DIR$/src/test-files" isTestSource="true" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />

@@ -12,15 +12,15 @@ Changes in backwards compatibility policy
- On upgrading to 3.1, if you do not fully reindex your documents,
  Lucene will emulate the new flex API on top of the old index,
  incurring some performance cost (up to ~10% slowdown, typically).
  Likewise, if you use the deprecated pre-flex APIs on a newly
  created flex index, this emulation will also incur some
  performance loss.
  To prevent this slowdown, use oal.index.IndexUpgrader
  to upgrade your indexes to latest file format (LUCENE-3082).

  Mixed flex/pre-flex indexes are perfectly fine -- the two
  emulation layers (flex API on pre-flex index, and pre-flex API on
  flex index) will remap the access as required. So on upgrading to
  3.1 you can start indexing new documents into an existing index.
  But for best performance you should fully reindex.
  To get optimal performance, use oal.index.IndexUpgrader
  to upgrade your indexes to latest file format (LUCENE-3082).

- The postings APIs (TermEnum, TermDocsEnum, TermPositionsEnum)
  have been removed in favor of the new flexible

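A minimal sketch of the IndexUpgrader usage mentioned in the entry above; this assumes the (Directory, Version) constructor and the upgrade() method introduced by LUCENE-3082, and the class and argument names below are illustrative only:

    import java.io.File;
    import org.apache.lucene.index.IndexUpgrader;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class UpgradeIndexExample {
      public static void main(String[] args) throws Exception {
        // args[0] is the path of an existing index in an older format.
        Directory dir = FSDirectory.open(new File(args[0]));
        // Rewrites every segment to the current file format in place,
        // so the flex emulation layer is no longer needed at search time.
        new IndexUpgrader(dir, Version.LUCENE_CURRENT).upgrade();
        dir.close();
      }
    }
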
@@ -482,6 +482,19 @@ Changes in runtime behavior

* LUCENE-3146: IndexReader.setNorm throws IllegalStateException if the field
  does not store norms. (Shai Erera, Mike McCandless)

* LUCENE-3198: On Linux, if the JRE is 64 bit and supports unmapping,
  FSDirectory.open now defaults to MMapDirectory instead of
  NIOFSDirectory since MMapDirectory gives better performance. (Mike
  McCandless)

* LUCENE-3200: MMapDirectory now uses chunk sizes that are powers of 2.
  When setting the chunk size, it is rounded down to the next possible
  value. The new default value for 64 bit platforms is 2^30 (1 GiB),
  for 32 bit platforms it stays unchanged at 2^28 (256 MiB).
  Internally, MMapDirectory now only uses one dedicated final IndexInput
  implementation supporting multiple chunks, which makes Hotspot's life
  easier. (Uwe Schindler, Robert Muir, Mike McCandless)

Bug fixes

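A small sketch combining the two directory entries above; the setMaxChunkSize setter is an assumption here, and the rounding down to a power of two happens inside MMapDirectory per LUCENE-3200:

    import java.io.File;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.MMapDirectory;

    public class MMapChunkExample {
      public static void main(String[] args) throws Exception {
        // On a 64-bit Linux JRE that supports unmapping, FSDirectory.open
        // now returns an MMapDirectory (LUCENE-3198).
        FSDirectory dir = FSDirectory.open(new File(args[0]));
        if (dir instanceof MMapDirectory) {
          // Ask for 256 MiB chunks; the value is rounded down to a power of 2.
          ((MMapDirectory) dir).setMaxChunkSize(1 << 28);
        }
        dir.close();
      }
    }
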
@@ -503,6 +516,10 @@ New Features
* LUCENE-3140: Added experimental FST implementation to Lucene.
  (Robert Muir, Dawid Weiss, Mike McCandless)

* LUCENE-3193: A new TwoPhaseCommitTool allows running a 2-phase commit
  algorithm over objects that implement the new TwoPhaseCommit interface (such
  as IndexWriter). (Shai Erera)

Build

* LUCENE-1344: Create OSGi bundle using dev-tools/maven.

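To illustrate the TwoPhaseCommitTool entry above, a minimal sketch of committing two writers as a unit; the exact execute() signature and package are assumptions, not confirmed by this commit, and writer1/writer2 are hypothetical IndexWriter instances:

    // Both writers implement the new TwoPhaseCommit interface (LUCENE-3193).
    // execute() is assumed to call prepareCommit() on every object first and,
    // only if all of those succeed, commit() on each; otherwise it rolls back.
    TwoPhaseCommitTool.execute(writer1, writer2);
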
@@ -1,18 +1,21 @@
* This is a placeholder for 4.1, when 4.0 will be branched *

============== DISABLED =============================================================================

This folder contains the src/ folder of the previous Lucene major version.

The test-backwards ANT task compiles the previous version's tests (bundled) against the
previous released lucene-core.jar file (bundled). After that the compiled test classes
are run against the new lucene-core.jar file, created by ANT before.

After branching a new Lucene major version (branch name "lucene_X_Y") do the following:
After tagging a new Lucene *major* version (tag name "lucene_solr_X_Y_0") do the following
(for minor versions never do this); also always use the x.y.0 version for the backwards folder;
later bugfix releases should not be tested (the reason is that the new version must be backwards
compatible with the last base version; bugfixes should not be taken into account):

* svn rm backwards/src/test
* svn cp https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_X_Y/lucene/src/test backwards/src/test
* Copy the lucene-core.jar from the last release tarball to backwards/lib and delete the old one.
* cd lucene/backwards
* svn rm src/test src/test-framework lib/lucene-core*.jar
* svn commit (1st commit; you must do this, else you will corrupt your checkout)
* svn cp https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_X_Y_0/lucene/src/test-framework src
* svn cp https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_X_Y_0/lucene/src/test src
* Copy the lucene-core.jar from the last release tarball to lib.
* Check that everything is correct: The backwards folder should contain a src/ folder
  that now contains "test". The files should be the ones from the branch.
  that now contains "test" and "test-framework". The files should be the ones from the last version.
* Run "ant test-backwards"
* Commit the stuff again (2nd commit)

@@ -50,7 +50,7 @@
excludes="*-src.jar"
/>
<patternset id="binary.root.dist.patterns"
includes="docs/,CHANGES.txt,LICENSE.txt,NOTICE.txt,README.txt,MIGRATE.txt,JRE_VERSION_MIGRATION.txt,contrib/**/README*,**/CHANGES.txt,contrib/**/*.sh contrib/**/docs/ contrib/xml-query-parser/*.dtd,lib/*.jar,lib/*LICENSE*.txt,lib/*NOTICE*.txt,contrib/*/lib/*.jar,contrib/*/lib/*LICENSE*.txt,contrib/*/lib/*NOTICE*.txt"
includes="CHANGES.txt,LICENSE.txt,NOTICE.txt,README.txt,MIGRATE.txt,JRE_VERSION_MIGRATION.txt,contrib/**/README*,**/CHANGES.txt,contrib/**/*.sh contrib/**/docs/ contrib/xml-query-parser/*.dtd,lib/*.jar,lib/*LICENSE*.txt,lib/*NOTICE*.txt,contrib/*/lib/*.jar,contrib/*/lib/*LICENSE*.txt,contrib/*/lib/*NOTICE*.txt"
/>

@@ -147,8 +147,11 @@
<!-- ================================================================== -->
<!-- -->
<!-- ================================================================== -->
<target name="docs" description="Build the website">
<echo>DEPRECATED - Doing Nothing. See http://wiki.apache.org/lucene-java/HowToUpdateTheWebsite</echo>
<target name="docs">
<!-- copies the docs over to the docs folder -->
<copy todir="build/docs">
<fileset dir="src/site/build/site"/>
</copy>
</target>

<target name="javadoc" depends="javadocs"/>

@@ -260,7 +263,7 @@
<!-- ================================================================== -->
<!-- -->
<!-- ================================================================== -->
<target name="package" depends="jar-core, jar-test-framework, javadocs, build-contrib, init-dist, changes-to-html"/>
<target name="package" depends="jar-core, jar-test-framework, docs, javadocs, build-contrib, init-dist, changes-to-html"/>

<target name="nightly" depends="test, package-tgz">
</target>

@@ -636,11 +639,12 @@
<attribute name="changes.target.dir" default="${changes.target.dir}"/>
<sequential>
<mkdir dir="@{changes.target.dir}"/>
<exec executable="perl" input="CHANGES.txt" output="@{changes.target.dir}/Changes.html" failonerror="true">
<exec executable="perl" input="CHANGES.txt" output="@{changes.target.dir}/Changes.html"
      failonerror="true" logError="true">
<arg value="@{changes.src.dir}/changes2html.pl"/>
</exec>
<exec executable="perl" input="contrib/CHANGES.txt" output="@{changes.target.dir}/Contrib-Changes.html"
      failonerror="true">
      failonerror="true" logError="true">
<arg value="@{changes.src.dir}/changes2html.pl"/>
</exec>
<copy todir="@{changes.target.dir}">

@@ -70,6 +70,11 @@ New Features
  document sharing the same group was indexed as a doc block
  (IndexWriter.add/updateDocuments). (Mike McCandless)

* LUCENE-2955: Added NRTManager and NRTManagerReopenThread, to
  simplify handling NRT reopen with multiple search threads, and to
  allow an app to control which indexing changes must be visible to
  which search requests. (Mike McCandless)

API Changes

* LUCENE-3141: add getter method to access fragInfos in FieldFragList.

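The LUCENE-2955 entry above is backed by the NRTManagerReopenThread javadoc added later in this commit; a condensed sketch of that usage, with illustrative variable names (writer and doc are assumed to exist):

    // Open your own IndexWriter, then share it through the manager:
    NRTManager manager = new NRTManager(writer);

    // Reopen at most every 5 seconds, or within ~100 msec when a caller waits:
    NRTManagerReopenThread reopenThread = new NRTManagerReopenThread(manager, 5.0, 0.1);
    reopenThread.setDaemon(true);
    reopenThread.start();

    // Indexing calls return a generation; a searcher obtained for that
    // generation is guaranteed to reflect the change:
    long gen = manager.updateDocument(new Term("docid", "42"), doc);
    IndexSearcher searcher = manager.get(gen);
    try {
      // run searches...
    } finally {
      manager.release(searcher);
    }

    // On shutdown, close the reopen thread and the manager, then the writer:
    reopenThread.close();
    manager.close();
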
@@ -81,9 +86,13 @@ API Changes

Bug Fixes

* LUCENE-3185: Fix bug in NRTCachingDirectory.deleteFile that would
  always throw exception and sometimes fail to actually delete the
  file. (Mike McCandless)
* LUCENE-3185: Fix bug in NRTCachingDirectory.deleteFile that would
  always throw exception and sometimes fail to actually delete the
  file. (Mike McCandless)

* LUCENE-3188: contrib/misc IndexSplitter creates indexes with incorrect
  SegmentInfos.counter; added CheckIndex check & fix for this problem.
  (Ivan Dimitrov Vasilev via Steve Rowe)

Build

@@ -147,6 +147,7 @@ public class IndexSplitter {
    destDir.mkdirs();
    FSDirectory destFSDir = FSDirectory.open(destDir);
    SegmentInfos destInfos = new SegmentInfos(codecs);
    destInfos.counter = infos.counter;
    for (String n : segs) {
      SegmentInfo info = getInfo(n);
      destInfos.add(info);

@@ -0,0 +1,358 @@
|
|||
package org.apache.lucene.index;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
import java.io.Closeable;
|
||||
import java.io.IOException;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.atomic.AtomicLong;
|
||||
import java.util.concurrent.CopyOnWriteArrayList;
|
||||
import java.util.List;
|
||||
|
||||
import org.apache.lucene.analysis.Analyzer;
|
||||
import org.apache.lucene.index.IndexReader; // javadocs
|
||||
import org.apache.lucene.document.Document;
|
||||
import org.apache.lucene.search.IndexSearcher;
|
||||
import org.apache.lucene.search.Query;
|
||||
import org.apache.lucene.util.ThreadInterruptedException;
|
||||
|
||||
// TODO
|
||||
// - we could make this work also w/ "normal" reopen/commit?
|
||||
|
||||
/**
|
||||
* Utility class to manage sharing near-real-time searchers
|
||||
* across multiple searching threads.
|
||||
*
|
||||
* <p>NOTE: to use this class, you must call reopen
|
||||
* periodically. The {@link NRTManagerReopenThread} is a
|
||||
* simple class to do this on a periodic basis. If you
|
||||
* implement your own reopener, be sure to call {@link
|
||||
* #addWaitingListener} so your reopener is notified when a
|
||||
* caller is waiting for a specific generation searcher. </p>
|
||||
*
|
||||
* @lucene.experimental
|
||||
*/
|
||||
|
||||
public class NRTManager implements Closeable {
|
||||
private final IndexWriter writer;
|
||||
private final ExecutorService es;
|
||||
private final AtomicLong indexingGen;
|
||||
private final AtomicLong searchingGen;
|
||||
private final AtomicLong noDeletesSearchingGen;
|
||||
private final List<WaitingListener> waitingListeners = new CopyOnWriteArrayList<WaitingListener>();
|
||||
|
||||
private volatile IndexSearcher currentSearcher;
|
||||
private volatile IndexSearcher noDeletesCurrentSearcher;
|
||||
|
||||
/**
|
||||
* Create new NRTManager. Note that this installs a
|
||||
* merged segment warmer on the provided IndexWriter's
|
||||
* config.
|
||||
*
|
||||
* @param writer IndexWriter to open near-real-time
|
||||
* readers
|
||||
*/
|
||||
public NRTManager(IndexWriter writer) throws IOException {
|
||||
this(writer, null);
|
||||
}
|
||||
|
||||
/**
|
||||
* Create new NRTManager. Note that this installs a
|
||||
* merged segment warmer on the provided IndexWriter's
|
||||
* config.
|
||||
*
|
||||
* @param writer IndexWriter to open near-real-time
|
||||
* readers
|
||||
* @param es ExecutorService to pass to the IndexSearcher
|
||||
*/
|
||||
public NRTManager(IndexWriter writer, ExecutorService es) throws IOException {
|
||||
|
||||
this.writer = writer;
|
||||
this.es = es;
|
||||
indexingGen = new AtomicLong(1);
|
||||
searchingGen = new AtomicLong(-1);
|
||||
noDeletesSearchingGen = new AtomicLong(-1);
|
||||
|
||||
// Create initial reader:
|
||||
swapSearcher(new IndexSearcher(IndexReader.open(writer, true), es), 0, true);
|
||||
|
||||
writer.getConfig().setMergedSegmentWarmer(
|
||||
new IndexWriter.IndexReaderWarmer() {
|
||||
@Override
|
||||
public void warm(IndexReader reader) throws IOException {
|
||||
NRTManager.this.warm(reader);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/** NRTManager invokes this interface to notify it when a
|
||||
* caller is waiting for a specific generation searcher
|
||||
* to be visible. */
|
||||
public static interface WaitingListener {
|
||||
public void waiting(boolean requiresDeletes, long targetGen);
|
||||
}
|
||||
|
||||
/** Adds a listener, to be notified when a caller is
|
||||
* waiting for a specific generation searcher to be
|
||||
* visible. */
|
||||
public void addWaitingListener(WaitingListener l) {
|
||||
waitingListeners.add(l);
|
||||
}
|
||||
|
||||
/** Remove a listener added with {@link
|
||||
* #addWaitingListener}. */
|
||||
public void removeWaitingListener(WaitingListener l) {
|
||||
waitingListeners.remove(l);
|
||||
}
|
||||
|
||||
public long updateDocument(Term t, Document d, Analyzer a) throws IOException {
|
||||
writer.updateDocument(t, d, a);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long updateDocument(Term t, Document d) throws IOException {
|
||||
writer.updateDocument(t, d);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long updateDocuments(Term t, Iterable<Document> docs, Analyzer a) throws IOException {
|
||||
writer.updateDocuments(t, docs, a);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long updateDocuments(Term t, Iterable<Document> docs) throws IOException {
|
||||
writer.updateDocuments(t, docs);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long deleteDocuments(Term t) throws IOException {
|
||||
writer.deleteDocuments(t);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long deleteDocuments(Query q) throws IOException {
|
||||
writer.deleteDocuments(q);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long addDocument(Document d, Analyzer a) throws IOException {
|
||||
writer.addDocument(d, a);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long addDocuments(Iterable<Document> docs, Analyzer a) throws IOException {
|
||||
writer.addDocuments(docs, a);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long addDocument(Document d) throws IOException {
|
||||
writer.addDocument(d);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
public long addDocuments(Iterable<Document> docs) throws IOException {
|
||||
writer.addDocuments(docs);
|
||||
// Return gen as of when indexing finished:
|
||||
return indexingGen.get();
|
||||
}
|
||||
|
||||
/** Returns the most current searcher. If you require a
|
||||
* certain indexing generation be visible in the returned
|
||||
* searcher, call {@link #get(long)}
|
||||
* instead.
|
||||
*/
|
||||
public synchronized IndexSearcher get() {
|
||||
return get(true);
|
||||
}
|
||||
|
||||
/** Just like {@link #get}, but by passing <code>false</code> for
|
||||
* requireDeletes, you can get faster reopen time, but
|
||||
* the returned reader is allowed to not reflect all
|
||||
* deletions. See {@link IndexReader#open(IndexWriter,boolean)} */
|
||||
public synchronized IndexSearcher get(boolean requireDeletes) {
|
||||
final IndexSearcher s;
|
||||
if (requireDeletes) {
|
||||
s = currentSearcher;
|
||||
} else if (noDeletesSearchingGen.get() > searchingGen.get()) {
|
||||
s = noDeletesCurrentSearcher;
|
||||
} else {
|
||||
s = currentSearcher;
|
||||
}
|
||||
s.getIndexReader().incRef();
|
||||
return s;
|
||||
}
|
||||
|
||||
/** Call this if you require a searcher reflecting all
|
||||
* changes as of the target generation.
|
||||
*
|
||||
* @param targetGen Returned searcher must reflect changes
|
||||
* as of this generation
|
||||
*/
|
||||
public synchronized IndexSearcher get(long targetGen) {
|
||||
return get(targetGen, true);
|
||||
}
|
||||
|
||||
/** Call this if you require a searcher reflecting all
|
||||
* changes as of the target generation, and you don't
|
||||
* require deletions to be reflected. Note that the
|
||||
* returned searcher may still reflect some or all
|
||||
* deletions.
|
||||
*
|
||||
* @param targetGen Returned searcher must reflect changes
|
||||
* as of this generation
|
||||
*
|
||||
* @param requireDeletes If true, the returned searcher must
|
||||
* reflect all deletions. This can be substantially more
|
||||
* costly than not applying deletes. Note that if you
|
||||
* pass false, it's still possible that some or all
|
||||
* deletes may have been applied.
|
||||
**/
|
||||
public synchronized IndexSearcher get(long targetGen, boolean requireDeletes) {
|
||||
|
||||
assert noDeletesSearchingGen.get() >= searchingGen.get();
|
||||
|
||||
if (targetGen > getCurrentSearchingGen(requireDeletes)) {
|
||||
// Must wait
|
||||
//final long t0 = System.nanoTime();
|
||||
for(WaitingListener listener : waitingListeners) {
|
||||
listener.waiting(requireDeletes, targetGen);
|
||||
}
|
||||
while (targetGen > getCurrentSearchingGen(requireDeletes)) {
|
||||
//System.out.println(Thread.currentThread().getName() + ": wait fresh searcher targetGen=" + targetGen + " vs searchingGen=" + getCurrentSearchingGen(requireDeletes) + " requireDeletes=" + requireDeletes);
|
||||
try {
|
||||
wait();
|
||||
} catch (InterruptedException ie) {
|
||||
throw new ThreadInterruptedException(ie);
|
||||
}
|
||||
}
|
||||
//final long waitNS = System.nanoTime()-t0;
|
||||
//System.out.println(Thread.currentThread().getName() + ": done wait fresh searcher targetGen=" + targetGen + " vs searchingGen=" + getCurrentSearchingGen(requireDeletes) + " requireDeletes=" + requireDeletes + " WAIT msec=" + (waitNS/1000000.0));
|
||||
}
|
||||
|
||||
return get(requireDeletes);
|
||||
}
|
||||
|
||||
/** Returns generation of current searcher. */
|
||||
public long getCurrentSearchingGen(boolean requiresDeletes) {
|
||||
return requiresDeletes ? searchingGen.get() : noDeletesSearchingGen.get();
|
||||
}
|
||||
|
||||
/** Release the searcher obtained from {@link
|
||||
* #get()} or {@link #get(long)}. */
|
||||
public void release(IndexSearcher s) throws IOException {
|
||||
s.getIndexReader().decRef();
|
||||
}
|
||||
|
||||
/** Call this when you need the NRT reader to reopen.
|
||||
*
|
||||
* @param applyDeletes If true, the newly opened reader
|
||||
* will reflect all deletes
|
||||
*/
|
||||
public boolean reopen(boolean applyDeletes) throws IOException {
|
||||
|
||||
// Mark gen as of when reopen started:
|
||||
final long newSearcherGen = indexingGen.getAndIncrement();
|
||||
|
||||
if (applyDeletes && currentSearcher.getIndexReader().isCurrent()) {
|
||||
//System.out.println("reopen: skip: isCurrent both force gen=" + newSearcherGen + " vs current gen=" + searchingGen);
|
||||
searchingGen.set(newSearcherGen);
|
||||
noDeletesSearchingGen.set(newSearcherGen);
|
||||
synchronized(this) {
|
||||
notifyAll();
|
||||
}
|
||||
//System.out.println("reopen: skip: return");
|
||||
return false;
|
||||
} else if (!applyDeletes && noDeletesCurrentSearcher.getIndexReader().isCurrent()) {
|
||||
//System.out.println("reopen: skip: isCurrent force gen=" + newSearcherGen + " vs current gen=" + noDeletesSearchingGen);
|
||||
noDeletesSearchingGen.set(newSearcherGen);
|
||||
synchronized(this) {
|
||||
notifyAll();
|
||||
}
|
||||
//System.out.println("reopen: skip: return");
|
||||
return false;
|
||||
}
|
||||
|
||||
//System.out.println("indexingGen now " + indexingGen);
|
||||
|
||||
// .reopen() returns a new reference:
|
||||
|
||||
// Start from whichever searcher is most current:
|
||||
final IndexSearcher startSearcher = noDeletesSearchingGen.get() > searchingGen.get() ? noDeletesCurrentSearcher : currentSearcher;
|
||||
final IndexReader nextReader = startSearcher.getIndexReader().reopen(writer, applyDeletes);
|
||||
|
||||
warm(nextReader);
|
||||
|
||||
// Transfer reference to swapSearcher:
|
||||
swapSearcher(new IndexSearcher(nextReader, es),
|
||||
newSearcherGen,
|
||||
applyDeletes);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/** Override this to warm the newly opened reader before
|
||||
* it's swapped in. Note that this is called both for
|
||||
* newly merged segments and for new top-level readers
|
||||
* opened by #reopen. */
|
||||
protected void warm(IndexReader reader) throws IOException {
|
||||
}
|
||||
|
||||
// Steals a reference from newSearcher:
|
||||
private synchronized void swapSearcher(IndexSearcher newSearcher, long newSearchingGen, boolean applyDeletes) throws IOException {
|
||||
//System.out.println(Thread.currentThread().getName() + ": swap searcher gen=" + newSearchingGen + " applyDeletes=" + applyDeletes);
|
||||
|
||||
// Always replace noDeletesCurrentSearcher:
|
||||
if (noDeletesCurrentSearcher != null) {
|
||||
noDeletesCurrentSearcher.getIndexReader().decRef();
|
||||
}
|
||||
noDeletesCurrentSearcher = newSearcher;
|
||||
assert newSearchingGen > noDeletesSearchingGen.get(): "newSearchingGen=" + newSearchingGen + " noDeletesSearchingGen=" + noDeletesSearchingGen;
|
||||
noDeletesSearchingGen.set(newSearchingGen);
|
||||
|
||||
if (applyDeletes) {
|
||||
// Deletes were applied, so we also update currentSearcher:
|
||||
if (currentSearcher != null) {
|
||||
currentSearcher.getIndexReader().decRef();
|
||||
}
|
||||
currentSearcher = newSearcher;
|
||||
if (newSearcher != null) {
|
||||
newSearcher.getIndexReader().incRef();
|
||||
}
|
||||
assert newSearchingGen > searchingGen.get(): "newSearchingGen=" + newSearchingGen + " searchingGen=" + searchingGen;
|
||||
searchingGen.set(newSearchingGen);
|
||||
}
|
||||
|
||||
notifyAll();
|
||||
//System.out.println(Thread.currentThread().getName() + ": done");
|
||||
}
|
||||
|
||||
/** NOTE: caller must separately close the writer. */
|
||||
// @Override -- not until Java 1.6
|
||||
public void close() throws IOException {
|
||||
swapSearcher(null, indexingGen.getAndIncrement(), true);
|
||||
}
|
||||
}
|
|
@@ -0,0 +1,202 @@
|
|||
package org.apache.lucene.index;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
import java.io.Closeable;
|
||||
import java.io.IOException;
|
||||
|
||||
import org.apache.lucene.util.ThreadInterruptedException;
|
||||
|
||||
/**
|
||||
* Utility class that runs a reopen thread to periodically
|
||||
* reopen the NRT searchers in the provided {@link
|
||||
* NRTManager}.
|
||||
*
|
||||
* <p> Typical usage looks like this:
|
||||
*
|
||||
* <pre>
|
||||
* ... open your own writer ...
|
||||
*
|
||||
* NRTManager manager = new NRTManager(writer);
|
||||
*
|
||||
* // Refreshes searcher every 5 seconds when nobody is waiting, and up to 100 msec delay
|
||||
* // when somebody is waiting:
|
||||
* NRTManagerReopenThread reopenThread = new NRTManagerReopenThread(manager, 5.0, 0.1);
|
||||
* reopenThread.setName("NRT Reopen Thread");
|
||||
* reopenThread.setPriority(Math.min(Thread.currentThread().getPriority()+2, Thread.MAX_PRIORITY));
|
||||
* reopenThread.setDaemon(true);
|
||||
* reopenThread.start();
|
||||
* </pre>
|
||||
*
|
||||
* Then, for each incoming query, do this:
|
||||
*
|
||||
* <pre>
|
||||
* // For each incoming query:
|
||||
* IndexSearcher searcher = manager.get();
|
||||
* try {
|
||||
* // Use searcher to search...
|
||||
* } finally {
|
||||
* manager.release(searcher);
|
||||
* }
|
||||
* </pre>
|
||||
*
|
||||
* You should make changes using the <code>NRTManager</code>; if you later need to obtain
|
||||
* a searcher reflecting those changes:
|
||||
*
|
||||
* <pre>
|
||||
* // ... or updateDocument, deleteDocuments, etc:
|
||||
* long gen = manager.addDocument(...);
|
||||
*
|
||||
* // Returned searcher is guaranteed to reflect the just added document
|
||||
* IndexSearcher searcher = manager.get(gen);
|
||||
* try {
|
||||
* // Use searcher to search...
|
||||
* } finally {
|
||||
* manager.release(searcher);
|
||||
* }
|
||||
* </pre>
|
||||
*
|
||||
*
|
||||
* When you are done, be sure to close both the manager and the reopen thread:
|
||||
* <pre>
|
||||
* reopenThread.close();
|
||||
* manager.close();
|
||||
* </pre>
|
||||
*/
|
||||
|
||||
public class NRTManagerReopenThread extends Thread implements NRTManager.WaitingListener, Closeable {
|
||||
private final NRTManager manager;
|
||||
private final long targetMaxStaleNS;
|
||||
private final long targetMinStaleNS;
|
||||
private boolean finish;
|
||||
private boolean waitingNeedsDeletes;
|
||||
private long waitingGen;
|
||||
|
||||
/**
|
||||
* Create NRTManagerReopenThread, to periodically reopen the NRT searcher.
|
||||
*
|
||||
* @param targetMaxStaleSec Maximum time until a new
|
||||
* reader must be opened; this sets the upper bound
|
||||
* on how slowly reopens may occur
|
||||
*
|
||||
* @param targetMinStaleSec Minimum time until a new
|
||||
* reader can be opened; this sets the lower bound
|
||||
* on how quickly reopens may occur, when a caller
|
||||
* is waiting for a specific indexing change to
|
||||
* become visible.
|
||||
*/
|
||||
|
||||
public NRTManagerReopenThread(NRTManager manager, double targetMaxStaleSec, double targetMinStaleSec) {
|
||||
if (targetMaxStaleSec < targetMinStaleSec) {
|
||||
throw new IllegalArgumentException("targetMaxStaleSec (= " + targetMaxStaleSec + ") < targetMinStaleSec (=" + targetMinStaleSec + ")");
|
||||
}
|
||||
this.manager = manager;
|
||||
this.targetMaxStaleNS = (long) (1000000000*targetMaxStaleSec);
|
||||
this.targetMinStaleNS = (long) (1000000000*targetMinStaleSec);
|
||||
manager.addWaitingListener(this);
|
||||
}
|
||||
|
||||
public synchronized void close() {
|
||||
//System.out.println("NRT: set finish");
|
||||
manager.removeWaitingListener(this);
|
||||
this.finish = true;
|
||||
notify();
|
||||
try {
|
||||
join();
|
||||
} catch (InterruptedException ie) {
|
||||
throw new ThreadInterruptedException(ie);
|
||||
}
|
||||
}
|
||||
|
||||
public synchronized void waiting(boolean needsDeletes, long targetGen) {
|
||||
waitingNeedsDeletes |= needsDeletes;
|
||||
waitingGen = Math.max(waitingGen, targetGen);
|
||||
notify();
|
||||
//System.out.println(Thread.currentThread().getName() + ": force wakeup waitingGen=" + waitingGen + " applyDeletes=" + applyDeletes + " waitingNeedsDeletes=" + waitingNeedsDeletes);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void run() {
|
||||
// TODO: maybe use private thread ticktock timer, in
|
||||
// case clock shift messes up nanoTime?
|
||||
long lastReopenStartNS = System.nanoTime();
|
||||
|
||||
//System.out.println("reopen: start");
|
||||
try {
|
||||
while (true) {
|
||||
|
||||
final boolean doApplyDeletes;
|
||||
|
||||
boolean hasWaiting = false;
|
||||
|
||||
synchronized(this) {
|
||||
// TODO: try to guestimate how long reopen might
|
||||
// take based on past data?
|
||||
|
||||
while (!finish) {
|
||||
//System.out.println("reopen: cycle");
|
||||
|
||||
// True if we have someone waiting for reopen'd searcher:
|
||||
hasWaiting = waitingGen > manager.getCurrentSearchingGen(waitingNeedsDeletes);
|
||||
final long nextReopenStartNS = lastReopenStartNS + (hasWaiting ? targetMinStaleNS : targetMaxStaleNS);
|
||||
|
||||
final long sleepNS = nextReopenStartNS - System.nanoTime();
|
||||
|
||||
if (sleepNS > 0) {
|
||||
//System.out.println("reopen: sleep " + (sleepNS/1000000.0) + " ms (hasWaiting=" + hasWaiting + ")");
|
||||
try {
|
||||
wait(sleepNS/1000000, (int) (sleepNS%1000000));
|
||||
} catch (InterruptedException ie) {
|
||||
Thread.currentThread().interrupt();
|
||||
//System.out.println("NRT: set finish on interrupt");
|
||||
finish = true;
|
||||
break;
|
||||
}
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (finish) {
|
||||
//System.out.println("reopen: finish");
|
||||
return;
|
||||
}
|
||||
|
||||
doApplyDeletes = hasWaiting ? waitingNeedsDeletes : true;
|
||||
waitingNeedsDeletes = false;
|
||||
//System.out.println("reopen: start hasWaiting=" + hasWaiting);
|
||||
}
|
||||
|
||||
lastReopenStartNS = System.nanoTime();
|
||||
try {
|
||||
//final long t0 = System.nanoTime();
|
||||
manager.reopen(doApplyDeletes);
|
||||
//System.out.println("reopen took " + ((System.nanoTime()-t0)/1000000.0) + " msec");
|
||||
} catch (IOException ioe) {
|
||||
//System.out.println(Thread.currentThread().getName() + ": IOE");
|
||||
//ioe.printStackTrace();
|
||||
throw new RuntimeException(ioe);
|
||||
}
|
||||
}
|
||||
} catch (Throwable t) {
|
||||
//System.out.println("REOPEN EXC");
|
||||
//t.printStackTrace(System.out);
|
||||
throw new RuntimeException(t);
|
||||
}
|
||||
}
|
||||
}
|
|
@@ -20,6 +20,7 @@ import java.io.File;

import org.apache.lucene.analysis.MockAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.LuceneTestCase;

@@ -91,4 +92,64 @@ public class TestIndexSplitter extends LuceneTestCase {
    r.close();
    fsDir.close();
  }

  public void testDeleteThenOptimize() throws Exception {
    // Create directories where the indexes will reside
    File indexPath = new File(TEMP_DIR, "testfilesplitter");
    _TestUtil.rmDir(indexPath);
    indexPath.mkdirs();
    File indexSplitPath = new File(TEMP_DIR, "testfilesplitterdest");
    _TestUtil.rmDir(indexSplitPath);
    indexSplitPath.mkdirs();

    // Create the original index
    LogMergePolicy mergePolicy = new LogByteSizeMergePolicy();
    mergePolicy.setNoCFSRatio(1);
    IndexWriterConfig iwConfig
      = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random))
        .setOpenMode(OpenMode.CREATE)
        .setMergePolicy(mergePolicy);
    Directory fsDir = newFSDirectory(indexPath);
    IndexWriter indexWriter = new IndexWriter(fsDir, iwConfig);
    Document doc = new Document();
    doc.add(new Field("content", "doc 1", Field.Store.YES, Field.Index.ANALYZED_NO_NORMS));
    indexWriter.addDocument(doc);
    doc = new Document();
    doc.add(new Field("content", "doc 2", Field.Store.YES, Field.Index.ANALYZED_NO_NORMS));
    indexWriter.addDocument(doc);
    indexWriter.close();
    fsDir.close();

    // Create the split index
    IndexSplitter indexSplitter = new IndexSplitter(indexPath);
    String splitSegName = indexSplitter.infos.info(0).name;
    indexSplitter.split(indexSplitPath, new String[] {splitSegName});

    // Delete the first document in the split index
    Directory fsDirDest = newFSDirectory(indexSplitPath);
    IndexReader indexReader = IndexReader.open(fsDirDest, false);
    indexReader.deleteDocument(0);
    assertEquals(1, indexReader.numDocs());
    indexReader.close();
    fsDirDest.close();

    // Optimize the split index
    mergePolicy = new LogByteSizeMergePolicy();
    mergePolicy.setNoCFSRatio(1);
    iwConfig = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random))
      .setOpenMode(OpenMode.APPEND)
      .setMergePolicy(mergePolicy);
    fsDirDest = newFSDirectory(indexSplitPath);
    indexWriter = new IndexWriter(fsDirDest, iwConfig);
    indexWriter.optimize();
    indexWriter.close();
    fsDirDest.close();

    // Read the number of docs in the index
    fsDirDest = newFSDirectory(indexSplitPath);
    indexReader = IndexReader.open(fsDirDest);
    assertEquals(1, indexReader.numDocs());
    indexReader.close();
    fsDirDest.close();
  }
}

@@ -0,0 +1,686 @@
|
|||
package org.apache.lucene.index;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
import java.io.File;
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Executors;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
import java.util.concurrent.atomic.AtomicBoolean;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
|
||||
import org.apache.lucene.analysis.MockAnalyzer;
|
||||
import org.apache.lucene.document.Document;
|
||||
import org.apache.lucene.document.Field;
|
||||
import org.apache.lucene.document.Fieldable;
|
||||
import org.apache.lucene.index.codecs.CodecProvider;
|
||||
import org.apache.lucene.search.IndexSearcher;
|
||||
import org.apache.lucene.search.PhraseQuery;
|
||||
import org.apache.lucene.search.Query;
|
||||
import org.apache.lucene.search.ScoreDoc;
|
||||
import org.apache.lucene.search.Sort;
|
||||
import org.apache.lucene.search.SortField;
|
||||
import org.apache.lucene.search.TermQuery;
|
||||
import org.apache.lucene.search.TopDocs;
|
||||
import org.apache.lucene.store.Directory;
|
||||
import org.apache.lucene.store.MockDirectoryWrapper;
|
||||
import org.apache.lucene.store.NRTCachingDirectory;
|
||||
import org.apache.lucene.util.Bits;
|
||||
import org.apache.lucene.util.BytesRef;
|
||||
import org.apache.lucene.util.LineFileDocs;
|
||||
import org.apache.lucene.util.LuceneTestCase;
|
||||
import org.apache.lucene.util.NamedThreadFactory;
|
||||
import org.apache.lucene.util._TestUtil;
|
||||
import org.junit.Test;
|
||||
|
||||
// TODO
|
||||
// - mix in optimize, addIndexes
|
||||
// - randomly mix in non-congruent docs
|
||||
|
||||
// NOTE: This is a copy of TestNRTThreads, but swapping in
|
||||
// NRTManager for adding/updating/searching
|
||||
|
||||
public class TestNRTManager extends LuceneTestCase {
|
||||
|
||||
private static class SubDocs {
|
||||
public final String packID;
|
||||
public final List<String> subIDs;
|
||||
public boolean deleted;
|
||||
|
||||
public SubDocs(String packID, List<String> subIDs) {
|
||||
this.packID = packID;
|
||||
this.subIDs = subIDs;
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: is there a pre-existing way to do this!!!
|
||||
private Document cloneDoc(Document doc1) {
|
||||
final Document doc2 = new Document();
|
||||
for(Fieldable f : doc1.getFields()) {
|
||||
Field field1 = (Field) f;
|
||||
|
||||
Field field2 = new Field(field1.name(),
|
||||
field1.stringValue(),
|
||||
field1.isStored() ? Field.Store.YES : Field.Store.NO,
|
||||
field1.isIndexed() ? (field1.isTokenized() ? Field.Index.ANALYZED : Field.Index.NOT_ANALYZED) : Field.Index.NO);
|
||||
if (field1.getOmitNorms()) {
|
||||
field2.setOmitNorms(true);
|
||||
}
|
||||
if (field1.getOmitTermFreqAndPositions()) {
|
||||
field2.setOmitTermFreqAndPositions(true);
|
||||
}
|
||||
doc2.add(field2);
|
||||
}
|
||||
|
||||
return doc2;
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testNRTManager() throws Exception {
|
||||
|
||||
final long t0 = System.currentTimeMillis();
|
||||
|
||||
if (CodecProvider.getDefault().getDefaultFieldCodec().equals("SimpleText")) {
|
||||
// no
|
||||
CodecProvider.getDefault().setDefaultFieldCodec("Standard");
|
||||
}
|
||||
|
||||
final LineFileDocs docs = new LineFileDocs(random);
|
||||
final File tempDir = _TestUtil.getTempDir("nrtopenfiles");
|
||||
final MockDirectoryWrapper _dir = newFSDirectory(tempDir);
|
||||
_dir.setCheckIndexOnClose(false); // don't double-checkIndex, we do it ourselves
|
||||
Directory dir = _dir;
|
||||
final IndexWriterConfig conf = newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setOpenMode(IndexWriterConfig.OpenMode.CREATE);
|
||||
|
||||
if (LuceneTestCase.TEST_NIGHTLY) {
|
||||
// newIWConfig makes smallish max seg size, which
|
||||
// results in tons and tons of segments for this test
|
||||
// when run nightly:
|
||||
MergePolicy mp = conf.getMergePolicy();
|
||||
if (mp instanceof TieredMergePolicy) {
|
||||
((TieredMergePolicy) mp).setMaxMergedSegmentMB(5000.);
|
||||
} else if (mp instanceof LogByteSizeMergePolicy) {
|
||||
((LogByteSizeMergePolicy) mp).setMaxMergeMB(1000.);
|
||||
} else if (mp instanceof LogMergePolicy) {
|
||||
((LogMergePolicy) mp).setMaxMergeDocs(100000);
|
||||
}
|
||||
}
|
||||
|
||||
conf.setMergedSegmentWarmer(new IndexWriter.IndexReaderWarmer() {
|
||||
@Override
|
||||
public void warm(IndexReader reader) throws IOException {
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: now warm merged reader=" + reader);
|
||||
}
|
||||
final int maxDoc = reader.maxDoc();
|
||||
final Bits delDocs = reader.getDeletedDocs();
|
||||
int sum = 0;
|
||||
final int inc = Math.max(1, maxDoc/50);
|
||||
for(int docID=0;docID<maxDoc;docID += inc) {
|
||||
if (delDocs == null || !delDocs.get(docID)) {
|
||||
final Document doc = reader.document(docID);
|
||||
sum += doc.getFields().size();
|
||||
}
|
||||
}
|
||||
|
||||
IndexSearcher searcher = newSearcher(reader);
|
||||
sum += searcher.search(new TermQuery(new Term("body", "united")), 10).totalHits;
|
||||
searcher.close();
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: warm visited " + sum + " fields");
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
if (random.nextBoolean()) {
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: wrap NRTCachingDir");
|
||||
}
|
||||
|
||||
NRTCachingDirectory nrtDir = new NRTCachingDirectory(dir, 5.0, 60.0);
|
||||
conf.setMergeScheduler(nrtDir.getMergeScheduler());
|
||||
dir = nrtDir;
|
||||
}
|
||||
|
||||
final IndexWriter writer = new IndexWriter(dir, conf);
|
||||
|
||||
if (VERBOSE) {
|
||||
writer.setInfoStream(System.out);
|
||||
}
|
||||
_TestUtil.reduceOpenFiles(writer);
|
||||
//System.out.println("TEST: conf=" + writer.getConfig());
|
||||
|
||||
final ExecutorService es = random.nextBoolean() ? null : Executors.newCachedThreadPool(new NamedThreadFactory("NRT search threads"));
|
||||
|
||||
final double minReopenSec = 0.01 + 0.05 * random.nextDouble();
|
||||
final double maxReopenSec = minReopenSec * (1.0 + 10 * random.nextDouble());
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: make NRTManager maxReopenSec=" + maxReopenSec + " minReopenSec=" + minReopenSec);
|
||||
}
|
||||
|
||||
final NRTManager nrt = new NRTManager(writer, es);
|
||||
final NRTManagerReopenThread nrtThread = new NRTManagerReopenThread(nrt, maxReopenSec, minReopenSec);
|
||||
nrtThread.setName("NRT Reopen Thread");
|
||||
nrtThread.setPriority(Math.min(Thread.currentThread().getPriority()+2, Thread.MAX_PRIORITY));
|
||||
nrtThread.setDaemon(true);
|
||||
nrtThread.start();
|
||||
|
||||
final int NUM_INDEX_THREADS = _TestUtil.nextInt(random, 1, 3);
|
||||
final int NUM_SEARCH_THREADS = _TestUtil.nextInt(random, 1, 3);
|
||||
//final int NUM_INDEX_THREADS = 1;
|
||||
//final int NUM_SEARCH_THREADS = 1;
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: " + NUM_INDEX_THREADS + " index threads; " + NUM_SEARCH_THREADS + " search threads");
|
||||
}
|
||||
|
||||
final int RUN_TIME_SEC = LuceneTestCase.TEST_NIGHTLY ? 300 : RANDOM_MULTIPLIER;
|
||||
|
||||
final AtomicBoolean failed = new AtomicBoolean();
|
||||
final AtomicInteger addCount = new AtomicInteger();
|
||||
final AtomicInteger delCount = new AtomicInteger();
|
||||
final AtomicInteger packCount = new AtomicInteger();
|
||||
final List<Long> lastGens = new ArrayList<Long>();
|
||||
|
||||
final Set<String> delIDs = Collections.synchronizedSet(new HashSet<String>());
|
||||
final List<SubDocs> allSubDocs = Collections.synchronizedList(new ArrayList<SubDocs>());
|
||||
|
||||
final long stopTime = System.currentTimeMillis() + RUN_TIME_SEC*1000;
|
||||
Thread[] threads = new Thread[NUM_INDEX_THREADS];
|
||||
for(int thread=0;thread<NUM_INDEX_THREADS;thread++) {
|
||||
threads[thread] = new Thread() {
|
||||
@Override
|
||||
public void run() {
|
||||
// TODO: would be better if this were cross thread, so that we make sure one thread deleting anothers added docs works:
|
||||
final List<String> toDeleteIDs = new ArrayList<String>();
|
||||
final List<SubDocs> toDeleteSubDocs = new ArrayList<SubDocs>();
|
||||
|
||||
long gen = 0;
|
||||
while(System.currentTimeMillis() < stopTime && !failed.get()) {
|
||||
|
||||
//System.out.println(Thread.currentThread().getName() + ": cycle");
|
||||
try {
|
||||
// Occasional longish pause if running
|
||||
// nightly
|
||||
if (LuceneTestCase.TEST_NIGHTLY && random.nextInt(6) == 3) {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": now long sleep");
|
||||
}
|
||||
Thread.sleep(_TestUtil.nextInt(random, 50, 500));
|
||||
}
|
||||
|
||||
// Rate limit ingest rate:
|
||||
Thread.sleep(_TestUtil.nextInt(random, 1, 10));
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread() + ": done sleep");
|
||||
}
|
||||
|
||||
Document doc = docs.nextDoc();
|
||||
if (doc == null) {
|
||||
break;
|
||||
}
|
||||
final String addedField;
|
||||
if (random.nextBoolean()) {
|
||||
addedField = "extra" + random.nextInt(10);
|
||||
doc.add(new Field(addedField, "a random field", Field.Store.NO, Field.Index.ANALYZED));
|
||||
} else {
|
||||
addedField = null;
|
||||
}
|
||||
if (random.nextBoolean()) {
|
||||
|
||||
if (random.nextBoolean()) {
|
||||
// Add a pack of adjacent sub-docs
|
||||
final String packID;
|
||||
final SubDocs delSubDocs;
|
||||
if (toDeleteSubDocs.size() > 0 && random.nextBoolean()) {
|
||||
delSubDocs = toDeleteSubDocs.get(random.nextInt(toDeleteSubDocs.size()));
|
||||
assert !delSubDocs.deleted;
|
||||
toDeleteSubDocs.remove(delSubDocs);
|
||||
// reuse prior packID
|
||||
packID = delSubDocs.packID;
|
||||
} else {
|
||||
delSubDocs = null;
|
||||
// make new packID
|
||||
packID = packCount.getAndIncrement() + "";
|
||||
}
|
||||
|
||||
final Field packIDField = newField("packID", packID, Field.Store.YES, Field.Index.NOT_ANALYZED);
|
||||
final List<String> docIDs = new ArrayList<String>();
|
||||
final SubDocs subDocs = new SubDocs(packID, docIDs);
|
||||
final List<Document> docsList = new ArrayList<Document>();
|
||||
|
||||
allSubDocs.add(subDocs);
|
||||
doc.add(packIDField);
|
||||
docsList.add(cloneDoc(doc));
|
||||
docIDs.add(doc.get("docid"));
|
||||
|
||||
final int maxDocCount = _TestUtil.nextInt(random, 1, 10);
|
||||
while(docsList.size() < maxDocCount) {
|
||||
doc = docs.nextDoc();
|
||||
if (doc == null) {
|
||||
break;
|
||||
}
|
||||
docsList.add(cloneDoc(doc));
|
||||
docIDs.add(doc.get("docid"));
|
||||
}
|
||||
addCount.addAndGet(docsList.size());
|
||||
|
||||
if (delSubDocs != null) {
|
||||
delSubDocs.deleted = true;
|
||||
delIDs.addAll(delSubDocs.subIDs);
|
||||
delCount.addAndGet(delSubDocs.subIDs.size());
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: update pack packID=" + delSubDocs.packID + " count=" + docsList.size() + " docs=" + docIDs);
|
||||
}
|
||||
gen = nrt.updateDocuments(new Term("packID", delSubDocs.packID), docsList);
|
||||
/*
|
||||
// non-atomic:
|
||||
nrt.deleteDocuments(new Term("packID", delSubDocs.packID));
|
||||
for(Document subDoc : docsList) {
|
||||
nrt.addDocument(subDoc);
|
||||
}
|
||||
*/
|
||||
} else {
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: add pack packID=" + packID + " count=" + docsList.size() + " docs=" + docIDs);
|
||||
}
|
||||
gen = nrt.addDocuments(docsList);
|
||||
|
||||
/*
|
||||
// non-atomic:
|
||||
for(Document subDoc : docsList) {
|
||||
nrt.addDocument(subDoc);
|
||||
}
|
||||
*/
|
||||
}
|
||||
doc.removeField("packID");
|
||||
|
||||
if (random.nextInt(5) == 2) {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": buffer del id:" + packID);
|
||||
}
|
||||
toDeleteSubDocs.add(subDocs);
|
||||
}
|
||||
|
||||
// randomly verify the add/update "took":
|
||||
if (random.nextInt(20) == 2) {
|
||||
final boolean applyDeletes = delSubDocs != null;
|
||||
final IndexSearcher s = nrt.get(gen, applyDeletes);
|
||||
try {
|
||||
assertEquals(docsList.size(), s.search(new TermQuery(new Term("packID", packID)), 10).totalHits);
|
||||
} finally {
|
||||
nrt.release(s);
|
||||
}
|
||||
}
|
||||
|
||||
} else {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": add doc docid:" + doc.get("docid"));
|
||||
}
|
||||
|
||||
gen = nrt.addDocument(doc);
|
||||
addCount.getAndIncrement();
|
||||
|
||||
// randomly verify the add "took":
|
||||
if (random.nextInt(20) == 2) {
|
||||
//System.out.println(Thread.currentThread().getName() + ": verify");
|
||||
final IndexSearcher s = nrt.get(gen, false);
|
||||
//System.out.println(Thread.currentThread().getName() + ": got s=" + s);
|
||||
try {
|
||||
assertEquals(1, s.search(new TermQuery(new Term("docid", doc.get("docid"))), 10).totalHits);
|
||||
} finally {
|
||||
nrt.release(s);
|
||||
}
|
||||
//System.out.println(Thread.currentThread().getName() + ": done verify");
|
||||
}
|
||||
|
||||
if (random.nextInt(5) == 3) {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": buffer del id:" + doc.get("docid"));
|
||||
}
|
||||
toDeleteIDs.add(doc.get("docid"));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// we use update but it never replaces a
|
||||
// prior doc
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": update doc id:" + doc.get("docid"));
|
||||
}
|
||||
gen = nrt.updateDocument(new Term("docid", doc.get("docid")), doc);
|
||||
addCount.getAndIncrement();
|
||||
|
||||
// randomly verify the add "took":
|
||||
if (random.nextInt(20) == 2) {
|
||||
final IndexSearcher s = nrt.get(gen, true);
|
||||
try {
|
||||
assertEquals(1, s.search(new TermQuery(new Term("docid", doc.get("docid"))), 10).totalHits);
|
||||
} finally {
|
||||
nrt.release(s);
|
||||
}
|
||||
}
|
||||
|
||||
if (random.nextInt(5) == 3) {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": buffer del id:" + doc.get("docid"));
|
||||
}
|
||||
toDeleteIDs.add(doc.get("docid"));
|
||||
}
|
||||
}
|
||||
|
||||
if (random.nextInt(30) == 17) {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": apply " + toDeleteIDs.size() + " deletes");
|
||||
}
|
||||
for(String id : toDeleteIDs) {
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": del term=id:" + id);
|
||||
}
|
||||
gen = nrt.deleteDocuments(new Term("docid", id));
|
||||
|
||||
// randomly verify the delete "took":
|
||||
if (random.nextInt(20) == 7) {
|
||||
final IndexSearcher s = nrt.get(gen, true);
|
||||
try {
|
||||
assertEquals(0, s.search(new TermQuery(new Term("docid", id)), 10).totalHits);
|
||||
} finally {
|
||||
nrt.release(s);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
final int count = delCount.addAndGet(toDeleteIDs.size());
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": tot " + count + " deletes");
|
||||
}
|
||||
delIDs.addAll(toDeleteIDs);
|
||||
toDeleteIDs.clear();
|
||||
|
||||
for(SubDocs subDocs : toDeleteSubDocs) {
|
||||
assertTrue(!subDocs.deleted);
|
||||
gen = nrt.deleteDocuments(new Term("packID", subDocs.packID));
|
||||
subDocs.deleted = true;
|
||||
if (VERBOSE) {
|
||||
System.out.println(" del subs: " + subDocs.subIDs + " packID=" + subDocs.packID);
|
||||
}
|
||||
delIDs.addAll(subDocs.subIDs);
|
||||
delCount.addAndGet(subDocs.subIDs.size());
|
||||
|
||||
// randomly verify the delete "took":
|
||||
if (random.nextInt(20) == 7) {
|
||||
final IndexSearcher s = nrt.get(gen, true);
|
||||
try {
|
||||
assertEquals(0, s.search(new TermQuery(new Term("packID", subDocs.packID)), 1).totalHits);
|
||||
} finally {
|
||||
nrt.release(s);
|
||||
}
|
||||
}
|
||||
}
|
||||
toDeleteSubDocs.clear();
|
||||
}
|
||||
if (addedField != null) {
|
||||
doc.removeField(addedField);
|
||||
}
|
||||
} catch (Throwable t) {
|
||||
System.out.println(Thread.currentThread().getName() + ": FAILED: hit exc");
|
||||
t.printStackTrace();
|
||||
failed.set(true);
|
||||
throw new RuntimeException(t);
|
||||
}
|
||||
}
|
||||
|
||||
lastGens.add(gen);
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": indexing done");
|
||||
}
|
||||
}
|
||||
};
|
||||
threads[thread].setDaemon(true);
|
||||
threads[thread].start();
|
||||
}
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: DONE start indexing threads [" + (System.currentTimeMillis()-t0) + " ms]");
|
||||
}
|
||||
|
||||
// let index build up a bit
|
||||
Thread.sleep(100);
|
||||
|
||||
// silly starting guess:
|
||||
final AtomicInteger totTermCount = new AtomicInteger(100);
|
||||
|
||||
// run search threads
|
||||
final Thread[] searchThreads = new Thread[NUM_SEARCH_THREADS];
|
||||
final AtomicInteger totHits = new AtomicInteger();
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: start search threads");
|
||||
}
|
||||
|
||||
for(int thread=0;thread<NUM_SEARCH_THREADS;thread++) {
|
||||
searchThreads[thread] = new Thread() {
|
||||
@Override
|
||||
public void run() {
|
||||
while(System.currentTimeMillis() < stopTime && !failed.get()) {
|
||||
final IndexSearcher s = nrt.get(random.nextBoolean());
|
||||
try {
|
||||
try {
|
||||
smokeTestSearcher(s);
|
||||
if (s.getIndexReader().numDocs() > 0) {
|
||||
Fields fields = MultiFields.getFields(s.getIndexReader());
|
||||
if (fields == null) {
|
||||
continue;
|
||||
}
|
||||
Terms terms = fields.terms("body");
|
||||
if (terms == null) {
|
||||
continue;
|
||||
}
|
||||
|
||||
TermsEnum termsEnum = terms.iterator();
|
||||
int seenTermCount = 0;
|
||||
int shift;
|
||||
int trigger;
|
||||
if (totTermCount.get() == 0) {
|
||||
shift = 0;
|
||||
trigger = 1;
|
||||
} else {
|
||||
shift = random.nextInt(totTermCount.get()/10);
|
||||
trigger = totTermCount.get()/10;
|
||||
}
|
||||
|
||||
while(System.currentTimeMillis() < stopTime) {
|
||||
BytesRef term = termsEnum.next();
|
||||
if (term == null) {
|
||||
if (seenTermCount == 0) {
|
||||
break;
|
||||
}
|
||||
totTermCount.set(seenTermCount);
|
||||
seenTermCount = 0;
|
||||
if (totTermCount.get() == 0) {
|
||||
shift = 0;
|
||||
trigger = 1;
|
||||
} else {
|
||||
trigger = totTermCount.get()/10;
|
||||
//System.out.println("trigger " + trigger);
|
||||
shift = random.nextInt(totTermCount.get()/10);
|
||||
}
|
||||
termsEnum.seek(new BytesRef(""));
|
||||
continue;
|
||||
}
|
||||
seenTermCount++;
|
||||
// search 10 terms
|
||||
if (trigger == 0) {
|
||||
trigger = 1;
|
||||
}
|
||||
if ((seenTermCount + shift) % trigger == 0) {
|
||||
//if (VERBOSE) {
|
||||
//System.out.println(Thread.currentThread().getName() + " now search body:" + term.utf8ToString());
|
||||
//}
|
||||
totHits.addAndGet(runQuery(s, new TermQuery(new Term("body", term))));
|
||||
}
|
||||
}
|
||||
if (VERBOSE) {
|
||||
System.out.println(Thread.currentThread().getName() + ": search done");
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
nrt.release(s);
|
||||
}
|
||||
} catch (Throwable t) {
|
||||
System.out.println(Thread.currentThread().getName() + ": FAILED: hit exc");
|
||||
failed.set(true);
|
||||
t.printStackTrace(System.out);
|
||||
throw new RuntimeException(t);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
searchThreads[thread].setDaemon(true);
|
||||
searchThreads[thread].start();
|
||||
}
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: now join");
|
||||
}
|
||||
for(int thread=0;thread<NUM_INDEX_THREADS;thread++) {
|
||||
threads[thread].join();
|
||||
}
|
||||
for(int thread=0;thread<NUM_SEARCH_THREADS;thread++) {
|
||||
searchThreads[thread].join();
|
||||
}
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: done join [" + (System.currentTimeMillis()-t0) + " ms]; addCount=" + addCount + " delCount=" + delCount);
|
||||
System.out.println("TEST: search totHits=" + totHits);
|
||||
}
|
||||
|
||||
long maxGen = 0;
|
||||
for(long gen : lastGens) {
|
||||
maxGen = Math.max(maxGen, gen);
|
||||
}
|
||||
|
||||
final IndexSearcher s = nrt.get(maxGen, true);
|
||||
|
||||
boolean doFail = false;
|
||||
for(String id : delIDs) {
|
||||
final TopDocs hits = s.search(new TermQuery(new Term("docid", id)), 1);
|
||||
if (hits.totalHits != 0) {
|
||||
System.out.println("doc id=" + id + " is supposed to be deleted, but got docID=" + hits.scoreDocs[0].doc);
|
||||
doFail = true;
|
||||
}
|
||||
}
|
||||
|
||||
// Make sure each group of sub-docs are still in docID order:
|
||||
for(SubDocs subDocs : allSubDocs) {
|
||||
if (!subDocs.deleted) {
|
||||
// We sort by relevance but the scores should be identical so sort falls back to by docID:
|
||||
TopDocs hits = s.search(new TermQuery(new Term("packID", subDocs.packID)), 20);
|
||||
assertEquals(subDocs.subIDs.size(), hits.totalHits);
|
||||
int lastDocID = -1;
|
||||
int startDocID = -1;
|
||||
for(ScoreDoc scoreDoc : hits.scoreDocs) {
|
||||
final int docID = scoreDoc.doc;
|
||||
if (lastDocID != -1) {
|
||||
assertEquals(1+lastDocID, docID);
|
||||
} else {
|
||||
startDocID = docID;
|
||||
}
|
||||
lastDocID = docID;
|
||||
final Document doc = s.doc(docID);
|
||||
assertEquals(subDocs.packID, doc.get("packID"));
|
||||
}
|
||||
|
||||
lastDocID = startDocID - 1;
|
||||
for(String subID : subDocs.subIDs) {
|
||||
hits = s.search(new TermQuery(new Term("docid", subID)), 1);
|
||||
assertEquals(1, hits.totalHits);
|
||||
final int docID = hits.scoreDocs[0].doc;
|
||||
if (lastDocID != -1) {
|
||||
assertEquals(1+lastDocID, docID);
|
||||
}
|
||||
lastDocID = docID;
|
||||
}
|
||||
} else {
|
||||
for(String subID : subDocs.subIDs) {
|
||||
assertEquals(0, s.search(new TermQuery(new Term("docid", subID)), 1).totalHits);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
final int endID = Integer.parseInt(docs.nextDoc().get("docid"));
|
||||
for(int id=0;id<endID;id++) {
|
||||
String stringID = ""+id;
|
||||
if (!delIDs.contains(stringID)) {
|
||||
final TopDocs hits = s.search(new TermQuery(new Term("docid", stringID)), 1);
|
||||
if (hits.totalHits != 1) {
|
||||
System.out.println("doc id=" + stringID + " is not supposed to be deleted, but got hitCount=" + hits.totalHits);
|
||||
doFail = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
assertFalse(doFail);
|
||||
|
||||
assertEquals("index=" + writer.segString() + " addCount=" + addCount + " delCount=" + delCount, addCount.get() - delCount.get(), s.getIndexReader().numDocs());
|
||||
nrt.release(s);
|
||||
|
||||
if (es != null) {
|
||||
es.shutdown();
|
||||
es.awaitTermination(1, TimeUnit.SECONDS);
|
||||
}
|
||||
|
||||
writer.commit();
|
||||
assertEquals("index=" + writer.segString() + " addCount=" + addCount + " delCount=" + delCount, addCount.get() - delCount.get(), writer.numDocs());
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: now close NRTManager");
|
||||
}
|
||||
nrtThread.close();
|
||||
nrt.close();
|
||||
assertFalse(writer.anyNonBulkMerges);
|
||||
writer.close(false);
|
||||
_TestUtil.checkIndex(dir);
|
||||
dir.close();
|
||||
_TestUtil.rmDir(tempDir);
|
||||
docs.close();
|
||||
|
||||
if (VERBOSE) {
|
||||
System.out.println("TEST: done [" + (System.currentTimeMillis()-t0) + " ms]");
|
||||
}
|
||||
}
|
||||
|
||||
private int runQuery(IndexSearcher s, Query q) throws Exception {
|
||||
s.search(q, 10);
|
||||
return s.search(q, null, 10, new Sort(new SortField("title", SortField.STRING))).totalHits;
|
||||
}
|
||||
|
||||
private void smokeTestSearcher(IndexSearcher s) throws Exception {
|
||||
runQuery(s, new TermQuery(new Term("body", "united")));
|
||||
runQuery(s, new TermQuery(new Term("titleTokenized", "states")));
|
||||
PhraseQuery pq = new PhraseQuery();
|
||||
pq.add(new Term("body", "united"));
|
||||
pq.add(new Term("body", "states"));
|
||||
runQuery(s, pq);
|
||||
}
|
||||
}
|
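The search threads above sample roughly ten terms per pass over the terms enum: with trigger set to totTermCount/10 and a random shift, a query is issued whenever (seenTermCount + shift) % trigger == 0. A minimal standalone sketch of that sampling rule, using an assumed total of 1000 terms, could look like this:

// Illustrative only: mirrors the sampling rule used by the search threads above.
// The term count (1000) and the Random seed are assumed values for the example.
import java.util.Random;

public class TermSamplingSketch {
  public static void main(String[] args) {
    final Random random = new Random(42L);
    final int totTermCount = 1000;              // assumed total number of terms in the "body" field
    int trigger = totTermCount / 10;            // aim for ~10 sampled terms per pass
    if (trigger == 0) {
      trigger = 1;                              // guard against tiny term counts
    }
    final int shift = random.nextInt(trigger);  // randomize which terms get sampled
    int sampled = 0;
    for (int seenTermCount = 1; seenTermCount <= totTermCount; seenTermCount++) {
      if ((seenTermCount + shift) % trigger == 0) {
        sampled++;                              // here the test would run a TermQuery on the current term
      }
    }
    System.out.println("sampled " + sampled + " of " + totTermCount + " terms");
  }
}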
|
@ -1,86 +0,0 @@
|
|||
<?xml version="1.0"?>
|
||||
<!--
|
||||
Copyright 2002-2004 The Apache Software Foundation or its licensors,
|
||||
as applicable.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
-->
|
||||
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
|
||||
<!-- This is not used by Forrest but makes it possible to debug the
|
||||
stylesheet in standalone editors -->
|
||||
<xsl:output method = "text" omit-xml-declaration="yes" />
|
||||
|
||||
<!--
|
||||
If the skin doesn't override this, at least aural styles
|
||||
and extra-css are present
|
||||
-->
|
||||
<xsl:template match="skinconfig">
|
||||
|
||||
<xsl:call-template name="aural"/>
|
||||
<xsl:call-template name="a-external"/>
|
||||
<xsl:apply-templates/>
|
||||
<xsl:call-template name="add-extra-css"/>
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="colors">
|
||||
<xsl:apply-templates/>
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template name="aural">
|
||||
|
||||
/* ==================== aural ============================ */
|
||||
|
||||
@media aural {
|
||||
h1, h2, h3, h4, h5, h6 { voice-family: paul, male; stress: 20; richness: 90 }
|
||||
h1 { pitch: x-low; pitch-range: 90 }
|
||||
h2 { pitch: x-low; pitch-range: 80 }
|
||||
h3 { pitch: low; pitch-range: 70 }
|
||||
h4 { pitch: medium; pitch-range: 60 }
|
||||
h5 { pitch: medium; pitch-range: 50 }
|
||||
h6 { pitch: medium; pitch-range: 40 }
|
||||
li, dt, dd { pitch: medium; richness: 60 }
|
||||
dt { stress: 80 }
|
||||
pre, code, tt { pitch: medium; pitch-range: 0; stress: 0; richness: 80 }
|
||||
em { pitch: medium; pitch-range: 60; stress: 60; richness: 50 }
|
||||
strong { pitch: medium; pitch-range: 60; stress: 90; richness: 90 }
|
||||
dfn { pitch: high; pitch-range: 60; stress: 60 }
|
||||
s, strike { richness: 0 }
|
||||
i { pitch: medium; pitch-range: 60; stress: 60; richness: 50 }
|
||||
b { pitch: medium; pitch-range: 60; stress: 90; richness: 90 }
|
||||
u { richness: 0 }
|
||||
|
||||
:link { voice-family: harry, male }
|
||||
:visited { voice-family: betty, female }
|
||||
:active { voice-family: betty, female; pitch-range: 80; pitch: x-high }
|
||||
}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template name="a-external">
|
||||
a.external {
|
||||
padding: 0 20px 0px 0px;
|
||||
display:inline;
|
||||
background-repeat: no-repeat;
|
||||
background-position: center right;
|
||||
background-image: url(images/external-link.gif);
|
||||
}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template name="add-extra-css">
|
||||
<xsl:text>/* extra-css */</xsl:text>
|
||||
<xsl:value-of select="extra-css"/>
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="*"></xsl:template>
|
||||
<xsl:template match="text()"></xsl:template>
|
||||
|
||||
</xsl:stylesheet>
|
|
@ -1,208 +0,0 @@
|
|||
<?xml version="1.0"?>
|
||||
<!--
|
||||
Copyright 2002-2004 The Apache Software Foundation or its licensors,
|
||||
as applicable.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
-->
|
||||
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
|
||||
|
||||
<xsl:import href="../../common/css/forrest.css.xslt"/>
|
||||
|
||||
<!-- xsl:output is not used by Forrest but makes it possible to debug the
|
||||
stylesheet in standalone editors -->
|
||||
<xsl:output method = "text" omit-xml-declaration="yes" />
|
||||
|
||||
<!-- ==================== main block colors ============================ -->
|
||||
|
||||
<xsl:template match="color[@name='header']">
|
||||
#top { background-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='tab-selected']">
|
||||
#top .header .current { background-color: <xsl:value-of select="@value"/>;}
|
||||
#top .header .current a:link { color: <xsl:value-of select="@link"/>; }
|
||||
#top .header .current a:visited { color: <xsl:value-of select="@vlink"/>; }
|
||||
#top .header .current a:hover { color: <xsl:value-of select="@hlink"/>; }
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='tab-unselected']">
|
||||
#tabs li { background-color: <xsl:value-of select="@value"/> ;}
|
||||
#tabs li a:link { color: <xsl:value-of select="@link"/>; }
|
||||
#tabs li a:visited { color: <xsl:value-of select="@vlink"/>; }
|
||||
#tabs li a:hover { color: <xsl:value-of select="@hlink"/>; }
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='subtab-selected']">
|
||||
#level2tabs { background-color: <xsl:value-of select="@value"/> ;}
|
||||
#level2tabs a:link { color: <xsl:value-of select="@link"/>; }
|
||||
#level2tabs a:visited { color: <xsl:value-of select="@vlink"/>; }
|
||||
#level2tabs a:hover { color: <xsl:value-of select="@hlink"/>; }
|
||||
</xsl:template>
|
||||
|
||||
<!--xsl:template match="color[@name='subtab-unselected']">
|
||||
.level2tabstrip { background-color: <xsl:value-of select="@value"/>;}
|
||||
.datenote { background-color: <xsl:value-of select="@value"/>;}
|
||||
.level2tabstrip.unselected a:link { color: <xsl:value-of select="@link"/>; }
|
||||
.level2tabstrip.unselected a:visited { color: <xsl:value-of select="@vlink"/>; }
|
||||
.level2tabstrip.unselected a:hover { color: <xsl:value-of select="@hlink"/>; }
|
||||
</xsl:template-->
|
||||
|
||||
<xsl:template match="color[@name='heading']">
|
||||
.heading { background-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='subheading']">
|
||||
.boxed { background-color: <xsl:value-of select="@value"/>;}
|
||||
.underlined_5 {border-bottom: solid 5px <xsl:value-of select="@value"/>;}
|
||||
.underlined_10 {border-bottom: solid 10px <xsl:value-of select="@value"/>;}
|
||||
table caption {
|
||||
background-color: <xsl:value-of select="@value"/>;
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
<xsl:template match="color[@name='feedback']">
|
||||
#feedback {
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
background: <xsl:value-of select="@value"/>;
|
||||
text-align: <xsl:value-of select="@align"/>;
|
||||
}
|
||||
#feedback #feedbackto {
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='breadtrail']">
|
||||
#main .breadtrail {
|
||||
background: <xsl:value-of select="@value"/>;
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
#main .breadtrail a:link { color: <xsl:value-of select="@link"/>; }
|
||||
#main .breadtrail a:visited { color: <xsl:value-of select="@vlink"/>; }
|
||||
#main .breadtrail a:hover { color: <xsl:value-of select="@hlink"/>; }
|
||||
#top .breadtrail {
|
||||
background: <xsl:value-of select="@value"/>;
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
#top .breadtrail a:link { color: <xsl:value-of select="@link"/>; }
|
||||
#top .breadtrail a:visited { color: <xsl:value-of select="@vlink"/>; }
|
||||
#top .breadtrail a:hover { color: <xsl:value-of select="@hlink"/>; }
|
||||
</xsl:template>
|
||||
<!--Fix for other (old) profiles-->
|
||||
<xsl:template match="color[@name='navstrip']">
|
||||
#publishedStrip {
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
background: <xsl:value-of select="@value"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
<!--has to go after the nav-strip (no 'navstrip')-->
|
||||
<xsl:template match="color[@name='published']">
|
||||
#publishedStrip {
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
background: <xsl:value-of select="@value"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
<xsl:template match="color[@name='toolbox']">
|
||||
#menu .menupagetitle { background-color: <xsl:value-of select="@value"/>}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='border']">
|
||||
#menu { border-color: <xsl:value-of select="@value"/>;}
|
||||
#menu .menupagetitle { border-color: <xsl:value-of select="@value"/>;}
|
||||
#menu .menupageitemgroup { border-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='menu']">
|
||||
#menu { background-color: <xsl:value-of select="@value"/>;}
|
||||
#menu { color: <xsl:value-of select="@font"/>;}
|
||||
#menu a:link { color: <xsl:value-of select="@link"/>;}
|
||||
#menu a:visited { color: <xsl:value-of select="@vlink"/>;}
|
||||
#menu a:hover {
|
||||
background-color: <xsl:value-of select="@value"/>;
|
||||
color: <xsl:value-of select="@hlink"/>;}
|
||||
</xsl:template>
|
||||
<xsl:template match="color[@name='dialog']">
|
||||
#menu .menupagetitle { color: <xsl:value-of select="@font"/>;}
|
||||
#menu .menupageitemgroup {
|
||||
background-color: <xsl:value-of select="@value"/>;
|
||||
}
|
||||
#menu .menupageitem {
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
#menu .menupageitem a:link { color: <xsl:value-of select="@link"/>;}
|
||||
#menu .menupageitem a:visited { color: <xsl:value-of select="@vlink"/>;}
|
||||
#menu .menupageitem a:hover {
|
||||
background-color: <xsl:value-of select="@value"/>;
|
||||
color: <xsl:value-of select="@hlink"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
<xsl:template match="color[@name='menuheading']">
|
||||
#menu h1 {
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
background-color: <xsl:value-of select="@value"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
<xsl:template match="color[@name='searchbox']">
|
||||
#top .searchbox {
|
||||
background-color: <xsl:value-of select="@value"/> ;
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='body']">
|
||||
body{
|
||||
background-color: <xsl:value-of select="@value"/>;
|
||||
color: <xsl:value-of select="@font"/>;
|
||||
}
|
||||
a:link { color:<xsl:value-of select="@link"/>}
|
||||
a:visited { color:<xsl:value-of select="@vlink"/>}
|
||||
a:hover { color:<xsl:value-of select="@hlink"/>}
|
||||
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='footer']">
|
||||
#footer { background-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
|
||||
<!-- ==================== other colors ============================ -->
|
||||
<xsl:template match="color[@name='highlight']">
|
||||
.highlight { background-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='fixme']">
|
||||
.fixme { border-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='note']">
|
||||
.note { border-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='warning']">
|
||||
.warning { border-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='code']">
|
||||
.code { border-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='table']">
|
||||
.ForrestTable { background-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
<xsl:template match="color[@name='table-cell']">
|
||||
.ForrestTable td { background-color: <xsl:value-of select="@value"/>;}
|
||||
</xsl:template>
|
||||
|
||||
|
||||
</xsl:stylesheet>
|
|
@ -119,6 +119,12 @@ public class CheckIndex {
|
|||
* argument). */
|
||||
public boolean partial;
|
||||
|
||||
/** The greatest segment name. */
|
||||
public int maxSegmentName;
|
||||
|
||||
/** Whether the SegmentInfos.counter is greater than the greatest segment name. */
|
||||
public boolean validCounter;
|
||||
|
||||
/** Holds the userData of the last commit in the index */
|
||||
public Map<String, String> userData;
|
||||
|
||||
|
@ -413,9 +419,14 @@ public class CheckIndex {
|
|||
|
||||
result.newSegments = (SegmentInfos) sis.clone();
|
||||
result.newSegments.clear();
|
||||
result.maxSegmentName = -1;
|
||||
|
||||
for(int i=0;i<numSegments;i++) {
|
||||
final SegmentInfo info = sis.info(i);
|
||||
int segmentName = Integer.parseInt(info.name.substring(1), Character.MAX_RADIX);
|
||||
if (segmentName > result.maxSegmentName) {
|
||||
result.maxSegmentName = segmentName;
|
||||
}
|
||||
if (onlySegments != null && !onlySegments.contains(info.name))
|
||||
continue;
|
||||
Status.SegmentInfoStatus segInfoStat = new Status.SegmentInfoStatus();
|
||||
|
@ -555,10 +566,19 @@ public class CheckIndex {
|
|||
|
||||
if (0 == result.numBadSegments) {
|
||||
result.clean = true;
|
||||
msg("No problems were detected with this index.\n");
|
||||
} else
|
||||
msg("WARNING: " + result.numBadSegments + " broken segments (containing " + result.totLoseDocCount + " documents) detected");
|
||||
|
||||
if ( ! (result.validCounter = (result.maxSegmentName < sis.counter))) {
|
||||
result.clean = false;
|
||||
result.newSegments.counter = result.maxSegmentName + 1;
|
||||
msg("ERROR: Next segment name counter " + sis.counter + " is not greater than max segment name " + result.maxSegmentName);
|
||||
}
|
||||
|
||||
if (result.clean) {
|
||||
msg("No problems were detected with this index.\n");
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
|
|
|
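The maxSegmentName check above relies on segment names being an underscore followed by a base-36 number, so the numeric part can be recovered with Integer.parseInt(name.substring(1), Character.MAX_RADIX). A small sketch of the validCounter check, using made-up segment names and counter value, might look like this:

// Illustrative only: the segment names and the counter value are invented for the example.
public class SegmentCounterSketch {
  public static void main(String[] args) {
    String[] segmentNames = {"_0", "_a", "_c9"};        // assumed segment names from an index
    int counter = 442;                                   // assumed SegmentInfos.counter
    int maxSegmentName = -1;
    for (String name : segmentNames) {
      // "_c9" -> parse "c9" in radix 36 -> 441
      int segmentName = Integer.parseInt(name.substring(1), Character.MAX_RADIX);
      maxSegmentName = Math.max(maxSegmentName, segmentName);
    }
    boolean validCounter = maxSegmentName < counter;     // CheckIndex flags an error when this is false
    System.out.println("maxSegmentName=" + maxSegmentName + " validCounter=" + validCounter);
  }
}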
@ -28,7 +28,6 @@ import java.util.HashMap;
|
|||
import java.io.FileNotFoundException;
|
||||
import java.io.IOException;
|
||||
|
||||
|
||||
/**
|
||||
* Class for accessing a compound stream.
|
||||
* This class implements a directory, but is limited to only read operations.
|
||||
|
@ -36,289 +35,273 @@ import java.io.IOException;
|
|||
* @lucene.experimental
|
||||
*/
|
||||
public class CompoundFileReader extends Directory {
|
||||
|
||||
private int readBufferSize;
|
||||
|
||||
private static final class FileEntry {
|
||||
long offset;
|
||||
long length;
|
||||
}
|
||||
|
||||
|
||||
// Base info
|
||||
private Directory directory;
|
||||
private String fileName;
|
||||
|
||||
private IndexInput stream;
|
||||
private HashMap<String,FileEntry> entries = new HashMap<String,FileEntry>();
|
||||
|
||||
|
||||
|
||||
private int readBufferSize;
|
||||
|
||||
private static final class FileEntry {
|
||||
long offset;
|
||||
long length;
|
||||
}
|
||||
|
||||
// Base info
|
||||
private Directory directory;
|
||||
private String fileName;
|
||||
|
||||
private IndexInput stream;
|
||||
private HashMap<String,FileEntry> entries = new HashMap<String,FileEntry>();
|
||||
|
||||
public CompoundFileReader(Directory dir, String name) throws IOException {
|
||||
this(dir, name, BufferedIndexInput.BUFFER_SIZE);
|
||||
}
|
||||
|
||||
|
||||
public CompoundFileReader(Directory dir, String name, int readBufferSize) throws IOException {
|
||||
directory = dir;
|
||||
fileName = name;
|
||||
this.readBufferSize = readBufferSize;
|
||||
|
||||
boolean success = false;
|
||||
|
||||
assert !(dir instanceof CompoundFileReader) : "compound file inside of compound file: " + name;
|
||||
directory = dir;
|
||||
fileName = name;
|
||||
this.readBufferSize = readBufferSize;
|
||||
|
||||
boolean success = false;
|
||||
|
||||
try {
|
||||
stream = dir.openInput(name, readBufferSize);
|
||||
|
||||
// read the first VInt. If it is negative, it's the version number
|
||||
// otherwise it's the count (pre-3.1 indexes)
|
||||
int firstInt = stream.readVInt();
|
||||
|
||||
final int count;
|
||||
final boolean stripSegmentName;
|
||||
if (firstInt < CompoundFileWriter.FORMAT_PRE_VERSION) {
|
||||
if (firstInt < CompoundFileWriter.FORMAT_CURRENT) {
|
||||
throw new CorruptIndexException("Incompatible format version: "
|
||||
+ firstInt + " expected " + CompoundFileWriter.FORMAT_CURRENT);
|
||||
}
|
||||
// It's a post-3.1 index, read the count.
|
||||
count = stream.readVInt();
|
||||
stripSegmentName = false;
|
||||
} else {
|
||||
count = firstInt;
|
||||
stripSegmentName = true;
|
||||
}
|
||||
|
||||
// read the directory and init files
|
||||
FileEntry entry = null;
|
||||
for (int i=0; i<count; i++) {
|
||||
long offset = stream.readLong();
|
||||
String id = stream.readString();
|
||||
|
||||
if (stripSegmentName) {
|
||||
// Fix the id to not include the segment names. This is relevant for
|
||||
// pre-3.1 indexes.
|
||||
id = IndexFileNames.stripSegmentName(id);
|
||||
}
|
||||
|
||||
if (entry != null) {
|
||||
// set length of the previous entry
|
||||
entry.length = offset - entry.offset;
|
||||
}
|
||||
|
||||
entry = new FileEntry();
|
||||
entry.offset = offset;
|
||||
entries.put(id, entry);
|
||||
}
|
||||
|
||||
// set the length of the final entry
|
||||
if (entry != null) {
|
||||
entry.length = stream.length() - entry.offset;
|
||||
}
|
||||
|
||||
success = true;
|
||||
|
||||
} finally {
|
||||
if (!success && (stream != null)) {
|
||||
try {
|
||||
stream = dir.openInput(name, readBufferSize);
|
||||
|
||||
// read the first VInt. If it is negative, it's the version number
|
||||
// otherwise it's the count (pre-3.1 indexes)
|
||||
int firstInt = stream.readVInt();
|
||||
|
||||
final int count;
|
||||
final boolean stripSegmentName;
|
||||
if (firstInt < CompoundFileWriter.FORMAT_PRE_VERSION) {
|
||||
if (firstInt < CompoundFileWriter.FORMAT_CURRENT) {
|
||||
throw new CorruptIndexException("Incompatible format version: "
|
||||
+ firstInt + " expected " + CompoundFileWriter.FORMAT_CURRENT);
|
||||
}
|
||||
// It's a post-3.1 index, read the count.
|
||||
count = stream.readVInt();
|
||||
stripSegmentName = false;
|
||||
} else {
|
||||
count = firstInt;
|
||||
stripSegmentName = true;
|
||||
}
|
||||
|
||||
// read the directory and init files
|
||||
FileEntry entry = null;
|
||||
for (int i=0; i<count; i++) {
|
||||
long offset = stream.readLong();
|
||||
String id = stream.readString();
|
||||
|
||||
if (stripSegmentName) {
|
||||
// Fix the id to not include the segment names. This is relevant for
|
||||
// pre-3.1 indexes.
|
||||
id = IndexFileNames.stripSegmentName(id);
|
||||
}
|
||||
|
||||
if (entry != null) {
|
||||
// set length of the previous entry
|
||||
entry.length = offset - entry.offset;
|
||||
}
|
||||
|
||||
entry = new FileEntry();
|
||||
entry.offset = offset;
|
||||
entries.put(id, entry);
|
||||
}
|
||||
|
||||
// set the length of the final entry
|
||||
if (entry != null) {
|
||||
entry.length = stream.length() - entry.offset;
|
||||
}
|
||||
|
||||
success = true;
|
||||
|
||||
} finally {
|
||||
if (!success && (stream != null)) {
|
||||
try {
|
||||
stream.close();
|
||||
} catch (IOException e) { }
|
||||
}
|
||||
}
|
||||
stream.close();
|
||||
} catch (IOException e) { }
|
||||
}
|
||||
}
|
||||
|
||||
public Directory getDirectory() {
|
||||
return directory;
|
||||
}
|
||||
|
||||
public Directory getDirectory() {
|
||||
return directory;
|
||||
}
|
||||
|
||||
public String getName() {
|
||||
return fileName;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized void close() throws IOException {
|
||||
if (stream == null)
|
||||
throw new IOException("Already closed");
|
||||
|
||||
entries.clear();
|
||||
stream.close();
|
||||
stream = null;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized IndexInput openInput(String id) throws IOException {
|
||||
// Default to readBufferSize passed in when we were opened
|
||||
return openInput(id, readBufferSize);
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized IndexInput openInput(String id, int readBufferSize) throws IOException {
|
||||
if (stream == null)
|
||||
throw new IOException("Stream closed");
|
||||
|
||||
id = IndexFileNames.stripSegmentName(id);
|
||||
final FileEntry entry = entries.get(id);
|
||||
if (entry == null)
|
||||
throw new IOException("No sub-file with id " + id + " found (files: " + entries.keySet() + ")");
|
||||
|
||||
return new CSIndexInput(stream, entry.offset, entry.length, readBufferSize);
|
||||
}
|
||||
|
||||
/** Returns an array of strings, one for each file in the directory. */
|
||||
@Override
|
||||
public String[] listAll() {
|
||||
String[] res = entries.keySet().toArray(new String[entries.size()]);
|
||||
// Add the segment name
|
||||
String seg = fileName.substring(0, fileName.indexOf('.'));
|
||||
for (int i = 0; i < res.length; i++) {
|
||||
res[i] = seg + res[i];
|
||||
}
|
||||
|
||||
public String getName() {
|
||||
return fileName;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized void close() throws IOException {
|
||||
if (stream == null)
|
||||
throw new IOException("Already closed");
|
||||
|
||||
entries.clear();
|
||||
stream.close();
|
||||
stream = null;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized IndexInput openInput(String id)
|
||||
throws IOException
|
||||
{
|
||||
// Default to readBufferSize passed in when we were opened
|
||||
return openInput(id, readBufferSize);
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized IndexInput openInput(String id, int readBufferSize)
|
||||
throws IOException
|
||||
{
|
||||
if (stream == null)
|
||||
throw new IOException("Stream closed");
|
||||
|
||||
id = IndexFileNames.stripSegmentName(id);
|
||||
final FileEntry entry = entries.get(id);
|
||||
if (entry == null)
|
||||
throw new IOException("No sub-file with id " + id + " found (files: " + entries.keySet() + ")");
|
||||
|
||||
return new CSIndexInput(stream, entry.offset, entry.length, readBufferSize);
|
||||
}
|
||||
|
||||
/** Returns an array of strings, one for each file in the directory. */
|
||||
@Override
|
||||
public String[] listAll() {
|
||||
String[] res = entries.keySet().toArray(new String[entries.size()]);
|
||||
// Add the segment name
|
||||
String seg = fileName.substring(0, fileName.indexOf('.'));
|
||||
for (int i = 0; i < res.length; i++) {
|
||||
res[i] = seg + res[i];
|
||||
}
|
||||
return res;
|
||||
}
|
||||
|
||||
/** Returns true iff a file with the given name exists. */
|
||||
@Override
|
||||
public boolean fileExists(String name) {
|
||||
return entries.containsKey(IndexFileNames.stripSegmentName(name));
|
||||
}
|
||||
|
||||
/** Returns the time the compound file was last modified. */
|
||||
@Override
|
||||
public long fileModified(String name) throws IOException {
|
||||
return directory.fileModified(fileName);
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
@Override
|
||||
public void deleteFile(String name)
|
||||
{
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
public void renameFile(String from, String to)
|
||||
{
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
/** Returns the length of a file in the directory.
|
||||
* @throws IOException if the file does not exist */
|
||||
@Override
|
||||
public long fileLength(String name) throws IOException {
|
||||
FileEntry e = entries.get(IndexFileNames.stripSegmentName(name));
|
||||
if (e == null)
|
||||
throw new FileNotFoundException(name);
|
||||
return e.length;
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
@Override
|
||||
public IndexOutput createOutput(String name)
|
||||
{
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void sync(Collection<String> names) throws IOException {
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
@Override
|
||||
public Lock makeLock(String name)
|
||||
{
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
/** Implementation of an IndexInput that reads from a portion of the
|
||||
* compound file. The visibility is left as "package" *only* because
|
||||
* this helps with testing since JUnit test cases in a different class
|
||||
* can then access package fields of this class.
|
||||
*/
|
||||
static final class CSIndexInput extends BufferedIndexInput {
|
||||
|
||||
IndexInput base;
|
||||
long fileOffset;
|
||||
long length;
|
||||
|
||||
CSIndexInput(final IndexInput base, final long fileOffset, final long length)
|
||||
{
|
||||
this(base, fileOffset, length, BufferedIndexInput.BUFFER_SIZE);
|
||||
}
|
||||
|
||||
CSIndexInput(final IndexInput base, final long fileOffset, final long length, int readBufferSize)
|
||||
{
|
||||
super(readBufferSize);
|
||||
this.base = (IndexInput)base.clone();
|
||||
this.fileOffset = fileOffset;
|
||||
this.length = length;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Object clone() {
|
||||
CSIndexInput clone = (CSIndexInput)super.clone();
|
||||
clone.base = (IndexInput)base.clone();
|
||||
clone.fileOffset = fileOffset;
|
||||
clone.length = length;
|
||||
return clone;
|
||||
}
|
||||
|
||||
/** Expert: implements buffer refill. Reads bytes from the current
|
||||
* position in the input.
|
||||
* @param b the array to read bytes into
|
||||
* @param offset the offset in the array to start storing bytes
|
||||
* @param len the number of bytes to read
|
||||
*/
|
||||
@Override
|
||||
protected void readInternal(byte[] b, int offset, int len)
|
||||
throws IOException
|
||||
{
|
||||
long start = getFilePointer();
|
||||
if(start + len > length)
|
||||
throw new IOException("read past EOF");
|
||||
base.seek(fileOffset + start);
|
||||
base.readBytes(b, offset, len, false);
|
||||
}
|
||||
|
||||
/** Expert: implements seek. Sets current position in this file, where
|
||||
* the next {@link #readInternal(byte[],int,int)} will occur.
|
||||
* @see #readInternal(byte[],int,int)
|
||||
*/
|
||||
@Override
|
||||
protected void seekInternal(long pos) {}
|
||||
|
||||
/** Closes the stream to further operations. */
|
||||
@Override
|
||||
public void close() throws IOException {
|
||||
base.close();
|
||||
}
|
||||
|
||||
@Override
|
||||
public long length() {
|
||||
return length;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void copyBytes(IndexOutput out, long numBytes) throws IOException {
|
||||
// Copy first whatever is in the buffer
|
||||
numBytes -= flushBuffer(out, numBytes);
|
||||
|
||||
// If there are more bytes left to copy, delegate the copy task to the
|
||||
// base IndexInput, in case it can do an optimized copy.
|
||||
if (numBytes > 0) {
|
||||
long start = getFilePointer();
|
||||
if (start + numBytes > length) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
base.seek(fileOffset + start);
|
||||
base.copyBytes(out, numBytes);
|
||||
}
|
||||
}
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
/** Returns true iff a file with the given name exists. */
|
||||
@Override
|
||||
public boolean fileExists(String name) {
|
||||
return entries.containsKey(IndexFileNames.stripSegmentName(name));
|
||||
}
|
||||
|
||||
/** Returns the time the compound file was last modified. */
|
||||
@Override
|
||||
public long fileModified(String name) throws IOException {
|
||||
return directory.fileModified(fileName);
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
@Override
|
||||
public void deleteFile(String name) {
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
public void renameFile(String from, String to) {
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
/** Returns the length of a file in the directory.
|
||||
* @throws IOException if the file does not exist */
|
||||
@Override
|
||||
public long fileLength(String name) throws IOException {
|
||||
FileEntry e = entries.get(IndexFileNames.stripSegmentName(name));
|
||||
if (e == null)
|
||||
throw new FileNotFoundException(name);
|
||||
return e.length;
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
@Override
|
||||
public IndexOutput createOutput(String name) {
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void sync(Collection<String> names) throws IOException {
|
||||
}
|
||||
|
||||
/** Not implemented
|
||||
* @throws UnsupportedOperationException */
|
||||
@Override
|
||||
public Lock makeLock(String name) {
|
||||
throw new UnsupportedOperationException();
|
||||
}
|
||||
|
||||
/** Implementation of an IndexInput that reads from a portion of the
|
||||
* compound file. The visibility is left as "package" *only* because
|
||||
* this helps with testing since JUnit test cases in a different class
|
||||
* can then access package fields of this class.
|
||||
*/
|
||||
static final class CSIndexInput extends BufferedIndexInput {
|
||||
IndexInput base;
|
||||
long fileOffset;
|
||||
long length;
|
||||
|
||||
CSIndexInput(final IndexInput base, final long fileOffset, final long length) {
|
||||
this(base, fileOffset, length, BufferedIndexInput.BUFFER_SIZE);
|
||||
}
|
||||
|
||||
CSIndexInput(final IndexInput base, final long fileOffset, final long length, int readBufferSize) {
|
||||
super(readBufferSize);
|
||||
this.base = (IndexInput)base.clone();
|
||||
this.fileOffset = fileOffset;
|
||||
this.length = length;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Object clone() {
|
||||
CSIndexInput clone = (CSIndexInput)super.clone();
|
||||
clone.base = (IndexInput)base.clone();
|
||||
clone.fileOffset = fileOffset;
|
||||
clone.length = length;
|
||||
return clone;
|
||||
}
|
||||
|
||||
/** Expert: implements buffer refill. Reads bytes from the current
|
||||
* position in the input.
|
||||
* @param b the array to read bytes into
|
||||
* @param offset the offset in the array to start storing bytes
|
||||
* @param len the number of bytes to read
|
||||
*/
|
||||
@Override
|
||||
protected void readInternal(byte[] b, int offset, int len) throws IOException {
|
||||
long start = getFilePointer();
|
||||
if(start + len > length)
|
||||
throw new IOException("read past EOF");
|
||||
base.seek(fileOffset + start);
|
||||
base.readBytes(b, offset, len, false);
|
||||
}
|
||||
|
||||
/** Expert: implements seek. Sets current position in this file, where
|
||||
* the next {@link #readInternal(byte[],int,int)} will occur.
|
||||
* @see #readInternal(byte[],int,int)
|
||||
*/
|
||||
@Override
|
||||
protected void seekInternal(long pos) {}
|
||||
|
||||
/** Closes the stream to further operations. */
|
||||
@Override
|
||||
public void close() throws IOException {
|
||||
base.close();
|
||||
}
|
||||
|
||||
@Override
|
||||
public long length() {
|
||||
return length;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void copyBytes(IndexOutput out, long numBytes) throws IOException {
|
||||
// Copy first whatever is in the buffer
|
||||
numBytes -= flushBuffer(out, numBytes);
|
||||
|
||||
// If there are more bytes left to copy, delegate the copy task to the
|
||||
// base IndexInput, in case it can do an optimized copy.
|
||||
if (numBytes > 0) {
|
||||
long start = getFilePointer();
|
||||
if (start + numBytes > length) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
base.seek(fileOffset + start);
|
||||
base.copyBytes(out, numBytes);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
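As a usage sketch for the reader above (assuming the public org.apache.lucene.index.CompoundFileReader shown in this change; the index path, compound file name and sub-file id below are hypothetical), opening a sub-file of a compound segment file could look like this:

import java.io.File;
import org.apache.lucene.index.CompoundFileReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.IndexInput;

public class CompoundFileSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File("/tmp/index"));       // hypothetical index location
    CompoundFileReader cfs = new CompoundFileReader(dir, "_0.cfs"); // hypothetical compound file name
    try {
      // Sub-file ids are looked up with the segment prefix stripped internally.
      IndexInput in = cfs.openInput("_0.fdt");                      // hypothetical sub-file id
      try {
        System.out.println("sub-file length: " + in.length());
      } finally {
        in.close();
      }
    } finally {
      cfs.close();
      dir.close();
    }
  }
}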
@ -99,7 +99,7 @@ public final class DocumentsWriterFlushControl {
|
|||
final long ram = flushBytes + activeBytes;
|
||||
// take peakDelta into account - the worst case is that all flushing, pending and blocked DWPTs had maxMem and the last doc had the peakDelta
|
||||
final long expected = (long)(2 * (maxConfiguredRamBuffer * 1024 * 1024)) + ((numPending + numFlushingDWPT() + numBlockedFlushes()) * peakDelta);
|
||||
assert ram <= expected : "ram was " + ram + " expected: " + expected + " flush mem: " + flushBytes + " active: " + activeBytes ;
|
||||
assert ram <= expected : "ram was " + ram + " expected: " + expected + " flush mem: " + flushBytes + " active: " + activeBytes + " pending: " + numPending + " flushing: " + numFlushingDWPT() + " blocked: " + numBlockedFlushes() + " peakDelta: " + peakDelta ;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
|
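The assertion above bounds RAM usage by twice the configured buffer plus a peakDelta allowance per pending, flushing or blocked DWPT. A worked example with assumed numbers (16 MB buffer, 64 KB peak delta, three writers in flight) shows the bound it enforces:

// Illustrative arithmetic only; the buffer size, peak delta and counts are assumed values.
public class FlushBoundSketch {
  public static void main(String[] args) {
    double maxConfiguredRamBuffer = 16.0;                 // MB, assumed IndexWriterConfig RAM buffer
    long peakDelta = 64 * 1024;                           // bytes, assumed largest single-document delta
    int numPending = 2, numFlushing = 1, numBlocked = 0;  // assumed writers in flight
    long expected = (long) (2 * (maxConfiguredRamBuffer * 1024 * 1024))
        + ((numPending + numFlushing + numBlocked) * peakDelta);
    // 2 * 16 MB + 3 * 64 KB = 33,554,432 + 196,608 = 33,751,040 bytes
    System.out.println("expected upper bound = " + expected + " bytes");
  }
}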
|
@ -55,6 +55,7 @@ import org.apache.lucene.util.Constants;
|
|||
import org.apache.lucene.util.StringHelper;
|
||||
import org.apache.lucene.util.ThreadInterruptedException;
|
||||
import org.apache.lucene.util.MapBackedSet;
|
||||
import org.apache.lucene.util.TwoPhaseCommit;
|
||||
|
||||
/**
|
||||
An <code>IndexWriter</code> creates and maintains an index.
|
||||
|
@ -190,7 +191,7 @@ import org.apache.lucene.util.MapBackedSet;
|
|||
* referenced by the "front" of the index). For this, IndexFileDeleter
|
||||
* keeps track of the last non commit checkpoint.
|
||||
*/
|
||||
public class IndexWriter implements Closeable {
|
||||
public class IndexWriter implements Closeable, TwoPhaseCommit {
|
||||
/**
|
||||
* Name of the write lock in the index.
|
||||
*/
|
||||
|
@ -3866,6 +3867,7 @@ public class IndexWriter implements Closeable {
|
|||
}
|
||||
|
||||
synchronized boolean nrtIsCurrent(SegmentInfos infos) {
|
||||
//System.out.println("IW.nrtIsCurrent " + (infos.version == segmentInfos.version && !docWriter.anyChanges() && !bufferedDeletesStream.any()));
|
||||
return infos.version == segmentInfos.version && !docWriter.anyChanges() && !bufferedDeletesStream.any();
|
||||
}
|
||||
|
||||
|
|
|
@ -51,7 +51,7 @@ import java.util.Set;
|
|||
* ConcurrentMergeScheduler} they will be run concurrently.</p>
|
||||
*
|
||||
* <p>The default MergePolicy is {@link
|
||||
* LogByteSizeMergePolicy}.</p>
|
||||
* TieredMergePolicy}.</p>
|
||||
*
|
||||
* @lucene.experimental
|
||||
*/
|
||||
|
|
|
@ -26,6 +26,7 @@ import org.apache.lucene.store.Directory;
|
|||
import org.apache.lucene.store.IndexInput;
|
||||
import org.apache.lucene.util.AttributeSource;
|
||||
import org.apache.lucene.util.BytesRef;
|
||||
import org.apache.lucene.util.IOUtils;
|
||||
import org.apache.lucene.util.PagedBytes;
|
||||
|
||||
// Simplest storage: stores fixed length byte[] per
|
||||
|
@ -45,11 +46,10 @@ class FixedStraightBytesImpl {
|
|||
private int lastDocID = -1;
|
||||
private byte[] oneRecord;
|
||||
|
||||
protected Writer(Directory dir, String id) throws IOException {
|
||||
public Writer(Directory dir, String id) throws IOException {
|
||||
super(dir, id, CODEC_NAME, VERSION_CURRENT, false, null, null);
|
||||
}
|
||||
|
||||
// TODO - impl bulk copy here!
|
||||
|
||||
@Override
|
||||
public void add(int docID, BytesRef bytes) throws IOException {
|
||||
|
@ -66,13 +66,6 @@ class FixedStraightBytesImpl {
|
|||
datOut.writeBytes(bytes.bytes, bytes.offset, bytes.length);
|
||||
}
|
||||
|
||||
/*
|
||||
* (non-Javadoc)
|
||||
*
|
||||
* @see
|
||||
* org.apache.lucene.index.values.Writer#merge(org.apache.lucene.index.values
|
||||
* .Writer.MergeState)
|
||||
*/
|
||||
@Override
|
||||
protected void merge(MergeState state) throws IOException {
|
||||
if (state.bits == null && state.reader instanceof Reader) {
|
||||
|
@ -96,8 +89,9 @@ class FixedStraightBytesImpl {
|
|||
}
|
||||
|
||||
lastDocID += maxDocs - 1;
|
||||
} else
|
||||
} else {
|
||||
super.merge(state);
|
||||
}
|
||||
}
|
||||
|
||||
// Fills up to but not including this docID
|
||||
|
@ -126,7 +120,7 @@ class FixedStraightBytesImpl {
|
|||
return oneRecord == null ? 0 : oneRecord.length;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
public static class Reader extends BytesReaderBase {
|
||||
private final int size;
|
||||
private final int maxDoc;
|
||||
|
@ -139,19 +133,67 @@ class FixedStraightBytesImpl {
|
|||
|
||||
@Override
|
||||
public Source load() throws IOException {
|
||||
return new Source(cloneData(), size, maxDoc);
|
||||
return size == 1 ? new SingleByteSource(cloneData(), maxDoc) :
|
||||
new StraightBytesSource(cloneData(), size, maxDoc);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() throws IOException {
|
||||
datIn.close();
|
||||
}
|
||||
|
||||
// specialized version for single bytes
|
||||
private static class SingleByteSource extends Source {
|
||||
private final int maxDoc;
|
||||
private final byte[] data;
|
||||
|
||||
private static class Source extends BytesBaseSource {
|
||||
public SingleByteSource(IndexInput datIn, int maxDoc) throws IOException {
|
||||
this.maxDoc = maxDoc;
|
||||
try {
|
||||
data = new byte[maxDoc];
|
||||
datIn.readBytes(data, 0, data.length, false);
|
||||
} finally {
|
||||
IOUtils.closeSafely(false, datIn);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
public BytesRef getBytes(int docID, BytesRef bytesRef) {
|
||||
bytesRef.length = 1;
|
||||
bytesRef.bytes = data;
|
||||
bytesRef.offset = docID;
|
||||
return bytesRef;
|
||||
}
|
||||
|
||||
@Override
|
||||
public ValueType type() {
|
||||
return ValueType.BYTES_FIXED_STRAIGHT;
|
||||
}
|
||||
|
||||
@Override
|
||||
public ValuesEnum getEnum(AttributeSource attrSource) throws IOException {
|
||||
return new SourceEnum(attrSource, type(), this, maxDoc) {
|
||||
@Override
|
||||
public int advance(int target) throws IOException {
|
||||
if (target >= numDocs) {
|
||||
return pos = NO_MORE_DOCS;
|
||||
}
|
||||
bytesRef.length = 1;
|
||||
bytesRef.bytes = data;
|
||||
bytesRef.offset = target;
|
||||
return pos = target;
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
private static class StraightBytesSource extends BytesBaseSource {
|
||||
private final int size;
|
||||
private final int maxDoc;
|
||||
|
||||
public Source(IndexInput datIn, int size, int maxDoc)
|
||||
public StraightBytesSource(IndexInput datIn, int size, int maxDoc)
|
||||
throws IOException {
|
||||
super(datIn, null, new PagedBytes(PAGED_BYTES_BITS), size * maxDoc);
|
||||
this.size = size;
|
||||
|
@ -162,10 +204,10 @@ class FixedStraightBytesImpl {
|
|||
public BytesRef getBytes(int docID, BytesRef bytesRef) {
|
||||
return data.fillSlice(bytesRef, docID * size, size);
|
||||
}
|
||||
|
||||
|
||||
@Override
|
||||
public int getValueCount() {
|
||||
throw new UnsupportedOperationException();
|
||||
return maxDoc;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
|
|
@ -173,7 +173,7 @@ public abstract class FSDirectory extends Directory {
|
|||
/** Just like {@link #open(File)}, but allows you to
|
||||
* also specify a custom {@link LockFactory}. */
|
||||
public static FSDirectory open(File path, LockFactory lockFactory) throws IOException {
|
||||
if ((Constants.WINDOWS || Constants.SUN_OS)
|
||||
if ((Constants.WINDOWS || Constants.SUN_OS || Constants.LINUX)
|
||||
&& Constants.JRE_IS_64BIT && MMapDirectory.UNMAP_SUPPORTED) {
|
||||
return new MMapDirectory(path, lockFactory);
|
||||
} else if (Constants.WINDOWS) {
|
||||
|
|
|
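With this change FSDirectory.open can hand back an MMapDirectory on 64-bit Linux as well; callers that want a specific implementation can still construct one directly. A hedged usage sketch (the path is hypothetical):

import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NIOFSDirectory;

public class DirectoryChoiceSketch {
  public static void main(String[] args) throws Exception {
    File path = new File("/tmp/index");                   // hypothetical index location
    // Let Lucene pick: on a 64-bit JRE with unmap support this may now be MMapDirectory on Linux too.
    Directory picked = FSDirectory.open(path);
    System.out.println("open() chose: " + picked.getClass().getSimpleName());
    picked.close();
    // Or pin an implementation explicitly instead of relying on the platform heuristic.
    Directory nio = new NIOFSDirectory(path);
    nio.close();
  }
}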
@ -0,0 +1,58 @@
|
|||
package org.apache.lucene.store;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
import java.io.*;
|
||||
|
||||
import org.apache.lucene.store.DataInput;
|
||||
|
||||
/**
|
||||
* A {@link DataInput} wrapping a plain {@link InputStream}.
|
||||
*/
|
||||
public class InputStreamDataInput extends DataInput implements Closeable {
|
||||
private final InputStream is;
|
||||
|
||||
public InputStreamDataInput(InputStream is) {
|
||||
this.is = is;
|
||||
}
|
||||
|
||||
@Override
|
||||
public byte readByte() throws IOException {
|
||||
int v = is.read();
|
||||
if (v == -1) throw new EOFException();
|
||||
return (byte) v;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void readBytes(byte[] b, int offset, int len) throws IOException {
|
||||
while (len > 0) {
|
||||
final int cnt = is.read(b, offset, len);
|
||||
if (cnt < 0) {
|
||||
// Partially read the input, but no more data available in the stream.
|
||||
throw new EOFException();
|
||||
}
|
||||
len -= cnt;
|
||||
offset += cnt;
|
||||
}
|
||||
}
|
||||
|
||||
//@Override -- not until Java 1.6
|
||||
public void close() throws IOException {
|
||||
is.close();
|
||||
}
|
||||
}
|
|
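A small usage sketch for the new wrapper (the byte content here is made up): any InputStream can be consumed through the DataInput API, including variable-length ints written by a DataOutput.

import java.io.ByteArrayInputStream;
import org.apache.lucene.store.InputStreamDataInput;

public class InputStreamDataInputSketch {
  public static void main(String[] args) throws Exception {
    // 0x87 0x01 is the VInt encoding of 135; the trailing byte is read separately.
    byte[] bytes = new byte[] { (byte) 0x87, 0x01, 42 };
    InputStreamDataInput in = new InputStreamDataInput(new ByteArrayInputStream(bytes));
    try {
      System.out.println("vint = " + in.readVInt());   // 135
      System.out.println("byte = " + in.readByte());   // 42
    } finally {
      in.close();
    }
  }
}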
@ -79,8 +79,8 @@ import org.apache.lucene.util.Constants;
|
|||
*/
|
||||
public class MMapDirectory extends FSDirectory {
|
||||
private boolean useUnmapHack = UNMAP_SUPPORTED;
|
||||
public static final int DEFAULT_MAX_BUFF = Constants.JRE_IS_64BIT ? Integer.MAX_VALUE : (256 * 1024 * 1024);
|
||||
private int maxBBuf = DEFAULT_MAX_BUFF;
|
||||
public static final int DEFAULT_MAX_BUFF = Constants.JRE_IS_64BIT ? (1 << 30) : (1 << 28);
|
||||
private int chunkSizePower;
|
||||
|
||||
/** Create a new MMapDirectory for the named location.
|
||||
*
|
||||
|
@ -91,6 +91,7 @@ public class MMapDirectory extends FSDirectory {
|
|||
*/
|
||||
public MMapDirectory(File path, LockFactory lockFactory) throws IOException {
|
||||
super(path, lockFactory);
|
||||
setMaxChunkSize(DEFAULT_MAX_BUFF);
|
||||
}
|
||||
|
||||
/** Create a new MMapDirectory for the named location and {@link NativeFSLockFactory}.
|
||||
|
@ -100,6 +101,7 @@ public class MMapDirectory extends FSDirectory {
|
|||
*/
|
||||
public MMapDirectory(File path) throws IOException {
|
||||
super(path, null);
|
||||
setMaxChunkSize(DEFAULT_MAX_BUFF);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -180,23 +182,28 @@ public class MMapDirectory extends FSDirectory {
|
|||
* Especially on 32 bit platforms, the address space can be very fragmented,
|
||||
* so large index files cannot be mapped.
|
||||
* Using a lower chunk size makes the directory implementation a little
|
||||
* bit slower (as the correct chunk must be resolved on each seek)
|
||||
* bit slower (as the correct chunk may be resolved on lots of seeks)
|
||||
* but the chance is higher that mmap does not fail. On 64 bit
|
||||
* Java platforms, this parameter should always be {@link Integer#MAX_VALUE},
|
||||
* Java platforms, this parameter should always be {@code 1 << 30},
|
||||
* as the address space is big enough.
|
||||
* <b>Please note:</b> This method always rounds down the chunk size
|
||||
* to a power of 2.
|
||||
*/
|
||||
public void setMaxChunkSize(final int maxBBuf) {
|
||||
if (maxBBuf<=0)
|
||||
public final void setMaxChunkSize(final int maxChunkSize) {
|
||||
if (maxChunkSize <= 0)
|
||||
throw new IllegalArgumentException("Maximum chunk size for mmap must be >0");
|
||||
this.maxBBuf=maxBBuf;
|
||||
//System.out.println("Requested chunk size: "+maxChunkSize);
|
||||
this.chunkSizePower = 31 - Integer.numberOfLeadingZeros(maxChunkSize);
|
||||
assert this.chunkSizePower >= 0 && this.chunkSizePower <= 30;
|
||||
//System.out.println("Got chunk size: "+getMaxChunkSize());
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the current mmap chunk size.
|
||||
* @see #setMaxChunkSize
|
||||
*/
|
||||
public int getMaxChunkSize() {
|
||||
return maxBBuf;
|
||||
public final int getMaxChunkSize() {
|
||||
return 1 << chunkSizePower;
|
||||
}
|
||||
|
||||
/** Creates an IndexInput for the file with the given name. */
|
||||
|
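The rounding in setMaxChunkSize is just the position of the highest set bit: 31 - Integer.numberOfLeadingZeros(maxChunkSize). A quick worked example with an assumed request of 300 MB shows it rounding down to 256 MiB:

// Illustrative only: 300 MB is an assumed requested chunk size.
public class ChunkSizeSketch {
  public static void main(String[] args) {
    int requested = 300 * 1024 * 1024;                        // 314,572,800 bytes
    int chunkSizePower = 31 - Integer.numberOfLeadingZeros(requested);
    int effective = 1 << chunkSizePower;                      // rounded down to a power of 2
    System.out.println("requested=" + requested
        + " chunkSizePower=" + chunkSizePower                 // 28
        + " effective=" + effective);                         // 268,435,456 (256 MiB)
  }
}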
@ -206,152 +213,55 @@ public class MMapDirectory extends FSDirectory {
|
|||
File f = new File(getDirectory(), name);
|
||||
RandomAccessFile raf = new RandomAccessFile(f, "r");
|
||||
try {
|
||||
return (raf.length() <= maxBBuf)
|
||||
? (IndexInput) new MMapIndexInput(raf)
|
||||
: (IndexInput) new MultiMMapIndexInput(raf, maxBBuf);
|
||||
return new MMapIndexInput(raf, chunkSizePower);
|
||||
} finally {
|
||||
raf.close();
|
||||
}
|
||||
}
|
||||
|
||||
private class MMapIndexInput extends IndexInput {
|
||||
|
||||
private ByteBuffer buffer;
|
||||
private final long length;
|
||||
private boolean isClone = false;
|
||||
|
||||
private MMapIndexInput(RandomAccessFile raf) throws IOException {
|
||||
this.length = raf.length();
|
||||
this.buffer = raf.getChannel().map(MapMode.READ_ONLY, 0, length);
|
||||
}
|
||||
|
||||
@Override
|
||||
public byte readByte() throws IOException {
|
||||
try {
|
||||
return buffer.get();
|
||||
} catch (BufferUnderflowException e) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void readBytes(byte[] b, int offset, int len) throws IOException {
|
||||
try {
|
||||
buffer.get(b, offset, len);
|
||||
} catch (BufferUnderflowException e) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public short readShort() throws IOException {
|
||||
try {
|
||||
return buffer.getShort();
|
||||
} catch (BufferUnderflowException e) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public int readInt() throws IOException {
|
||||
try {
|
||||
return buffer.getInt();
|
||||
} catch (BufferUnderflowException e) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public long readLong() throws IOException {
|
||||
try {
|
||||
return buffer.getLong();
|
||||
} catch (BufferUnderflowException e) {
|
||||
throw new IOException("read past EOF");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public long getFilePointer() {
|
||||
return buffer.position();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void seek(long pos) throws IOException {
|
||||
buffer.position((int)pos);
|
||||
}
|
||||
|
||||
@Override
|
||||
public long length() {
|
||||
return length;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Object clone() {
|
||||
if (buffer == null)
|
||||
throw new AlreadyClosedException("MMapIndexInput already closed");
|
||||
MMapIndexInput clone = (MMapIndexInput)super.clone();
|
||||
clone.isClone = true;
|
||||
clone.buffer = buffer.duplicate();
|
||||
return clone;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() throws IOException {
|
||||
// unmap the buffer (if enabled) and at least unset it for GC
|
||||
try {
|
||||
if (isClone || buffer == null) return;
|
||||
cleanMapping(buffer);
|
||||
} finally {
|
||||
buffer = null;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Because Java's ByteBuffer uses an int to address the
|
||||
// values, it's necessary to access a file >
|
||||
// Integer.MAX_VALUE in size using multiple byte buffers.
|
||||
private class MultiMMapIndexInput extends IndexInput {
|
||||
private final class MMapIndexInput extends IndexInput {
|
||||
|
||||
private ByteBuffer[] buffers;
|
||||
private int[] bufSizes; // keep here, ByteBuffer.size() method is optional
|
||||
|
||||
private final long length;
|
||||
private final long length, chunkSizeMask, chunkSize;
|
||||
private final int chunkSizePower;
|
||||
|
||||
private int curBufIndex;
|
||||
private final int maxBufSize;
|
||||
|
||||
private ByteBuffer curBuf; // redundant for speed: buffers[curBufIndex]
|
||||
|
||||
private boolean isClone = false;
|
||||
|
||||
public MultiMMapIndexInput(RandomAccessFile raf, int maxBufSize)
|
||||
throws IOException {
|
||||
MMapIndexInput(RandomAccessFile raf, int chunkSizePower) throws IOException {
|
||||
this.length = raf.length();
|
||||
this.maxBufSize = maxBufSize;
|
||||
this.chunkSizePower = chunkSizePower;
|
||||
this.chunkSize = 1L << chunkSizePower;
|
||||
this.chunkSizeMask = chunkSize - 1L;
|
||||
|
||||
if (maxBufSize <= 0)
|
||||
throw new IllegalArgumentException("Non positive maxBufSize: "
|
||||
+ maxBufSize);
|
||||
if (chunkSizePower < 0 || chunkSizePower > 30)
|
||||
throw new IllegalArgumentException("Invalid chunkSizePower used for ByteBuffer size: " + chunkSizePower);
|
||||
|
||||
if ((length / maxBufSize) > Integer.MAX_VALUE)
|
||||
throw new IllegalArgumentException
|
||||
("RandomAccessFile too big for maximum buffer size: "
|
||||
+ raf.toString());
|
||||
if ((length >>> chunkSizePower) >= Integer.MAX_VALUE)
|
||||
throw new IllegalArgumentException("RandomAccessFile too big for chunk size: " + raf.toString());
|
||||
|
||||
int nrBuffers = (int) (length / maxBufSize);
|
||||
if (((long) nrBuffers * maxBufSize) <= length) nrBuffers++;
|
||||
// we always allocate one more buffer, the last one may be a 0 byte one
|
||||
final int nrBuffers = (int) (length >>> chunkSizePower) + 1;
|
||||
|
||||
//System.out.println("length="+length+", chunkSizePower=" + chunkSizePower + ", chunkSizeMask=" + chunkSizeMask + ", nrBuffers=" + nrBuffers);
|
||||
|
||||
this.buffers = new ByteBuffer[nrBuffers];
|
||||
this.bufSizes = new int[nrBuffers];
|
||||
|
||||
long bufferStart = 0;
|
||||
long bufferStart = 0L;
|
||||
FileChannel rafc = raf.getChannel();
|
||||
for (int bufNr = 0; bufNr < nrBuffers; bufNr++) {
|
||||
int bufSize = (length > (bufferStart + maxBufSize))
|
||||
? maxBufSize
|
||||
: (int) (length - bufferStart);
|
||||
this.buffers[bufNr] = rafc.map(MapMode.READ_ONLY,bufferStart,bufSize);
|
||||
this.bufSizes[bufNr] = bufSize;
|
||||
int bufSize = (int) ( (length > (bufferStart + chunkSize))
|
||||
? chunkSize
|
||||
: (length - bufferStart)
|
||||
);
|
||||
this.buffers[bufNr] = rafc.map(MapMode.READ_ONLY, bufferStart, bufSize);
|
||||
bufferStart += bufSize;
|
||||
}
|
||||
seek(0L);
|
||||
|
@ -362,11 +272,13 @@ public class MMapDirectory extends FSDirectory {
|
|||
try {
|
||||
return curBuf.get();
|
||||
} catch (BufferUnderflowException e) {
|
||||
curBufIndex++;
|
||||
if (curBufIndex >= buffers.length)
|
||||
throw new IOException("read past EOF");
|
||||
curBuf = buffers[curBufIndex];
|
||||
curBuf.position(0);
|
||||
do {
|
||||
curBufIndex++;
|
||||
if (curBufIndex >= buffers.length)
|
||||
throw new IOException("read past EOF");
|
||||
curBuf = buffers[curBufIndex];
|
||||
curBuf.position(0);
|
||||
} while (!curBuf.hasRemaining());
|
||||
return curBuf.get();
|
||||
}
|
||||
}
|
||||
|
@ -421,15 +333,28 @@ public class MMapDirectory extends FSDirectory {
|
|||
|
||||
@Override
|
||||
public long getFilePointer() {
|
||||
return ((long) curBufIndex * maxBufSize) + curBuf.position();
|
||||
return (((long) curBufIndex) << chunkSizePower) + curBuf.position();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void seek(long pos) throws IOException {
|
||||
curBufIndex = (int) (pos / maxBufSize);
|
||||
curBuf = buffers[curBufIndex];
|
||||
int bufOffset = (int) (pos - ((long) curBufIndex * maxBufSize));
|
||||
curBuf.position(bufOffset);
|
||||
// we use >> here to preserve negative, so we will catch AIOOBE:
|
||||
final int bi = (int) (pos >> chunkSizePower);
|
||||
try {
|
||||
final ByteBuffer b = buffers[bi];
|
||||
b.position((int) (pos & chunkSizeMask));
|
||||
// write values, on exception all is unchanged
|
||||
this.curBufIndex = bi;
|
||||
this.curBuf = b;
|
||||
} catch (ArrayIndexOutOfBoundsException aioobe) {
|
||||
if (pos < 0L)
|
||||
throw new IllegalArgumentException("Seeking to negative position");
|
||||
throw new IOException("seek past EOF");
|
||||
} catch (IllegalArgumentException iae) {
|
||||
if (pos < 0L)
|
||||
throw new IllegalArgumentException("Seeking to negative position");
|
||||
throw new IOException("seek past EOF");
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
|
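The seek above splits an absolute file position into a chunk index (pos >> chunkSizePower) and an offset inside that chunk (pos & chunkSizeMask). A worked example with the 64-bit default of 2^30 and an assumed position of 3,500,000,000 bytes:

// Illustrative only: the position value is made up; chunkSizePower matches the 64-bit default (1 GiB chunks).
public class SeekMathSketch {
  public static void main(String[] args) {
    final int chunkSizePower = 30;
    final long chunkSizeMask = (1L << chunkSizePower) - 1L;
    long pos = 3500000000L;                                  // assumed absolute file position
    int bufIndex = (int) (pos >> chunkSizePower);            // 3
    int offsetInBuf = (int) (pos & chunkSizeMask);           // 278,774,528
    System.out.println("buffer " + bufIndex + ", offset " + offsetInBuf);
  }
}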
@ -440,11 +365,10 @@ public class MMapDirectory extends FSDirectory {
|
|||
@Override
|
||||
public Object clone() {
|
||||
if (buffers == null)
|
||||
throw new AlreadyClosedException("MultiMMapIndexInput already closed");
|
||||
MultiMMapIndexInput clone = (MultiMMapIndexInput)super.clone();
|
||||
throw new AlreadyClosedException("MMapIndexInput already closed");
|
||||
final MMapIndexInput clone = (MMapIndexInput)super.clone();
|
||||
clone.isClone = true;
|
||||
clone.buffers = new ByteBuffer[buffers.length];
|
||||
// No need to clone bufSizes.
|
||||
// Since most clones will use only one buffer, duplicate() could also be
|
||||
// done lazily in clones, e.g. when adapting curBuf.
|
||||
for (int bufNr = 0; bufNr < buffers.length; bufNr++) {
|
||||
|
@ -453,9 +377,7 @@ public class MMapDirectory extends FSDirectory {
|
|||
try {
|
||||
clone.seek(getFilePointer());
|
||||
} catch(IOException ioe) {
|
||||
RuntimeException newException = new RuntimeException(ioe);
|
||||
newException.initCause(ioe);
|
||||
throw newException;
|
||||
throw new RuntimeException("Should never happen", ioe);
|
||||
}
|
||||
return clone;
|
||||
}
|
||||
|
|
|
@ -0,0 +1,46 @@
|
|||
package org.apache.lucene.store;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
import java.io.*;
|
||||
|
||||
/**
|
||||
* A {@link DataOutput} wrapping a plain {@link OutputStream}.
|
||||
*/
|
||||
public class OutputStreamDataOutput extends DataOutput implements Closeable {
|
||||
private final OutputStream os;
|
||||
|
||||
public OutputStreamDataOutput(OutputStream os) {
|
||||
this.os = os;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void writeByte(byte b) throws IOException {
|
||||
os.write(b);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void writeBytes(byte[] b, int offset, int length) throws IOException {
|
||||
os.write(b, offset, length);
|
||||
}
|
||||
|
||||
// @Override -- not until Java 1.6
|
||||
public void close() throws IOException {
|
||||
os.close();
|
||||
}
|
||||
}
|
|
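A brief usage sketch for the new class above; the example class name, the ByteArrayOutputStream target, and the values written are illustrative only, and any other OutputStream (file, socket, ...) would work the same way.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.lucene.store.OutputStreamDataOutput;

// Hypothetical usage (not part of Lucene): route DataOutput calls into an
// in-memory stream.
public class OutputStreamDataOutputExample {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    OutputStreamDataOutput out = new OutputStreamDataOutput(bytes);
    try {
      out.writeByte((byte) 1);
      out.writeBytes(new byte[] { 2, 3, 4 }, 0, 3);
      out.writeVInt(12345);        // variable-length helper inherited from DataOutput
    } finally {
      out.close();
    }
    System.out.println(bytes.size() + " bytes written");
  }
}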
@ -0,0 +1,77 @@
|
|||
package org.apache.lucene.util;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
/**
|
||||
* An interface for implementations that support 2-phase commit. You can use
|
||||
* {@link TwoPhaseCommitTool} to execute a 2-phase commit algorithm over several
|
||||
* {@link TwoPhaseCommit}s.
|
||||
*
|
||||
* @lucene.experimental
|
||||
*/
|
||||
public interface TwoPhaseCommit {
|
||||
|
||||
/**
|
||||
* The first stage of a 2-phase commit. Implementations should do as much work
|
||||
* as possible in this method, but avoid actually committing changes. If the
|
||||
* 2-phase commit fails, {@link #rollback()} is called to discard all changes
|
||||
* since the last successful commit.
|
||||
*/
|
||||
public void prepareCommit() throws IOException;
|
||||
|
||||
/**
|
||||
* Like {@link #prepareCommit()}, but takes additional commit data to be included
|
||||
* w/ the commit.
|
||||
* <p>
|
||||
* <b>NOTE:</b> some implementations may not support any custom data to be
|
||||
* included w/ the commit and may discard it altogether. Consult the actual
|
||||
* implementation documentation to verify whether this is supported.
|
||||
*
|
||||
* @see #prepareCommit()
|
||||
*/
|
||||
public void prepareCommit(Map<String, String> commitData) throws IOException;
|
||||
|
||||
/**
|
||||
* The second phase of a 2-phase commit. Implementations should ideally do
|
||||
* very little work in this method (following {@link #prepareCommit()}), and
|
||||
* after it returns, the caller can assume that the changes were successfully
|
||||
* committed to the underlying storage.
|
||||
*/
|
||||
public void commit() throws IOException;
|
||||
|
||||
/**
|
||||
* Like {@link #commit()}, but takes additional commit data to be included
|
||||
* w/ the commit.
|
||||
*
|
||||
* @see #commit()
|
||||
* @see #prepareCommit(Map)
|
||||
*/
|
||||
public void commit(Map<String, String> commitData) throws IOException;
|
||||
|
||||
/**
|
||||
* Discards any changes that have occurred since the last commit. In a 2-phase
|
||||
* commit algorithm, where one of the objects failed to {@link #commit()} or
|
||||
* {@link #prepareCommit()}, this method is used to roll all other objects
|
||||
* back to their previous state.
|
||||
*/
|
||||
public void rollback() throws IOException;
|
||||
|
||||
}
|
|
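To make the contract above concrete, here is a minimal hypothetical implementation: it stages its changes in a temporary file during prepareCommit() and leaves only a cheap, near-atomic rename for commit(). The class and file names are illustrative and not part of Lucene.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.util.TwoPhaseCommit;

// Hypothetical TwoPhaseCommit implementation (not part of Lucene): the
// expensive work (write + sync of a temp file) happens in prepareCommit(),
// while commit() is reduced to a rename.
public class FileTwoPhaseCommit implements TwoPhaseCommit {

  private final File target;
  private final File staged;
  private final byte[] pendingBytes;

  public FileTwoPhaseCommit(File target, byte[] pendingBytes) {
    this.target = target;
    this.staged = new File(target.getPath() + ".tmp");
    this.pendingBytes = pendingBytes;
  }

  public void prepareCommit() throws IOException {
    FileOutputStream out = new FileOutputStream(staged);
    try {
      out.write(pendingBytes);
      out.getFD().sync();          // durable before commit() is ever called
    } finally {
      out.close();
    }
  }

  public void prepareCommit(Map<String, String> commitData) throws IOException {
    prepareCommit();               // this sketch does not store commit data
  }

  public void commit() throws IOException {
    if (!staged.renameTo(target)) {
      throw new IOException("rename to " + target + " failed");
    }
  }

  public void commit(Map<String, String> commitData) throws IOException {
    commit();                      // commit data ignored in this sketch
  }

  public void rollback() throws IOException {
    staged.delete();               // discard the staged changes
  }
}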
@ -0,0 +1,162 @@
|
|||
package org.apache.lucene.util;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
/**
|
||||
* A utility for executing a 2-phase commit on several objects.
|
||||
*
|
||||
* @see TwoPhaseCommit
|
||||
* @lucene.experimental
|
||||
*/
|
||||
public final class TwoPhaseCommitTool {
|
||||
|
||||
/**
|
||||
* A wrapper of a {@link TwoPhaseCommit}, which delegates all calls to the
|
||||
* wrapped object, passing the specified commitData. This object is useful
|
||||
* with {@link TwoPhaseCommitTool#execute(TwoPhaseCommit...)} if one would
|
||||
* like to store commitData as part of the commit.
|
||||
*/
|
||||
public static final class TwoPhaseCommitWrapper implements TwoPhaseCommit {
|
||||
|
||||
private final TwoPhaseCommit tpc;
|
||||
private final Map<String, String> commitData;
|
||||
|
||||
public TwoPhaseCommitWrapper(TwoPhaseCommit tpc, Map<String, String> commitData) {
|
||||
this.tpc = tpc;
|
||||
this.commitData = commitData;
|
||||
}
|
||||
|
||||
public void prepareCommit() throws IOException {
|
||||
prepareCommit(commitData);
|
||||
}
|
||||
|
||||
public void prepareCommit(Map<String, String> commitData) throws IOException {
|
||||
tpc.prepareCommit(this.commitData);
|
||||
}
|
||||
|
||||
public void commit() throws IOException {
|
||||
commit(commitData);
|
||||
}
|
||||
|
||||
public void commit(Map<String, String> commitData) throws IOException {
|
||||
tpc.commit(this.commitData);
|
||||
}
|
||||
|
||||
public void rollback() throws IOException {
|
||||
tpc.rollback();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Thrown by {@link TwoPhaseCommitTool#execute(TwoPhaseCommit...)} when an
|
||||
* object fails to prepareCommit().
|
||||
*/
|
||||
public static class PrepareCommitFailException extends IOException {
|
||||
|
||||
public PrepareCommitFailException(Throwable cause, TwoPhaseCommit obj) {
|
||||
super("prepareCommit() failed on " + obj);
|
||||
initCause(cause);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Thrown by {@link TwoPhaseCommitTool#execute(TwoPhaseCommit...)} when an
|
||||
* object fails to commit().
|
||||
*/
|
||||
public static class CommitFailException extends IOException {
|
||||
|
||||
public CommitFailException(Throwable cause, TwoPhaseCommit obj) {
|
||||
super("commit() failed on " + obj);
|
||||
initCause(cause);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/** rollback all objects, discarding any exceptions that occur. */
|
||||
private static void rollback(TwoPhaseCommit... objects) {
|
||||
for (TwoPhaseCommit tpc : objects) {
|
||||
// ignore any exception that occurs during rollback - we want to ensure
|
||||
// all objects are rolled-back.
|
||||
if (tpc != null) {
|
||||
try {
|
||||
tpc.rollback();
|
||||
} catch (Throwable t) {}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Executes a 2-phase commit algorithm: first,
|
||||
* {@link TwoPhaseCommit#prepareCommit()} is called on all objects, and only if
|
||||
* all succeed does it proceed with {@link TwoPhaseCommit#commit()}. If any
|
||||
* object fails during either the preparation or the actual commit, it
|
||||
* terminates and calls {@link TwoPhaseCommit#rollback()} on all of them.
|
||||
* <p>
|
||||
* <b>NOTE:</b> it may happen that an object fails to commit after a few have
|
||||
* already successfully committed. This tool will still issue a rollback
|
||||
* instruction on them as well, but depending on the implementation, it may
|
||||
* not have any effect.
|
||||
* <p>
|
||||
* <b>NOTE:</b> if any of the objects are {@code null}, this method simply
|
||||
* skips over them.
|
||||
*
|
||||
* @throws PrepareCommitFailException
|
||||
* if any of the objects fail to
|
||||
* {@link TwoPhaseCommit#prepareCommit()}
|
||||
* @throws CommitFailException
|
||||
* if any of the objects fail to {@link TwoPhaseCommit#commit()}
|
||||
*/
|
||||
public static void execute(TwoPhaseCommit... objects)
|
||||
throws PrepareCommitFailException, CommitFailException {
|
||||
TwoPhaseCommit tpc = null;
|
||||
try {
|
||||
// first, all should successfully prepareCommit()
|
||||
for (int i = 0; i < objects.length; i++) {
|
||||
tpc = objects[i];
|
||||
if (tpc != null) {
|
||||
tpc.prepareCommit();
|
||||
}
|
||||
}
|
||||
} catch (Throwable t) {
|
||||
// the first object that fails causes all objects to be rolled back and
|
||||
// an exception to be thrown.
|
||||
rollback(objects);
|
||||
throw new PrepareCommitFailException(t, tpc);
|
||||
}
|
||||
|
||||
// If all successfully prepareCommit(), attempt the actual commit()
|
||||
try {
|
||||
for (int i = 0; i < objects.length; i++) {
|
||||
tpc = objects[i];
|
||||
if (tpc != null) {
|
||||
tpc.commit();
|
||||
}
|
||||
}
|
||||
} catch (Throwable t) {
|
||||
// the first object that fails causes all objects to be rolled back and
|
||||
// an exception to be thrown.
|
||||
rollback(objects);
|
||||
throw new CommitFailException(t, tpc);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
|
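A short usage sketch for the tool above: two hypothetical TwoPhaseCommit resources (for example the FileTwoPhaseCommit sketch shown earlier) are committed together, with commit data forced onto the first one via TwoPhaseCommitWrapper. The example class name and the commit data contents are illustrative only.

import java.io.IOException;
import java.util.Collections;
import java.util.Map;

import org.apache.lucene.util.TwoPhaseCommit;
import org.apache.lucene.util.TwoPhaseCommitTool;
import org.apache.lucene.util.TwoPhaseCommitTool.TwoPhaseCommitWrapper;

// Hypothetical usage (not part of Lucene): either both resources commit,
// or both are rolled back by TwoPhaseCommitTool.execute().
public class TwoPhaseCommitExample {
  public static void commitBoth(TwoPhaseCommit primary, TwoPhaseCommit secondary)
      throws IOException {
    Map<String, String> commitData = Collections.singletonMap("source", "example");
    // The wrapper forces 'commitData' onto every prepareCommit()/commit() call
    // made against 'primary' during the 2-phase commit.
    TwoPhaseCommitTool.execute(new TwoPhaseCommitWrapper(primary, commitData), secondary);
  }
}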
@ -490,7 +490,7 @@ public class FST<T> {
|
|||
if (!targetHasArcs(follow)) {
|
||||
//System.out.println(" end node");
|
||||
assert follow.isFinal();
|
||||
arc.label = -1;
|
||||
arc.label = END_LABEL;
|
||||
arc.output = follow.nextFinalOutput;
|
||||
arc.flags = BIT_LAST_ARC;
|
||||
return arc;
|
||||
|
@ -544,7 +544,7 @@ public class FST<T> {
|
|||
//System.out.println(" readFirstTarget follow.target=" + follow.target + " isFinal=" + follow.isFinal());
|
||||
if (follow.isFinal()) {
|
||||
// Insert "fake" final first arc:
|
||||
arc.label = -1;
|
||||
arc.label = END_LABEL;
|
||||
arc.output = follow.nextFinalOutput;
|
||||
if (follow.target <= 0) {
|
||||
arc.flags = BIT_LAST_ARC;
|
||||
|
@ -599,7 +599,7 @@ public class FST<T> {
|
|||
|
||||
/** In-place read; returns the arc. */
|
||||
public Arc<T> readNextArc(Arc<T> arc) throws IOException {
|
||||
if (arc.label == -1) {
|
||||
if (arc.label == END_LABEL) {
|
||||
// This was a fake inserted "final" arc
|
||||
if (arc.nextArc <= 0) {
|
||||
// This arc went to virtual final node, ie has no outgoing arcs
|
||||
|
|
|
@ -0,0 +1,3 @@
|
|||
#Forrest generates UTF-8 by default, but these httpd servers are
|
||||
#ignoring the meta http-equiv charset tags
|
||||
AddDefaultCharset off
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="contributions.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Contributions
|
||||
</h1>
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="demo.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Building and Installing the Basic Demo
|
||||
</h1>
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="demo2.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Basic Demo Sources Walk-through
|
||||
</h1>
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="fileformats.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Index File Formats
|
||||
</h1>
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="gettingstarted.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Getting Started Guide
|
||||
</h1>
|
|
@ -219,10 +219,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="index.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>Lucene Java Documentation</h1>
|
||||
|
||||
<p>
|
|
@ -216,10 +216,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="linkmap.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>Site Linkmap Table of Contents</h1>
|
||||
<p>
|
||||
This is a map of the complete site and its structure.
|
|
@ -221,10 +221,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="index.pdf"><img alt="PDF -icon" src="../skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Lucene Contrib
|
||||
</h1>
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="queryparsersyntax.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Query Parser Syntax
|
||||
</h1>
|
|
@ -218,10 +218,6 @@ document.write("Last Published: " + document.lastModified);
|
|||
|start content
|
||||
+-->
|
||||
<div id="content">
|
||||
<div title="Portable Document Format" class="pdflink">
|
||||
<a class="dida" href="scoring.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
|
||||
PDF</a>
|
||||
</div>
|
||||
<h1>
|
||||
Apache Lucene - Scoring
|
||||
</h1>
|