The open/close index API now supports multiple indices, the same way the delete index API does. The only exception is when dealing with all indices: it is required to explicitly use _all or a pattern that matches all indices, not just an empty array of indices. The ignore_missing param is supported as well.
Also added a new flag, action.disable_close_all_indices (defaults to false), to disable closing all indices.
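For illustration, a minimal sketch through the Java admin client (assuming prepareClose accepts multiple index names after this change; the index names are made up):

```java
import org.elasticsearch.client.Client;

public class CloseIndicesSketch {
    public static void closeIndices(Client client) {
        // several indices in one call
        client.admin().indices().prepareClose("logs-2013-06", "logs-2013-07")
                .execute().actionGet();
        // closing everything must be explicit: _all (or a matching pattern);
        // an empty array of indices is rejected
        client.admin().indices().prepareClose("_all")
                .execute().actionGet();
    }
}
```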
Closes #3217
The index field was serialized as a boolean instead of showing the
'analyzed', 'not_analyzed', 'no' options. Fixed by calling
indexTokenizeOptionToString() in the builder.
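For reference, a sketch of what the helper is expected to map the two flags to (the actual method lives in the mapper builder; this standalone version is illustrative):

```java
// maps the index/tokenize flags back to the string options that should
// appear in the mapping, instead of a bare boolean
static String indexTokenizeOptionToString(boolean indexed, boolean tokenized) {
    if (!indexed) {
        return "no";
    } else if (tokenized) {
        return "analyzed";
    } else {
        return "not_analyzed";
    }
}
```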
Closes #3174
This change helps people who want to roll out Oracle Java without having an OpenJDK Java installed.
* Removed any hard dependency on Java in the Debian package
* The Debian init script no longer checks for an existing JAVA_HOME
* Debian and RedHat init scripts now exit if they do not find a java binary (instead of starting elasticsearch in the background and swallowing the error, as there is no way to log it in that case)
* Changed the Debian init script to rely on the pid file instead of the process argument name
* Added a useful error message to the elasticsearch shell script in case no java binary is available
Closes #3304
Closes #3311
Now that we have the concept of a shardIndex as part of our search execution, we can simply move to using ScoreDoc and FieldDoc instead of having our own wrappers holding that information.
Also, renamed shardRequestId to shardIndex where needed, to conform with the variable name used in Lucene.
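As a sketch, Lucene's ScoreDoc already carries the shard a hit came from (Lucene 4.x constructor), so no wrapper class is needed:

```java
import org.apache.lucene.search.ScoreDoc;

public class ShardIndexSketch {
    public static void main(String[] args) {
        // doc id within the shard's reader, score, and the shardIndex that
        // previously lived in our own wrapper
        ScoreDoc hit = new ScoreDoc(42, 1.5f, 3);
        System.out.println("doc=" + hit.doc + " shardIndex=" + hit.shardIndex);
    }
}
```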
The previous loading of term vectors from the top-level reader did not use the
correct docId. The docId in Versions.DocIdAndVersion is relative to the segment
reader, not to the top-level reader. Consequently, the term vectors for the
wrong document were returned whenever the document was not in the first
segment of the shard.
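A sketch of the underlying rule (helper names are illustrative):

```java
import java.io.IOException;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.Fields;
import org.apache.lucene.index.IndexReader;

public class TermVectorsDocIdSketch {
    // either rebase the segment-relative docId before asking the top-level reader...
    static Fields fromTopLevel(IndexReader topReader, AtomicReaderContext segment,
                               int segmentDocId) throws IOException {
        return topReader.getTermVectors(segment.docBase + segmentDocId);
    }

    // ...or read from the segment reader, where the docId is already correct
    static Fields fromSegment(AtomicReaderContext segment, int segmentDocId)
            throws IOException {
        return segment.reader().getTermVectors(segmentDocId);
    }
}
```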
Move away from using maps to correlate responses from different shards, and instead use a unique incremental integer, the shardRequestId (unique for the specific search request).
This means we no longer need maps (or CHM) and can simply use atomic reference arrays, which rely on volatile semantics. It also removes the need for a cache of heavy data structures, since we don't really keep them around anymore.
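A minimal sketch of the pattern (the result type is a placeholder):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class ShardResults<T> {
    // one slot per shard, addressed by shardRequestId, instead of a
    // ConcurrentHashMap keyed by shard identity
    private final AtomicReferenceArray<T> results;

    public ShardResults(int numShards) {
        results = new AtomicReferenceArray<T>(numShards);
    }

    public void set(int shardRequestId, T result) {
        results.set(shardRequestId, result); // volatile write
    }

    public T get(int shardRequestId) {
        return results.get(shardRequestId); // volatile read
    }
}
```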
When using PlainHighlighter, TokenStreams are reset both before highlighting
and at the beginning of highlighting, causing issues with analyzers that read
input in reset(), such as PatternAnalyzer. This commit removes the call to
reset() that was performed before passing the TokenStream to the highlighter.
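A sketch of the contract being respected (Lucene 4.x Highlighter API; the field name and text are placeholders):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.search.highlight.Highlighter;

public class HighlightSketch {
    static String highlight(Highlighter highlighter, Analyzer analyzer, String text)
            throws Exception {
        TokenStream stream = analyzer.tokenStream("field", new StringReader(text));
        // do NOT reset() here: the highlighter resets the stream itself, and
        // resetting twice breaks analyzers that consume input in reset(),
        // e.g. PatternAnalyzer
        return highlighter.getBestFragment(stream, text);
    }
}
```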
Close #3200
Don't wrap the indices analyzers we have in AnalysisService with a NamedAnalyzer, since that effectively creates a new analyzer instance (with a per-field reuse strategy) and we then don't benefit as much from reusing analyzers on the indices/node level.
Now the indices-level analyzers return a NamedAnalyzer directly. Also, NamedAnalyzer uses the non-per-field (global) reuse strategy, since that is really the common case for it (no need for per-field reuse there).
Also, try to reuse numeric analyzers globally instead of creating them per numeric mapper. Although those analyzers are not used during indexing (we have a custom numeric field for that), they can still be used when searching, for example from a query string, when the mapper has no specific query implementation.
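A sketch of the global reuse idea for numeric analyzers (the cache shape here is an assumption, not the actual implementation):

```java
import java.util.concurrent.ConcurrentHashMap;

import org.apache.lucene.analysis.Analyzer;

public class NumericAnalyzerCacheSketch {
    // shared per precision step across all numeric mappers on the node,
    // instead of a fresh instance per mapper
    private static final ConcurrentHashMap<Integer, Analyzer> CACHE =
            new ConcurrentHashMap<Integer, Analyzer>();

    interface Factory {
        Analyzer create(int precisionStep);
    }

    static Analyzer analyzer(int precisionStep, Factory factory) {
        Analyzer existing = CACHE.get(precisionStep);
        if (existing != null) {
            return existing;
        }
        Analyzer created = factory.create(precisionStep);
        Analyzer raced = CACHE.putIfAbsent(precisionStep, created);
        return raced != null ? raced : created;
    }
}
```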
In Guice, we always use eagerly loaded singletons for all the modules we create; thus we can optimize the memory used by injectors by reducing the construction information they store per binding, resulting in an extensive reduction in memory usage for the many-indices/shards case on a node.
Also, because all bindings are eager singletons (and effectively read-only), we don't have to go through trying to create just-in-time bindings in the parent injector before trying to create them in the current injector, improving object creation time and the time it takes to create an index or a shard on a node.
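The pattern this optimizes for, as a sketch (module and service names are made up):

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class EagerSingletonSketch {
    static class ShardService {
    }

    static class ShardModule extends AbstractModule {
        @Override
        protected void configure() {
            // every binding is an eager singleton: constructed once when the
            // injector is created and effectively read-only afterwards, so no
            // just-in-time binding lookup is ever needed
            bind(ShardService.class).asEagerSingleton();
        }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new ShardModule());
        System.out.println(injector.getInstance(ShardService.class));
    }
}
```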
The currently used maven shade plugin still keeps references to the
original classes in their constant pools. This is never a problem at
runtime, but dependency tools that use the constant pool to determine
dependencies (OSGi, for example) get confused. This patch simply bumps
the plugin version, which implicitly fixes
http://jira.codehaus.org/browse/MSHADE-105
Closes #3254
Closes #3255
This has two advantages in the case the term filter is *not* cached:
* We iterate only once over the matching docs. Before this fix we iterated once to create the FixedBitSet (FBS) and a second time to consume the matching docs from the FBS.
* The DocIdSetIterator#cost method of the DocIdSetIterator obtained from the DocsEnum is accurate, because it is based on the document frequency, whereas the cost method of the FBS iterator is based on the total number of bits (which is based on maxDoc). This makes the filter execute faster when it is included in a filtered query, because the filtered query can base its decision on which execution strategy to pick on an accurate heuristic.
This change doesn't have any negative implications in the case the filter is cached (which is the default). The FBS is now created lazily in the DocIdSets#toCacheable method, which is always invoked when the term filter needs to be cached.
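A sketch of the uncached path on a single segment (Lucene 4.x postings API; method and parameter names are assumptions):

```java
import java.io.IOException;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.BytesRef;

public class TermFilterSketch {
    // returns the postings directly: matching docs are iterated only once,
    // and cost() reflects the term's docFreq instead of maxDoc as a
    // FixedBitSet iterator would report
    static DocIdSetIterator termDocs(AtomicReader reader, String field,
                                     BytesRef term, Bits acceptDocs) throws IOException {
        Terms terms = reader.terms(field);
        if (terms == null) {
            return null; // field not present in this segment
        }
        TermsEnum termsEnum = terms.iterator(null);
        if (!termsEnum.seekExact(term)) {
            return null; // term not present in this segment
        }
        return termsEnum.docs(acceptDocs, null);
    }
}
```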