There were two submodules of AllocationModule. This combines them into a
single module, adds a base test case for module testing, and adds back
the ability for plugins to provide custom ShardsAllocators.
closes#12781
Instead of logging the entire `_source` in the indexing slowlog we now log, by
default, just the first 1000 characters - this is controlled by the
`index.indexing.slowlog.source` setting and can be set to `true` to log the
whole `_source`, `false` to log none of it, or a number to log at most that
many characters.
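For illustration, the possible values look like this:
```
index.indexing.slowlog.source: 1000   # the default, log at most the first 1000 characters
index.indexing.slowlog.source: true   # log the whole _source
index.indexing.slowlog.source: false  # log none of it
```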
Closes#4485
Removed the attempt to parse `field` (rather than `fields`) and the attempted support for the following syntax:
```
{
"simple_query_string": {
"body" : {
"query": "foo bar"
}
}
}
```
Both of these syntaxes were undocumented, untested, and not working.
Added a test for the case when `fields` is not specified; the default field is then queried.
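For reference, the supported form puts `query` and `fields` at the top level of the query, e.g. (illustrative):
```
{
    "simple_query_string": {
        "query": "foo bar",
        "fields": ["body"]
    }
}
```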
Closes #12794, Closes #12798
This commit includes the stacktrace in the structured exception rendering
to ensure we can find the reason / cause for certain things more quickly. This
is enabled by default and is very verbose. Users can disable it via `rest.exception.stacktrace.skip = true|false`
Closes#12239
We have a way to allow a plugin to specify additional settings. These
settings should only be applied if they do not already exist in the
node settings.
The equals logic of ShardRouting doesn't take version and unassignedInfo into account when comparing shard routings. Since the cluster state diff relies on equals to detect the changes that need to be sent to the rest of the cluster, this omission might lead to changes not being properly propagated to other nodes in the cluster.
Closes#12387
In order to test the way plugins with configuration are installed and removed
we need a plugin with configuration in the repository. The simplest way to
get one is to make an "example" plugin.
In the process of making this example it became apparent that cat actions were
difficult to create outside of the org.elasticsearch.rest.action.cat package
because key methods in AbstractCatAction were package private. This makes them
protected and uses them to create the example configured plugin.
Relates to #12717 but is only one step of many to close it.
The method in question loaded classes via the default classloader. It had all
kinds of leniency in how the classname was found, and simply cannot work with
plugins having isolated classloaders.
This change removes that method. Some of the uses of it were for custom
extension points, like custom repository or discovery types. A lot were
just there to plug in mock implementations for tests. For the settings
that were legitimate, all now support plugins adding the given setting
via onModule. For those that were specific to tests for mocks, they now
use Classes.loadClass (a helper around Class.forName). This is a
temporary measure until (in a future PR) tests can change the
implementation via package private statics.
I also removed a number of unnecessary intermediate modules, added a
"jvm-example" plugin that can be filled in in the future as a smoke test
for breaking plugins, and added some documentation for the "spawn" modules
interface.
closes #12643, closes #12656
This was a straight up bug found in #12753. If only one type existed,
the compatibility check for a new type was not strict, so changes to
an updateable setting like search_analyzer got through (but only
partially). This change fixes the check and adds tests (which were
previously a TODO).
This also fixes a bug in dynamic field creation which wouldn't copy
fielddata settings when duplicating a pre-existing field with the
same name.
closes#12753
This PR prevents setting timezone on ValueFormatter.DateTime. Instead
the timezone information needed when printing buckets key-as-string
information is provided at construction time of the ValueFormatter, making
sure we don't overwrite any constants. This, however, made it necessary to
be able to access the timezone information when resolving the format
in ValueSourceParser, so the `time_zone` parameter is now parsed alongside
the `format` parameter in ValueSourceParser rather than in DateHistogramParser.
Closes#12531
Today we accept store types like `nio_fs`, `nioFs`, `niofs`, etc.
This commit removes the leniency and only accepts plain values without underscores.
However, this commit also includes a BWC layer that upgrades existing indices to the new version.
Affected enums are:
* `nio_fs`
* `mmap_fs`
* `simple_fs`
If a URL specified with --url on the command line cannot be reached,
the plugin manager automatically tries URLs at download.elastic.co.
This can lead to downloading the wrong plugin and also potentially exposes
the name of an internal plugin to the download service.
This fix ensures that the plugin manager simply aborts if the specified
URL cannot be downloaded.
Moved the license checker config into the parent pom, and overrode
the license dir/target-to-check in distributions/pom.
Disabled the license checker explicitly for projects which run integration
tests but have no licenses dir:
* core
* distribution
* qa
* plugins/delete-by-query
* plugins/mapper-size
* plugins/site-example
Closes #12752, Closes #12754
This commit adds basic support to track the number of times scripts are
compiled and compiled scripts are evicted from the script cache. These
statistics are tracked at the node level.
Closes#12673
ClusterState has 3 different methods to access RoutingNodes:
* #routingNodes() - mutable version
* #getRoutingNodes() - delegates to #getReadOnlyRoutingNodes()
* #getReadOnlyRoutingNodes() - its docs say `NOTE, the routing nodes are mutable, use them just for read operations`
The latter also reuses the instance that it creates. This has several problems besides the obvious:
* creating RoutingNodes is costly and should be done only if really needed, i.e. use the cached version as much as possible
* the common case is read-only, but all kinds of things can still be called on it
* the mutable version is only needed in one place and should only be used in the AllocationService
* RoutingNodes can freeze its ShardRoutings but doesn't
* RoutingNodes should check whether it's read-only or not
This commit fixes all of these problems and special-cases the mutable case such that all access via ClusterState#getRoutingNodes()
is read-only and RoutingNodes enforces this.
This is one of our esoteric metadata mappers so I think we should distribute
it in a plugin rather than in elasticsearch core.
This introduces one limitation: the value of the `_size` parameter is not
retrievable for documents that are only in the transaction log.
The `multi_match` query groups terms that have the same analyzer together and
then applies the boost of the first query in each group. This is not necessary
given that boosts for each term are already applied another way.
Versions can be tricky with linux vendors and such. To help debug any possible issues, we should output a better version.
Today:
```
[elasticsearch] java.lang.RuntimeException: Java version: 1.7.0_55 suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
[elasticsearch] Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
[elasticsearch] If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
[elasticsearch] Upgrading is preferred, this workaround will result in degraded performance.
[elasticsearch] at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
[elasticsearch] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
[elasticsearch] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
With patch:
```
java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_40 [Java HotSpot(TM) 64-Bit Server VM 24.0-b56] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
This commit adds a new API to allow scripts to say whether they need scores.
In practice, only the `expression` script engine makes use of it correctly,
other engines just return `true` since they can't predict whether they'll
need scores. This should make scripted aggregations and `function_query`
faster as we'll now be able to pass needsScores=false to Query.createWeight.
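A rough sketch of the shape of such an API (illustrative names, not the exact Elasticsearch interfaces):
```java
// Illustrative sketch only: a compiled script reports whether it reads the score,
// so callers can pass needsScores=false to Query.createWeight when it does not.
interface SearchScriptSketch {
    /**
     * @return false only if the engine can prove the script never accesses _score;
     * engines that cannot predict this simply return true.
     */
    boolean needsScores();
}
```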
Adding a named query that is null can lead to a NullPointerException
when copying the named queries. This is due to an implementation detail
in QueryParseContext.copyNamedQueries. In particular, this method uses
com.google.common.collect.ImmutableMap.copyOf. A documented requirement
of ImmutableMap is that none of the entries have a null key nor null
value. Therefore, we should not add such queries to the namedQueries
map. This will not change any behavior since Map.get returns null if no
entry with the given key exists anyway.
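A minimal sketch of the guard, assuming an `addNamedQuery`-style method on the parse context (the method name is illustrative):
```java
// Illustrative sketch: skip null queries so that copying the context's namedQueries map
// via ImmutableMap.copyOf never sees a null value; Map.get returns null for missing keys anyway.
public void addNamedQuery(String name, Query query) {
    if (query != null) {
        namedQueries.put(name, query);
    }
}
```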
Closes#12683
This change improves the `function_score` query to not compute scores at all
when they are not needed, and to not compute scores on the underlying query
when the combine function is to replace the score with the scores of the
functions.
This commit makes it possible to serialize arbitrary objects by having them extend Writeable. When reading them though, we need to be able to identify which object we have to create, based on its name. This is useful for queries once we move to parsing on the coordinating node, as well as with aggregations and so on.
Introduced a new abstraction called NamedWriteable, which is supported by StreamOutput and StreamInput through writeNamedWriteable and readNamedWriteable methods. A new NamedWriteableRegistry is introduced also where named writeable prototypes need to be registered so that we are able to retrieve the proper instance of the writeable given its name and then de-serialize it calling readFrom against it.
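A simplified sketch of the pattern (illustrative types, not the exact Elasticsearch classes):
```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a writeable that also carries a name, and a registry that maps
// names back to prototypes so the right readFrom implementation can be invoked.
interface NamedWriteableSketch<T> {
    String getWriteableName();
    T readFrom(java.io.DataInput in) throws IOException;
    void writeTo(java.io.DataOutput out) throws IOException;
}

class NamedWriteableRegistrySketch {
    private final Map<String, NamedWriteableSketch<?>> prototypes = new HashMap<>();

    void register(NamedWriteableSketch<?> prototype) {
        prototypes.put(prototype.getWriteableName(), prototype);
    }

    NamedWriteableSketch<?> prototype(String name) {
        NamedWriteableSketch<?> prototype = prototypes.get(name);
        if (prototype == null) {
            throw new IllegalArgumentException("unknown named writeable [" + name + "]");
        }
        return prototype;
    }
}
```
Reading then boils down to: read the name off the stream, look up the prototype in the registry, and call readFrom on it.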
Closes#12393
In order to avoid extra reroutes, `RoutingService` should avoid
scheduling a reroute of any shards where the delay is negative. To make
sure that we don't encounter a race condition between the
GatewayAllocator thinking a shard is delayed and RoutingService thinking
it is not, the GatewayAllocator will update the RoutingService with the
last time it checked in order to use a consistent "view" of the delay.
Resolves#12456
Relates to #12515 and #12456
When postFilter generates a token that is identical to the input term
DirectCandidateGenerator should not preFilter this token. If postFilter
and preFilter are the same analyzer instance it would fail with:
"TokenStream contract violation: close() call missing"
This is a forward port of @nomoa's #12670
Today we try to detect if there is an existing index in the directory
and if not we create one. This can be tricky and error prone since we
rely on the filesystem without taking the context into account when the
engine gets created. We know in all situations whether the index should be created,
so we can just use this information and rely on the Lucene index writer to barf
if we hit a situation where we can't append to an index even though we should.
Today we fail to throw / rethrow a recovery exception if it happens during
the finalization of phase 1, if the source files are not affected. Even worse,
this can cause some data loss if the reason for this exception is a failure to
delete a corruption marker or a similar pre-existing corruption, since we continue
with the recovery and mark the target shard as started, which will in turn open
an engine with an empty index.
This makes the output of EsThreadPoolExecutor#toString less pretty but
we no longer have funky hacks that rely on the specific format of the
toString produced by ThreadPoolExecutor which isn't part of its API and
could change with any JVM version and break the output.
Improving the toString allows for nicer error reporting. Also cleaned up
the way that EsRejectedExecutionException notices that it was rejected
from a shutdown thread pool. I left javadocs about how it's not 100% correct
but good enough for most uses.
The improved toString on EsThreadPoolExecutor means every one of them needs
a name. In most cases the name to use is obvious. In tests I use the name
of the test method and in real thread pools I use the name of the thread
pool. In non-ThreadPool executors I use the thread's name.
Closes#9732
In order to support the URL notation including a user/pass combination
(like http://user:pass@host/plugin.zip) the auth info needs to be added
manually.
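A sketch of that manual step, using `java.net.URL#getUserInfo()` (java.util.Base64 assumes Java 8; the actual plugin manager code may differ):
```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch: URLConnection does not apply the user:pass part of a URL by
// itself, so it has to be turned into a Basic auth header manually.
class AuthenticatingDownloader {
    static URLConnection openWithAuth(URL url) throws IOException {
        URLConnection connection = url.openConnection();
        String userInfo = url.getUserInfo(); // the "user:pass" part, if present
        if (userInfo != null) {
            String encoded = Base64.getEncoder()
                    .encodeToString(userInfo.getBytes(StandardCharsets.UTF_8));
            connection.setRequestProperty("Authorization", "Basic " + encoded);
        }
        return connection;
    }
}
```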
Today this is "unofficial" as conf/scripts, but some people
want to share scripts across different nodes and so on. Because
they cannot configure it, they are forced to use dirty hacks
like symbolic links, which isn't going to work: we aren't going
to recursively scan conf/ and add permissions to all link targets
underneath it, that's crazy.
I really hate adding yet another configuration knob here, but
users resorting to using symlinks are going to be frustrated,
and do things in a more insecure way.
The URL to download the main elasticsearch plugins did not match
what the S3 wagon is supposed to write to.
In addition this PR adds support for snapshots to access the
snapshot S3 bucket, so we can possibly download snapshot versions
of plugins.
The format of the URLs stems from #12270. Closes #12632
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, and also prevents restoring such indices from a snapshot into an already running cluster.
Closes#11857
Currently when we delete files belonging to deleted snapshots we issue one delete command to underlying snapshot store at a time. Some repositories can benefit from bulk deletes of multiple files.
Closes#12533
This method syncs the translog unless it's already synced. If the engine
is already closed we are guaranteed to be synced already such that we can just
ignore this exception.
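The calling pattern, sketched with illustrative names (both the helper method and the exception type are assumptions, not the exact Elasticsearch code):
```java
// Illustrative sketch: if the engine is already closed the translog is guaranteed
// to be synced, so the already-closed signal can safely be swallowed here.
try {
    indexShard.syncTranslog(); // hypothetical "sync unless already synced" helper
} catch (AlreadyClosedException ex) {
    // engine already closed -> nothing left to sync
}
```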
Closes#12603
A change was recently added that uses the default timezone for the creation
date on CAT endpoints. We should be consistent and use UTC across the board.
This commit adds #getDefaultTimzone() to the forbidden APIs and fixes the REST tests.
Relates to #11688
The Streams.copyTo(String|Bytes)FromClasspath() methods resolve resources using the org.elasticsearch.io.Streams classloader. This is fine in elasticsearch core and when running tests, but if used in a plugin this can lead to FileNotFoundExceptions at runtime because plugins are loaded in a dedicated classloader.
Most of the abstract base test classes we have were previously @Ignored.
However, there were also some other tests ignored. Having two ways to
quiet tests is confusing, and clearly it has caused some tests
to get lost in the fold.
This change moves all base test classes to use the "TestCase" suffix,
which is not picked up by the test class name pattern. It also removes
@Ignore from (almost) all tests, and adds it to forbidden apis.
And since we were renaming, I shortened base test class names to use
"ES" instead of "Elasticsearch". I type this a lot of times a day,
and I have heard others express a similar desire for a shorter name.
closes#10659
Now that integ tests are moved into `mvn verify`, we don't really have
a need for @Slow, and especially not @Integration. This removes
uses of the former, and completely removes uses of the latter.
Tasks can be registered with a timeout, which runs as a task in a separate
threadpool. The idea is that the timeout runner cancels the main task when
the time is out, and the timeout runner is cancelled when the main task
starts executing. However, the following statement:
```java
timeoutFuture = timer.schedule(new Runnable() {
@Override
public void run() {
if (remove(TieBreakingPrioritizedRunnable.this)) {
runAndClean(timeoutCallback);
}
}
}, timeValue.nanos(), TimeUnit.NANOSECONDS);
```
is not atomic: the removal task is first started, and then the (volatile)
variable is assigned. As a consequence, there is a short window that allows
a timeout task to wait until the time is out even if the task is already
completed.
See http://build-us-00.elastic.co/job/es_core_17_centos/496/ for an example of
such a failure.
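One way to close that window, sketched below (illustrative only, not necessarily the actual fix): publish the future and run the timeout body under the same lock, so the timeout runnable can never act before the assignment is visible.
```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: scheduling and cancellation go through one lock, and the
// timeout runnable re-checks the "started" flag before doing any work.
class TimeoutHandle {
    private final Object lock = new Object();
    private ScheduledFuture<?> timeoutFuture;
    private boolean started;

    void scheduleTimeout(ScheduledExecutorService timer, final Runnable timeoutCallback, long nanos) {
        synchronized (lock) {
            if (started) {
                return; // main task already executing, no timeout needed
            }
            timeoutFuture = timer.schedule(new Runnable() {
                @Override
                public void run() {
                    synchronized (lock) {
                        if (started) {
                            return; // the main task won the race, do nothing
                        }
                    }
                    timeoutCallback.run();
                }
            }, nanos, TimeUnit.NANOSECONDS);
        }
    }

    void markStarted() {
        synchronized (lock) {
            started = true;
            if (timeoutFuture != null) {
                timeoutFuture.cancel(false); // best effort, the runnable re-checks started anyway
            }
        }
    }
}
```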
Previously only the first aggregation in a buckets_path was checked to make sure the aggregation existed. Now the whole path is checked to ensure an aggregation exists at each element in the buckets_path.
Closes#12360
It's not going to work: it's blocked by security policy
and will just add a confusing SecurityException to the mix, and
bogusly give an exit status of 0 when in fact something bad happened.
Finally, if ES can't start up, it is a serious problem; there is
no sense in hiding the reason why: deliver the full stack trace.
The repository verification process should create a subdirectory to make sure we check permissions of newly created directories, in case elasticsearch processes on different nodes are running with different uids and creating blobs with incompatible permissions.
Closes#11611
Previously we issued a reroute when a node went over the high watermark
in order to move shards away from the node. This change tracks nodes
that have previously been over the high or low watermarks and issues a
reroute when the node goes back underneath the watermark.
This allows shards that may be unassigned to be assigned back to a node
that was previously over the low watermark but no longer is.
Resolves#12422
As windows has different line endings and this has
already been fixed by another test, the method has been
moved into CliToolTestCase.
In addition one test has been removed, as it was redundant.
In order to ensure we have the same experience across operating systems
and shells, this commit uses the Java CLI parser instead of the shell
getopt parsing to parse arguments.
This also allows support for paths which contain spaces.
The commons-cli dependency was also upgraded to 1.3.1 and tests have been added.
Changes
* new exit code, OK_AND_EXIT, allowing a command to tell the caller to exit, as everything
went as expected (e.g. when running a version output)
BWC breaking:
* execute() returns an ExitStatus instead of an integer; otherwise there is no
possibility for a command to signal whether the JVM should be exited after a run.
This affects plugins that have command line tools
* -v used to mean version, but is a verbose flag by default in the current CLI infra;
it must be -V or --version now
* -X has been removed - the current implementation was useless anyway, as
it prefixed those properties with "es.". You should use
ES_JAVA_OPTS/JAVA_OPTS for JVM configuration
Date math index name resolution enables you to search a range of time-series indices, rather than searching all of your time-series indices and filtering the results or maintaining aliases. Limiting the number of indices that are searched reduces the load on the cluster and improves execution performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.
This adds an `ExpressionResolver` implementation that is responsible for resolving date math expressions in index names. This resolver is evaluated before wildcard expressions are evaluated.
The supported format is `<static_name{date_math_expr{date_format|timezone_id}}>`; date math expressions must be enclosed within angle brackets. The `date_format` is optional and defaults to `YYYY.MM.dd`. The `timezone_id` is optional too and defaults to `utc`.
The `{` character can be escaped by placing `\\` before it.
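A few illustrative examples of resolvable index names (resolved values assume a current date of 2015-08-10):
```
<logstash-{now/d}>                      resolves to logstash-2015.08.10
<logstash-{now-2d/d}>                   resolves to logstash-2015.08.08
<logstash-{now/d{YYYY.MM}}>             custom date_format, resolves to logstash-2015.08
<logstash-{now/d{YYYY.MM.dd|+12:00}}>   custom date_format plus a timezone_id
```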
Closes#12059
When we rewrite to a MatchNoTermsQuery we were throwing out the boost, which
could lead to funky changes when the query against _all was in a
bool query.
Previously we would write the entire ByteBuffer to the stream to serialise the HDRHistogram even if it was not all needed. Now we only write the bytes that are actually used in the ByteBuffer.
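Sketched below, assuming HdrHistogram's `getNeededByteBufferCapacity()` / `encodeIntoByteBuffer()` API and a length-prefixed stream (`histogram` and `out` are assumed to be in scope; the actual serialization code may differ):
```java
import java.nio.ByteBuffer;

// Illustrative sketch: encode the histogram into a ByteBuffer and write only the bytes
// that were actually produced, instead of the whole backing array.
ByteBuffer buffer = ByteBuffer.allocate(histogram.getNeededByteBufferCapacity());
int serializedLength = histogram.encodeIntoByteBuffer(buffer); // bytes actually written
out.writeVInt(serializedLength);                               // length prefix for the reader
out.writeBytes(buffer.array(), 0, serializedLength);           // skip the unused tail
```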
I suggest updating the BulkProcessor so that it prevents users from building a BulkProcessor with a null client. If the client is null, the class should throw an exception with a message that describes the cause of the exception. Earlier today, I passed a null client to the builder() method, later received an exception, and attempted to get the exception's message in my afterBulk() listener (e.g. e.getMessage()), but the message was null. It took me a while to pinpoint the cause of the problem.
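A sketch of the suggested guard (not necessarily how it ended up being implemented):
```java
// Illustrative sketch (inside BulkProcessor, using java.util.Objects): fail fast with a
// descriptive message instead of letting a null client surface later as an opaque
// NullPointerException in a listener callback.
public static Builder builder(Client client, Listener listener) {
    Objects.requireNonNull(client, "The client you specified while building a BulkProcessor is null");
    Objects.requireNonNull(listener, "A listener for the BulkProcessor requests and responses is required");
    return new Builder(client, listener);
}
```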
This can happen in two ways:
1. The _all field is disabled.
2. There are documents in the index, the _all field is enabled, but there are
no fields in any of the documents.
In both of these cases we now rewrite the query to a MatchNoDocsQuery which
should be safe because there isn't anything to match.
Closes#12439
When a node discovers shard content on disk which isn't used, we reach out to all other nodes that are supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies, *and* no cluster state change has happened in the meantime, do we go and delete the shard folder.
Currently, after removing a shard, IndicesStore checks with the indices service whether the index has no more active shards, and if so, it tries to delete the entire index folder (unless on the master node, where we keep the index metadata around). This is wrong, as both that check and the protections in IndicesService.deleteIndexStore only make sure that there isn't any shard *in use* from that index. However, it may be that we erroneously delete other unused shard copies on disk, without the proper safety guards described above.
Normally this is not a problem, as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., the shard is red) there are race conditions which can cause all shard copies to be deleted.
Instead, we should change the decision to clean up an index folder to be based on checking that the index directory is empty and contains no shards.
This change creates a proper `distribution` module in which we now have packaging for
all four of our current packages:
* zip
* tar.gz
* rpm
* deb
Licenses have moved into the distribution project as well, as have the config/ and bin/ directories
from the core/ project.
The RPM package is now built if rpmbuild exists.
The bats tests have been moved as well.
Also the zip distribution now executes the REST integration tests.