Previously, multiple clusters in the same JVM reused the same port ranges, leading to potentially big gaps in port selection, which in turn caused unicast-based discovery to fail by missing other nodes within the default 5-port range.
Also, the previous logic had http use a range that was assigned to other JVMs.
There are some leftovers around usage of NamedWriteableRegistry in our branch compared to master, just because the registry was added first here, then backported to master, and something didn't go completely right during the final merge.
We default the value to 20x the ping timeout; however, we only use the legacy ping timeout setting's value for the calculation.
Closes #13162
This commit removes and now forbids all uses of
com.google.common.collect.Lists across the codebase. This is the first
of many steps in the eventual removal of Guava as a dependency.
lessThanOrEqualTo is more appropriate when comparing _ttl than lessThan
because in rare cases, when tests run very fast, the ttl you fetch will
still equal the one you sent.
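A minimal sketch of the relaxed assertion, using the standard Hamcrest matchers (the names of the surrounding test values are assumed):
```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.lessThanOrEqualTo;

public class TtlAssertionSketch {
    // lessThan(providedTtl) fails when the test runs fast enough that the
    // fetched _ttl still equals the ttl that was sent with the document
    static void assertTtlHasNotGrown(long fetchedTtl, long providedTtl) {
        assertThat(fetchedTtl, lessThanOrEqualTo(providedTtl));
    }
}
```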
Some listeners may need to do work before a shard's path is
accessed (such as creating the directory in a plugin), so the listener
should be called before anything happens (as its name implies).
detect_noop is pretty cheap and noop updates are comparatively expensive, so this
feels like a sensible default.
Also had to do some testing and documentation around how _ttl works with
detect_noop.
Closes #11282
As discussed in #11744 this is the last step to unify parsing of boost and _name. Those fields are supported only in the long version of queries, while we sometimes parse them when we shouldn't, inconsistently.
Closes #11744
Closes #12966
Since we no longer stripe shards across data paths we need a way
to access the information on which path a shard is allocated, to
eventually make better allocation decisions based on disk usage etc.
This commit exposes the shard paths as part of the shard stats.
Relates to #13106
Also clean up static method in MoreLikeThisFetchService, fix randomly failing TermsQueryBuilderTest#testNullValues and rename mappedFieldNamesSmall to MAPPED_LEAF_FIELD_NAMES.
Until a couple of hours ago we expected the position_offset_gap to default
to 0 in 2.0 and 100 in 2.1. We decided it was worth backporting that new
default to 2.0. So now that it's backported we need to teach 2.1 that 2.0
also defaults to 100.
Closes #7268
This is much more fiddly than you'd expect it to be because of the way
position_offset_gap is applied in StringFieldMapper. Instead of setting
the default to 100 it's simpler to make sure that all the analyzers default
to 100 and that StringFieldMapper doesn't override the default unless the
user specifies something different. Unless the index was created before
2.1, in which case the old default of 0 has to apply.
Also, position_offset_gaps less than 0 aren't allowed at all.
New tests test that:
1. the new default doesn't match phrases across values with reasonably low
slop (5)
2. the new default does match phrases across values with reasonably high
slop (50)
3. you can override the value and phrases work as you'd expect
4. if you leave the value undefined in the mapping and define it on a
custom analyzer the value from the custom analyzer shines through
Closes #7268
This commit changes the startup behavior of Elasticsearch to throw an
exception if duplicate settings keys are detected in the Elasticsearch
configuration file.
Closes #13079
Currently when an entire type is disabled, our document parser will end
parsing on the first field of the document. This blows up the recently
added check that parsing did not silently skip any tokens (i.e. whether
there was garbage left over).
This change fixes the parser to correctly skip the entire document when
the type is disabled.
closes #13017
This commit consolidates the logic in
RoutingTable#allActiveShardsGrouped and
RoutingTable#allAssignedShardsGrouped into a single method that merely
applies a predicate to each ShardRouting.
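A rough sketch of the consolidated shape, with hypothetical method and predicate names (`shard.active()` and `shard.assignedToNode()` are the existing accessors):
```java
import java.util.ArrayList;
import java.util.List;

public class GroupedShardsSketch {
    // hypothetical predicate interface (Java 7 era, so no java.util.function)
    interface ShardRoutingPredicate {
        boolean apply(ShardRouting shard);
    }

    static List<ShardRouting> allShardsMatching(Iterable<ShardRouting> shards,
                                                ShardRoutingPredicate predicate) {
        List<ShardRouting> result = new ArrayList<>();
        for (ShardRouting shard : shards) {
            if (predicate.apply(shard)) {  // e.g. shard.active() or shard.assignedToNode()
                result.add(shard);
            }
        }
        return result;
    }
}
```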
Closes #13081
The `rewrite` option has been removed from the parser with
commit da5fa6c4 and won't parse anymore, however we still
have a setter for it in the builder that gets rendered out
when used and potentially leads to parsing errors. This PR
removes the setter for the unsupported `rewrite` option.
This change makes modules added by plugins come before others, as it was
before #12783. The order of configuration, and thereby binding, happens
in the order modules are received, and without this change, some plugins
can get *insane* guice errors (500mb stack trace).
The setting `plugin.types` is currently used to load plugins from the
classpath. This is necessary in tests, as well as the transport client.
This change removes the setting, and replaces it with the ability to
directly add plugins when building a transport client, as well as
infrastructure in the integration tests to specify which plugin classes
should be loaded on each node.
* makes most classes final and package private
* removes duplicate and confusing multiple entry points
* adds javadocs to some classes like JarHell, Security
* adds a public class BootStrapInfo that exposes any stats
needed by outside code.
When we create a plugin's classloader, we should allow it to register things in
the lucene SPI (registry of Tokenizers, TokenFilters, CharFilters, Codec,
PostingsFormat, DocValuesFormat).
Plugins should be able to do this so they can extend Lucene.
We do check if a directory is present and then open a dir stream on
it. Yet the file can be concurrently deleted, which is OK, but we fail
with a hard exception. This change tries to open the dir directly (listing via stream)
and catches NoSuchFileException | FileNotFoundException.
This commit enforces that at most a single settings file is found. If
multiple settings files are found, a SettingsException will be thrown.
Closes #13042
This commit fixes an issue that was causing Elasticsearch to silently
ignore settings files that contain garbage. The underlying issue was
swallowing a SettingsException under the assumption that the only
reason an exception could be thrown was the settings file
not existing (in this case the IOException would be the cause of the
swallowed SettingsException). This assumption is mistaken, as an
IOException could also be thrown due to an access error or a read
error. Additionally, a SettingsException could be thrown exactly
because garbage was found in the settings file. We should instead
explicitly check that the settings file exists, and bomb on an
exception thrown for any reason.
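A minimal sketch of the stricter loading, assuming the usual `Environment`/`Settings.Builder` types; the point is that nothing gets swallowed anymore:
```java
import java.nio.file.Files;
import java.nio.file.Path;

// sketch: check existence explicitly instead of catching and ignoring failures
void loadConfig(Path configDir, Settings.Builder settingsBuilder) {
    Path config = configDir.resolve("elasticsearch.yml");
    if (Files.exists(config)) {
        // a SettingsException or IOException thrown while loading now aborts
        // startup instead of being silently ignored
        settingsBuilder.loadFromPath(config);
    }
}
```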
Closes #13028
We currently test that our query parsers can parse the format that our query builder outputs in XContent format, but in some cases the parser supports more than that, hence we need more specific tests otherwise we have no coverage for alternate formats.
Today we are very verbose when rendering exceptions on the rest layer.
Yet, this isn't necessarily very easy to read and is way too much information most
of the time. This change suppresses the stacktrace rendering by default but instead
adds a `rest.suppressed` logger that logs the suppressed stacktrace, or rather the entire
exception, on the node that renders the exception
The log message looks like this:
```
[2015-08-19 16:21:58,427][INFO ][rest.suppressed ] /test/_search/ Params: {index=test}
[test] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:551)
```
Currently we implicitly enforce a version format on the java.version
property for plugins via JarHell.checkJavaVersion. We should explicitly
enforce this version format and specify it in the documentation.
This commit adds an explicit enforcement of the version format on the
java.version property for plugins and updates the documentation for
plugin-descriptor.properties accordingly.
Closes #13009
This commit improves Java version comparison in JarHell.
The first improvement is the addition of a method to check the version
format of a target version string. This method will reject target
version strings that are not a sequence of nonnegative decimal integers
separated by “.”s, possibly with leading zeros (0*[0-9]+(\.[0-9]+)?).
This version format is the version format used for Java specification
versioning (cf. Java Product Versioning, 1.5.1 Specification Versioning
and the Javadocs for java.lang.Package.)
The second improvement is a clean method for checking that a target
version is compatible with the runtime version of the JVM. This is done
using the system property java.specification.version and comparing the
versions lexicographically. This method of comparison has been tested on
JDK 9 builds that include JEP-220 (the Project Jigsaw JEP concerning
modular runtime images) and JEP-223 (the version string JEP). The class
that encapsulates the methods for parsing and comparing versions is
written in a way that can easily be converted to use the Version class
from JEP-223 if that class is ultimately incorporated into mainline JDK
9.
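A compact sketch of the two checks (format validation and component-wise comparison); the pattern and method names here are illustrative, not the actual JarHell API:
```java
import java.util.regex.Pattern;

public class JavaVersionSketch {
    // dot-separated nonnegative decimal integers, possibly with leading zeros
    private static final Pattern VERSION = Pattern.compile("0*[0-9]+(\\.[0-9]+)*");

    static boolean isValidVersionFormat(String version) {
        return VERSION.matcher(version).matches();
    }

    // "lexicographic" comparison of the component sequences, each component numeric
    static int compareVersions(String left, String right) {
        String[] a = left.split("\\.");
        String[] b = right.split("\\.");
        for (int i = 0; i < Math.max(a.length, b.length); i++) {
            int x = i < a.length ? Integer.parseInt(a[i]) : 0;
            int y = i < b.length ? Integer.parseInt(b[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }
}
```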
Closes #12441
This commit changes the behavior when validating index templates to
accumulate all validation errors before reporting failure to the user.
This addresses a usability issue when creating index templates.
Closes #12900
Aggregation.subPath() always threw an ArrayStoreException because we were trying to pass a List into System.arraycopy(). This change fixes that bug and adds a test to prevent regression.
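For reference, `System.arraycopy` throws `ArrayStoreException` whenever its `src` argument is not an array at all, which is exactly what passing a `List` triggers:
```java
import java.util.Arrays;
import java.util.List;

public class ArrayCopySketch {
    public static void main(String[] args) {
        List<String> path = Arrays.asList("agg", "sub");
        String[] dest = new String[2];
        // throws ArrayStoreException: src is a List, not an array;
        // the fix is to copy from a real array, e.g. path.toArray(new String[0])
        System.arraycopy(path, 0, dest, 0, 2);
    }
}
```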
Today we only guess how big the shard will be that we are allocating on a node.
Yet, we have this information on the master but it's not available on the data nodes
when we pick a data path for the shard. We use a rather simple heuristic based on
existing shard sizes on this node which might be completely bogus. This change adds
the expected shard size to the ShardRouting for RELOCATING and INITIALIZING shards
to be used on the actual node to find the best data path for the shard.
Closes #11271
Try to strike a balance between usability and debuggability.
2MB stacktrace is not gonna work... we print the message and find the
"likely root cause" and identify it as such, limit it to 30 stack
frames, and filter out guice. When frames are truncated we identify
that too, and always inform the user full exception is available from
logs.
Closes #13029
Multicast has known issues (see #12999 and #12993). This change moves
multicast into a plugin, and deprecates it in the docs. It also allows
for plugging in multiple zen ping implementations.
closes #13019
If the machine doesn't support IPv6, or if the user disabled it
with -Djava.net.preferIPv4Stack, or if the user disabled it with
ES_USE_IPV4, we will still try to bind a socket to ::1 out of box,
and we will see lots of exceptions logged to the console.
It is harmless noise but not a great experience.
this commit removes all support for reverse host name resolving from
InetSocketTransportAddress. This class now only returns IP addresses.
Closes #13014
Fix unicast discovery to work when a host has multiple addresses.
Ban dangerous methods in java.net with forbidden APIs.
Fix ipv6 bugs and formatting of network addresses everywhere.
Closes #12999
Closes #12993
Squashed commit of the following:
commit 6c1aa001d091c5cf25212a53dc701fb704337f1e
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 14:25:43 2015 -0400
Fix these to be correct with addresses just in case
commit 648215627e84abf58a71400e7dc9ae775efb71d6
Merge: d00561b 41d8fbe
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 13:23:09 2015 -0400
Merge branch 'master' into unicast_all_the_way_down
commit d00561b76fd1aa5850699f7901f3dae3d4d402b7
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:38:50 2015 +0200
limit local ports to 5 in UnicastZenPing
commit e2e15c594006746cbe24432694294a71cc99deb8
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 10:32:47 2015 -0400
fix port limiting
commit 10153cb7adadda81a1f482445e703836b65cf5e2
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 10:18:37 2015 -0400
don't serialize scopeids: that's broken
commit 2aa63d43db2baec68a2e9bc227cfeb85dfeb4f83
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:06:51 2015 +0200
restore @Network
commit c840f1d1ef438826ae1ecfd5e45942a0e30dc9c0
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 16:02:30 2015 +0200
Use NetworkAddress.formatAddress where applicable in plugins
commit 374ce878852b35d626b7a29c8c4773545b0e9ddd
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 15:34:06 2015 +0200
Use NetworkAddress.formatAddress where applicable
commit e7a606d63f1bc43c1b62b6e17adf707c76d43a15
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 10:17:57 2015 +0200
Add @Multicast annotation to disable multicast tests by default.
We only run multicast tests now when we explicitly state it. A working
multicast env is required which is not always the case.
commit 2d7d2d0347179696ab41f71f048b13305014c85b
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 09:51:28 2015 +0200
Remove extra check for local mode in InternalTestCluster
commit dda59ac39aa136d4687b9274c2692cd77f8b8f66
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 09:37:03 2015 +0200
Handle node mode across entire test cluster
We used static methods reading sys properties to define the node mode
per cluster. this had lots of problems when tests couldn't cope with
mixed or only local mode. Now we are passing it down to the cluster from the test
which allows to @SuppressNetworkMode / @SupressLocalMode on the test to force
consistent node configurations.
commit 058197b7a408318995c88ce7f6762e32348de0de
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 03:19:14 2015 -0400
really ban InetSocketAddress's trappy method and break build and go to sleep, sorry
commit ac8779185aee1e17e6f5a81766290fdfc9c603ba
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 03:16:52 2015 -0400
Ban methods that might surprisingly cause DNS lookups
commit e64fe3dff2b11503e5f2831eb9863d64f56c5538
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:59:05 2015 -0400
Add unit test
commit f15434f20fb1a3691b1cc16028597d8fae937e05
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:39:02 2015 -0400
fix ipv6 formatting bugs
commit 05c2c74098052c75fbb79ea1818a295ef2e03e30
Author: Robert Muir <rmuir@apache.org>
Date: Thu Aug 20 02:12:05 2015 -0400
format addresses correctly so I can actually read what comes out of our logs and stats apis
commit 4f9389dcf1e8925f23153c5eb271b4ce2294dbaf
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 21:26:52 2015 -0400
ban dangerous methods in java.net
commit 6aacd4d9925f324903d1d099a6cf5f862aeaf677
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 20:59:24 2015 -0400
ban lenient method
commit f466a842c60163d1f4554bdce8a4163edb534c2c
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:29:00 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 0de007a33b33fb68cf85cd86db4ca4f8ce10bbc9
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:10:07 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 539f6ca6e5137e0d496239adc8684688dedcc824
Author: Simon Willnauer <simonw@apache.org>
Date: Thu Aug 20 00:02:01 2015 +0200
fix tests to not mix local transport and zen unicast disco
commit 004c2881b25467f332acc8c9f9e92b1f0f9d314e
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:51:45 2015 -0400
Fix multinode
commit 54113af325ce31571811c49fdaae89d5687be4ba
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:36:45 2015 -0400
fix integration tests
commit 0156a77a56319d6b9737ec6a531992052e50bd59
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 23:32:18 2015 +0200
enable multicast in MulticastZenPingIT.java
commit 1791caa35da853ce0122485fa3fd4674c671ec6e
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 17:23:16 2015 -0400
Fix constant
commit 22820b53e0b2dc9fd47145c2bc29ce912a8fd484
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:59:09 2015 +0200
give it some extra ids for local transport crazyness
commit b2138fafa94a8a085813fd48356df63e57ade5b3
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:51:42 2015 +0200
pass on local addresses from configured transport rather than hard code IP addresses
commit 1bf5de1f457b081e0ce262b57d2b55d39c434156
Author: Simon Willnauer <simonw@apache.org>
Date: Wed Aug 19 22:04:31 2015 +0200
fix PluggableTransportModuleIT.java to use local disco and detach port limit for node local disco
commit b6706eddfa04c43947c16551359ae98a463d34aa
Author: Robert Muir <rmuir@apache.org>
Date: Wed Aug 19 14:16:03 2015 -0400
Default to unicast discovery, with default host list of 127.0.0.1, [::1]
In some cases this may contain duplicates, although it's a misconfiguration,
let's not bind to multiple ports. It's no problem for us to dedup, this code
doesn't need to be super-duper fast since it's used only for logic around bind/publish
The cardinality aggregation is a metric aggregation and therefore cannot accept sub-aggregations. It was previously possible to create a rest request with a cardinality aggregation that had sub-aggregations. Now such a request will throw an error in the response.
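The guard this implies looks roughly like the following sketch (the exception type and message shape are assumptions):
```java
// sketch: a metrics aggregation parser rejecting sub-aggregations up front
void checkNoSubAggregations(String name, int subAggregationCount) {
    if (subAggregationCount > 0) {
        throw new AggregationInitializationException("Aggregator [" + name
                + "] of type [cardinality] cannot accept sub-aggregations");
    }
}
```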
Close #12988
We have some settings that prevent host name resolution, which should
be respected by InetSocketTransportAddress#getHost() to only resolve if
allowed or desired.
At the moment, when installing from a URL, a user provides the plugin name on
the command line like:
* bin/plugin install [plugin-name] --url [url]
This can lead to problems when picking an already existing name from another
plugin, and can potentially overwrite plugins already installed with that name.
Thus, this PR introduces a mandatory `name` property to the plugin descriptor
file which replaces the name formerly provided by the user.
With the addition of the `name` property to the plugin descriptor file, the user
does not need to specify the plugin name any longer when installing from a file
or url. Because of this, all arguments to `plugin install` command are now
either treated as a symbolic name, a URL or a file without the need to specify
this with an explicit option.
The new syntax for `plugin install` is now:
bin/plugin install [name or url]
* downloads official plugin
bin/plugin install analysis-kuromoji
* downloads github plugin
bin/plugin install lmenezes/elasticsearch-kopf
* install from URL or file
bin/plugin install http://link.to/foo.zip
bin/plugin install file:/path/to/foo.zip
If the argument does not parse to a valid URL, it is assumed to be a name and the
download location is resolved like before. Regardless of the source location of
the plugin, it is extracted to a temporary directory and the `name` property from
the descriptor file is used to determine the final install location.
Relates to #12715
it's an important exception to serialize and we see it often in tests
etc. but then it's wrapped in NotSerializableExceptionWrapper which is
odd. This commit adds native support for this exception.
The cluster configuration allows setting up a cluster for use with unicast discovery. This means that nodes have to have pre-calculated known addresses which can be used to populate the unicast hosts setting. Despite repeated attempts to select unused ports, we still see test failures where the node can not bind to its assigned port due to it already being in use (mostly on CentOS). This commit changes it to allow each node to have a pre-set mutually exclusive range of ports and adds all those ports to the unicast hosts list. That's OK because we know the node will only bind to one of those.
We are in the process of getting rid of guava, and this removes a major
use. The replacement is mostly Collections.emptyList(), Arrays.asList
and Collections.unmodifiableList. While it is questionable whether we
need the last one (as these are usually placed in final members), we can
continue to refactor later by removing unnecessary wrappings.
Dynamic date fields mapped from dates of the form "yyyy-MM-dd"
automatically receive the millisecond parser epoch_millis as an
alternative parsing format. However, dynamic date fields mapped from
dates of the form "yyyy/MM/dd" do not. This is a bug since the migration
documentation currently specifies that a dynamically added date field,
by default, includes the epoch_millis format. This commit adds
epoch_millis as an alternative parser to dynamic date fields mapped from
dates of the form "yyyy/MM/dd".
Closes #12873
commons-lang really is only used by some core classes to join strings or modify arrays.
It's not worth carrying the dependency. This commit removes the dependency on commons-lang
entirely.
This causes a FileSystemException when trying to retrieve FileStore for a path,
and falsely returns false for Files.isWritable
Squashed commit of the following:
commit d2cc0d966f3bc94aa836b316a42b3c5724bc01ef
Author: Robert Muir <rmuir@apache.org>
Date: Tue Aug 18 15:49:48 2015 -0400
fixes for the non-bogus comments
commit 6e0a272f5f8ef7358654ded8ff4ffc31831fa5c7
Author: Robert Muir <rmuir@apache.org>
Date: Tue Aug 18 15:30:43 2015 -0400
Fix isWritable too
commit 2a8764ca118fc4c950bfc60d0b97de873e0e82ad
Author: Robert Muir <rmuir@apache.org>
Date: Tue Aug 18 14:49:50 2015 -0400
try to workaround filestore bug
This brings the Lucene query assertions back to the querybuilder test that were
removed in the ancient past when we tested Lucene queries through their
inherent equals method. As we no longer do that it makes sense to do at least
coarse sanity checking on the generated Lucene query. More such checks are
being added as part of this commit.
Relates to #10217
this method is very confusing and if it's used it's likely the wrong thing
with respect to the actual bound / published address. This change discourages
its use and removes all usage. It's replaced with the actual published address
most of the time.
With #12921 we refactored IndicesModule but we forgot to make sure we create IndicesQueriesRegistry once. IndicesQueriesModule used to do `bind(IndicesQueriesRegistry.class).asEagerSingleton();` otherwise we get multiple instances of the registry. This needs to be ported to the IndicesModule.
It was already disabled, but during tests the test framework enabled the query cache by setting the static default query cache. The caching behaviour should be the same in production and in tests.
This moves the `murmur3` field to the `mapper-murmur3` plugin and fixes its
defaults so that values will not be indexed by default, as the only purpose
of this field is to speed up `cardinality` aggregations on high-cardinality
string fields, which only requires doc values.
I also removed the `rehash` option from the `cardinality` aggregation as it
doesn't bring much value (rehashing is cheap) and allowed removing the
coupling between the `cardinality` aggregation and the `murmur3` field.
Close #12874
Some of our next queries to refactor rely on some state taken from the cluster state. That is why we need to mock cluster service and inject an index to it, the index that we simulate the execution of the queries against. The best would be to have multiple indices actually, but that would make our setup a lot more complicated, especially given that IndexQueryParseService is still per index. We might be able to improve that in the future though, for now this is as good as it gets.
The Plugin interface currently contains 6 different methods for
adding modules. Elasticsearch has 3 different levels of injectors,
and for each of those, there are two methods. The first takes no
arguments and returns a collection of class objects to construct. The
second takes a Settings object and returns a collection of module
objects already constructed. The settings argument is unnecessary because
the plugin can already get the settings from its constructor. Removing
that, the only difference between the two versions is returning an
already constructed Module, or a module Class, and there is no reason
the plugin can't construct all their modules themselves.
This change reduces the plugin api down to just 3 methods for adding
modules. Each returns a Collection<Module>. It also removes the
processModule method, which was unnecessary since onModule
implementations fulfill the same requirement. And finally, it renames
the modules() method to nodeModules() so it is clear these are created
once for each node.
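What a plugin implements now looks roughly like this (the plugin and module class names are illustrative):
```java
import java.util.Collection;
import java.util.Collections;
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.common.inject.Module;
import org.elasticsearch.plugins.Plugin;

public class MyPlugin extends Plugin {
    @Override
    public String name() {
        return "my-plugin";
    }

    @Override
    public String description() {
        return "example plugin";
    }

    @Override
    public Collection<Module> nodeModules() {
        // the plugin constructs its own modules; called once per node
        return Collections.<Module>singletonList(new MyPluginModule());
    }

    // hypothetical module, standing in for whatever the plugin binds
    static class MyPluginModule extends AbstractModule {
        @Override
        protected void configure() {}
    }
}
```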
java.net.preferIPv6Addresses is a better choice. preferIPv4Stack is a nuclear option
and you just won't even bind to any IPv6 addresses. This reduces confusion.
The restore portion of some snapshot/restore tests is failing randomly due to #9421. This change suspends rebalance during snapshot/restore operations until #9421 is fixed.
Closes #12855
Plugins can provide additional settings to be added to the settings
provided in elasticsearch.yml. However, if two different plugins supply
the same setting key, the last one to be loaded wins, which is
indeterminate. This change enforces plugins cannot have conflicting
settings, at startup time. As a followup, we should do this when
installing plugins as well, to give earlier errors when two plugins
collide.
Custom repository types are registered through the RepositoriesModule.
Later, when a specific repository type is used, the RepositoryModule
is installed, which in turn would spawn the module that was
registered for that repository type. However, a module is not needed
here. Each repository type has two associated classes, a Repository and
an IndexShardRepository.
This change makes the registration method for custom repository
types take both of these classes, instead of a module.
See #12783.
When elasticsearch is configured by interface (or default: loopback interfaces),
bind to all addresses on the interface rather than an arbitrary one.
If the publish address is not specified, default it from the bound addresses
based on the following sort ordering:
* ipv4/ipv6 (java.net.preferIPv4Stack, defaults to true)
* ordinary addresses
* site-local addresses
* link local addresses
* loopback addresses
Only one address is published, and multicast is still always over ipv4: these
need to be future improvements.
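A rough sketch of the publish-address sort ordering above as a scoring function (lower score wins; the real implementation lives in the network utilities and may differ):
```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class PublishAddressSketch {
    // lower score = preferred publish address
    static int preferenceScore(InetAddress address, boolean preferIPv4) {
        int score = 0;
        if ((address instanceof Inet6Address) == preferIPv4) {
            score += 16; // wrong family for the configured preference
        }
        if (address.isLoopbackAddress()) {
            score += 8;  // least preferred
        } else if (address.isLinkLocalAddress()) {
            score += 4;
        } else if (address.isSiteLocalAddress()) {
            score += 2;  // ordinary addresses keep the lowest score
        }
        return score;
    }
}
```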
Closes #12906
Closes #12915
Squashed commit of the following:
commit 7e60833312f329a5749f9a256b9c1331a956d98f
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 14:45:33 2015 -0400
fix java 7 compilation oops
commit c7b9f3a42058beb061b05c6dd67fd91477fd258a
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 14:24:16 2015 -0400
Cleanup/fix logic around custom resolvers
commit bd7065f1936e14a29c9eb8fe4ecab0ce512ac08e
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 13:29:42 2015 -0400
Add some unit tests for utility methods
commit 0faf71cb0ee9a45462d58af3d1bf214e8a79347c
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 12:11:48 2015 -0400
localhost all the way down
commit e198bb2bc0d1673288b96e07e6e6ad842179978c
Merge: b55d092 b93a75f
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 12:05:02 2015 -0400
Merge branch 'master' into network_cleanup
commit b55d092811d7832bae579c5586e171e9cc1ebe9d
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 12:03:03 2015 -0400
fix docs, fix another bug in multicast (publish host = bad here!)
commit 88c462eb302b30a82585f95413927a5cbb7d54c4
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 11:50:49 2015 -0400
remove nocommit
commit 89547d7b10d68b23d7f24362e1f4782f5e1ca03c
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 11:49:35 2015 -0400
fix http too
commit 9b9413aca8a3f6397b5031831f910791b685e5be
Author: Robert Muir <rmuir@apache.org>
Date: Mon Aug 17 11:06:02 2015 -0400
Fix transport / interface code
Next up: multicast and then http
EsExecutorsTests had a test that was failing spuriously due to threadpools
being threadpools. This weakens the assertions that the test makes to what
should always be true.
It might turn out to be useful to have the actual commit hash of the version we are
looking for if our download manager can just redirect to the right staging repository.
This is purely for maintenance reasons since it is easier to see if we can drop
certain staging urls if we have the version next to the hash.
I also removed the gpg passphrase from the example URL since it's better to get prompted?
This is caused by sending the same file to the chunk handler with offset
`0` which in-turn opens a new outputstream and waits for bytes. But the next round
will send 0 bytes again with offset 0. This commit adds some checks / validators that those
settings are positive byte values and fixes the RecoveryStatus to throw an IAE if the same file
is opened twice.
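A sketch of the added validation (setting and type names assumed):
```java
// hedged sketch: a non-positive chunk size is what allowed a 0-byte chunk
// at offset 0 to be sent, and re-sent, forever
void validateChunkSize(ByteSizeValue chunkSize) {
    if (chunkSize.bytes() <= 0) {
        throw new IllegalArgumentException(
                "chunk size must be a positive byte value but was [" + chunkSize + "]");
    }
}
```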
What is the problem we are trying to solve ?
===========================================
When we are doing aggregations against a field name as shown in
https://github.com/HarishAtGitHub/elasticsearch-tester/blob/master/12135.py#L37-L46
search = {
"aggs": {
"NAME": {
"terms": {
"field": "ip_str",
"size": 10
}
}
}
}
and when the field "ip_str" has values of different types in different indices,
say one is of StringTerms type and the other is of IP (LongTerms) type, then
the aggregation fails as the types do not match (incompatible).
The failure throws a class cast exception as follows:
{
"error": {
"root_cause": [],
"type": "reduce_search_phase_exception",
"reason": "[reduce] ",
"phase": "query",
"grouped": true,
"failed_shards": [],
"caused_by": {
"type": "class_cast_exception",
"reason": "org.elasticsearch.search.aggregations.bucket.terms.LongTerms$Bucket cannot be cast to org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket"
}
},
"status": 503
}
which is hard to understand. The user cannot infer anything about the cause of the problem or
what to do from seeing the class cast exception.
What can be the possible solution ?
===================================
Make the exception more readable by showing the user the root cause of the problem, so that they can
understand which area actually caused the problem and take the necessary steps.
Code Analysis
=============
Debugging code shows that:
the query /{indices}/_search?search_type=count involves two phases
1) search phase
***************
searchService.sendExecuteQuery(...) [Ref: TransportSearchCountAction]
what happens here ?
the phase 1, which is the search phase goes without error.
In this phase the shards for the given indexes are collected and the search is done on all asynchronously
and finally collected in the variable "firstResults" and given to the merge phase.
[Flow: .... -> TransportSearchTypeAction -> method performFirstPhase]
2) merge phase
**************
searchPhaseController.merge(...firstResults...) [Ref: TransportSearchCountAction]
what happens here ?
the "firstresults" QuerySearchResults are now to be aggregated and combined.
[Flow: SearchPhaseController.merge(...) -> ..... -> InternalTerms.doReduce(...)]
Phase 1, the search phase, goes without error.
The problem comes in phase 2, which is the merge phase.
Now the individual term buckets are available.
As per the test case , there are two indices cast and cast2, so by default 10 shards.
cast has ip_str of type StringTerms
cast2 has ip_str of type ip which is actually LongTerms
so here two types of Buckets exist. StringTerms_Bucket and LongTerms_Bucket.
Now the aggregation is to be put inside the BucketPriorityQueue(size 2: as out of 10, 2 has hits) finally.
(docs of PriorityQueue: https://lucene.apache.org/core/4_4_0/core/org/apache/lucene/util/PriorityQueue.html#insertWithOverflow(T))
Now first the LongTerms$Bucket is put inside.
then the StringTerms$Bucket is to be put in.
This is the area where the exception is thrown. What happens is, when adding the StringTerms$Bucket, it has to
go through the code "lessThan(element, heap[1])"
which finally calls
---------------------------------------------------------------------------------------------
| StringTerms$Bucket.compareTerms(other) <---------------- Area of exception |
| |
--------------------------------------------------------------------------------------------
where when comparing one to other a type cast is done and it fails as StringTerms$Bucket and LongTerms$Bucket are
incompatible.
Approach to solve:
==================
The best way is to make the user understand that the problem is when reducing/merging/aggregating the buckets which came as a result of
querying different shards, so that they can infer that the problem is because the values of the fields are of different types.
The message is also user friendly and much better than the indecipherable ClassCastException.
The only place to infer correctly that the aggregation has failed is in the place where aggregations take place.
so
at InternalTerms.java -> (BucketPriorityQueue)ordered.insertWithOverflow(b);
so here I can throw AggregationExecutionException saying it is because the buckets are of different
types.
But when can I infer at this point that the failure is due to mismatch of types of buckets ???
it can be possible only if at this point it is informed that the problem which occurred deep inside
is due to buckets that were incomparable.
so from just a classCastException we cannot make such a pointed exact inference, because
a class cast exception can be due to a number of scenarios and at a number of places.
so unless we inform the exact problem to InternalTerms it will not be able to infer properly.
so we translate the classCastException at the compareTerms function itself into an IncomparableTermBucktesTypeException.
This is the best place to infer classCastException as this the place which generated the exception.
Best inference of exceptions can be done only at the source/origin of the exception.
so passing IncomparableTermBucktesTypeException to InternalTerms will make it infer and conclude why the
aggregation failed and give the best information to the user.
Close #12821
The IndicesModule was made up of two submodules, one which
handled registering queries, and the other for registering
hunspell dictionaries. This change moves those into
IndicesModule. It also adds a new extension point type,
InstanceMap. This is simply a Map<K,V>, where K and V are
actual objects, not classes like most other extension points.
I also added a test method to help testing instance map extensions.
This was particularly painful because of how guice binds the key
and value as separate bindings, and then reconstitutes them
into a Map at injection time. In order to gain access to the
object which links the key and value, I had to tweak our
guice copy to not use an anonymous inner class for the Provider.
Note that I also renamed the existing extension point types, since
they were very redundant. For example, ExtensionPoint.MapExtensionPoint
is now ExtensionPoint.ClassMap.
See #12783.
Removed the shard variable dependency from processFirstPhaseResults as the shard is no longer
needed here. It only deals with the results obtained from the synchronous search on each shard.
The ClusterModule contained a couple submodules. This moves the
functionality from those modules into ClusterModule. Two of those
had to do with DynamicSettings. This change also cleans up
how DynamicSettings are built, and enforces they are added, with
validators, in ClusterModule.
See #12783.
Before:
PluginInfo{name='cloud-aws', description='The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.', site=false, jvm=true, classname=org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin, isolated=true, version='2.1.0-SNAPSHOT'}
After:
- Plugin information:
Name: cloud-aws
Description: The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.
Site: false
Version: 2.1.0-SNAPSHOT
JVM: true
* Classname: org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin
* Isolated: true
Previously settings specified in index templates were not validated upon
template creation. Creating an index from an index template with invalid
settings could lead to cluster stability issues because creation of such
indexes would bypass index settings validation.
This commit adds validation of settings specified in index templates at
template creation time. This works by routing the index template
settings through the index settings validation mechanism.
Closes #12865
Users with IPv6 preferred over IPv4 may have `localhost` resolve to
`::1` instead of `127.0.0.1`, so we should be explicit so they don't run
into issues.
Today on a failure the reproduce line printed out by the test framework
will build all projects and might fail if the test class is not present.
This commit adds a reactor filter to the reproduction line to ensure
unrelated projects are skipped.
Closes #12838
In order to match the paths of official plugins, we need to fix
the broken test by removing the elasticsearch prefix from the official
plugin names before testing.
This PR adds a timezone field to ValueParser.DateMath that is
set to UTC by default but can be set using the existing constructors.
This makes it possible for extended bounds setting in DateHistogram
to also use date math expressions that e.g. round by day and apply
this rounding in the time zone specified in the date histogram
aggregation request.
Closes #12278
In order to create releases without actually changing the version
as part of a commit, we also need to reflect the path of the potentially
changing S3 repo.
This method has multiple modes of resolving config files by
first looking in the config directory, then on the classpath,
and finally by prefixing with "config/" on the classpath.
Most of the places taking advantage of this were tests, so they
did not have to setup a real home dir with config. The only place
that was really relying on it was the code which loads names.txt
to randomly choose a node name.
This change fixes test to setup fake home dirs with their config
files. It also makes the logic for finding names.txt explicit:
look in config dir, and if it doesn't exist, load /config/names.txt
from the classpath.
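A minimal sketch of the now-explicit lookup (helper and class names assumed):
```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class NodeNamesSketch {
    static InputStream openNames(Path configDir) throws IOException {
        Path namesFile = configDir.resolve("names.txt");
        if (Files.exists(namesFile)) {
            return Files.newInputStream(namesFile);  // 1. look in the config dir
        }
        // 2. fall back to the bundled copy on the classpath
        return NodeNamesSketch.class.getResourceAsStream("/config/names.txt");
    }
}
```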
TermsQueryParser still parses those values although deprecated. These need to be present in the java api as well to get ready for the query refactoring, where the builders are the intermediate query format that we parse our json queries into. Whatever the parser supports needs to be supported by the builder as well.
Closes #12870
TermsQueryParser doesn't support the cache field anymore, so if it gets set through the java api, the subsequent parsing of that query will throw an error
Relates to #12870
Refactored a part out of the release script, so the user can
change the version locally as well as move the documentation
and change the Version.java
The background of this change is to have a very simple release
process that puts stuff into a staging environment, so the beta
release can be tested, before it is officially released.
This means the build_release script can be removed soon.
Settings currently has a classloader member, which any user (plugin
or core ES code) can access to load classes/resources. This is extremely
error prone as setting the classloader on the Settings instance is a
public method. Furthermore, it is not really necessary. Classes that
need resources should load resources using normal means
(getClass().getResourceAsStream). Those that need classes
should use Class.forName, which will load the class with the
same classloader as the calling class. This means, in the few
places where classes are loaded by string name, they will use
the appropriate loader: either the default classloader which loads
core ES code, or a child classloader for each plugin.
This change removes the classloader member from Settings, as
well as other classloader related uses (except for a handful
of cases which must use a classloader, at least for now).
We previously used something like Class.forName to load mock classes,
where tests would set a setting that was *supposed* to only be used by
tests. This change makes these impls package private so that only tests
can change out these implementations, through test plugins.
closes #12784
No need to cache this query since it's cheap and not reused.
If we cache it, it can cause assertions to be tripped since this
method is executed during postRecovery phase and might still run while
nodes are shutdown in tests.
Leftover after adding the query builder type to QueryParser. getBuilderPrototype is better typed now and doesn't require an unchecked cast anymore. Also fixed some Tuple usages without types.
This commit tries to add some infrastructure to streamline how extension
points should be structured. It's a simple approach with 4 implementations
for `highlighter`, `suggester`, `allocation_decider` and `shards_allocator`.
It simplifies adding new extension points and forces registering classes instead
of strings.
When parsing inner array of filters, AndQueryParser seems to
check for correct "filters" field name but then does the same
kind of operation in the `else` branch of the statement. This
seems like it can be removed.
Build fails with maven 3.3.1 and 3.3.3. To reproduce, install one of the 3.3.x versions of maven and run `mvn clean verify` in the root directory of the project. The build will fail in the QA: Smoke Test Shaded Jar module with the following error:
```
Started J0 PID(99979@flea.local).
Suite: org.elasticsearch.shaded.test.ShadedIT
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testJodaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.06s | ShadedIT.testJodaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:3A9404F1F69FD80]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testGuavaIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testGuavaIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:C2502FD54D83433D]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: reproduce with: ant test -Dtestcase=ShadedIT -Dtests.method=testjsr166eIsNotOnTheCP -Dtests.seed=2F4D23A7462CF921 -Dtests.locale= -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
FAILURE 0.01s | ShadedIT.testjsr166eIsNotOnTheCP <<<
> Throwable #1: junit.framework.AssertionFailedError: Expected an exception but the test passed: java.lang.ClassNotFoundException
> at __randomizedtesting.SeedInfo.seed([2F4D23A7462CF921:35593286F4269392]:0)
> at junit.framework.Assert.fail(Assert.java:57)
> at java.lang.Thread.run(Thread.java:745)
2> NOTE: leaving temporary files on disk at: /Users/Shared/Jenkins/Home/workspace/elasticsearch-master/qa/smoke-test-shaded/target/J0/temp/org.elasticsearch.shaded.test.ShadedIT_2F4D23A7462CF921-001
2> NOTE: test params are: codec=CheapBastard, sim=DefaultSimilarity, locale=, timezone=Asia/Baku
2> NOTE: Mac OS X 10.10.4 x86_64/Oracle Corporation 1.8.0_25 (64-bit)/cpus=8,threads=1,free=482137936,total=514850816
2> NOTE: All tests run in this JVM: [ShadedIT]
Completed [1/1] in 6.61s, 5 tests, 3 failures <<< FAILURES!
Tests with failures:
- org.elasticsearch.shaded.test.ShadedIT.testJodaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testGuavaIsNotOnTheCP
- org.elasticsearch.shaded.test.ShadedIT.testjsr166eIsNotOnTheCP
```
Please note that the build doesn't fail with maven 3.2.x and it doesn't fail if the mvn command is executed inside the qa/smoke-test-shaded directory. The error above can only be observed when the build is started from the root directory.
The reason is the shaded version, which depends on elasticsearch core.
When Maven builds only the module, elasticsearch core is not added to the dependency tree.
```sh
mvn dependency:tree -pl :smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO] \- org.apache.ant:ant:jar:1.8.2:test
```
But if shaded plugin is involved during the build, it modifies the `projectArtifactMap`:
```sh
mvn dependency:tree -pl org.elasticsearch.distribution.shaded:elasticsearch,:smoke-test-shaded
```
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ smoke-test-shaded ---
[INFO] org.elasticsearch.qa:smoke-test-shaded:jar:2.0.0-beta1-SNAPSHOT
[INFO] +- org.elasticsearch.distribution.shaded:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | \- org.elasticsearch:elasticsearch:jar:2.0.0-beta1-SNAPSHOT:compile
[INFO] | +- org.apache.lucene:lucene-backward-codecs:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-analyzers-common:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queries:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-memory:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-highlighter:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-queryparser:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-sandbox:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-suggest:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-misc:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-join:jar:5.2.1:compile
[INFO] | | \- org.apache.lucene:lucene-grouping:jar:5.2.1:compile
[INFO] | +- org.apache.lucene:lucene-spatial:jar:5.2.1:compile
[INFO] | | \- com.spatial4j:spatial4j:jar:0.4.1:compile
[INFO] | +- com.google.guava:guava:jar:18.0:compile
[INFO] | +- com.carrotsearch:hppc:jar:0.7.1:compile
[INFO] | +- joda-time:joda-time:jar:2.8:compile
[INFO] | +- org.joda:joda-convert:jar:1.2:compile
[INFO] | +- com.fasterxml.jackson.core:jackson-core:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-smile:jar:2.5.3:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:jar:2.5.3:compile
[INFO] | | \- org.yaml:snakeyaml:jar:1.12:compile
[INFO] | +- com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.5.3:compile
[INFO] | +- io.netty:netty:jar:3.10.3.Final:compile
[INFO] | +- com.ning:compress-lzf:jar:1.0.2:compile
[INFO] | +- com.tdunning:t-digest:jar:3.0:compile
[INFO] | +- org.hdrhistogram:HdrHistogram:jar:2.1.6:compile
[INFO] | +- org.apache.commons:commons-lang3:jar:3.3.2:compile
[INFO] | +- commons-cli:commons-cli:jar:1.3.1:compile
[INFO] | \- com.twitter:jsr166e:jar:1.1.0:compile
[INFO] +- org.hamcrest:hamcrest-all:jar:1.3:test
[INFO] \- org.apache.lucene:lucene-test-framework:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-codecs:jar:5.2.1:test
[INFO] +- org.apache.lucene:lucene-core:jar:5.2.1:compile
[INFO] +- com.carrotsearch.randomizedtesting:randomizedtesting-runner:jar:2.1.16:test
[INFO] +- junit:junit:jar:4.11:test
[INFO] \- org.apache.ant:ant:jar:1.8.2:test
```
A fix could consist of fixing something on the Maven side. Probably something changed in a recent version and introduced this "issue", but it might not really be an issue, more of a fix.
There are two workarounds:
1) manually exclude elasticsearch core from the shaded version in the smoke-test-shaded module and manually add each lucene lib needed by elasticsearch
2) add a new `elasticsearch-lucene` (lucene) POM module which simply declares all needed lucene libs in subprojects (such as the smoke tester one).
I chose the latter.
Closes #12791.
This allows `path.shared_data` to be added to the security manager while
still allowing a custom `data_path` for indices using shadow replicas.
For example, configuring `path.shared_data: /tmp/foo`, then creating an
index with:
```
POST /myindex
{
"index": {
"number_of_shards": 1,
"number_of_replicas": 1,
"data_path": "/tmp/foo/bar/baz",
"shadow_replicas": true
}
}
```
The index will then reside in `/tmp/foo/bar/baz`.
`path.shared_data` defaults to `null` if not specified.
Resolves #12714
Relates to #11065
The upper bound must be 0-based since we are corrupting an offset into
the file; otherwise it can be the 1-based length of the file, which results in an
uncorrupted file.
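A sketch of the off-by-one (the random helper shown is illustrative):
```java
import java.util.Random;

// the bound passed to nextInt is exclusive, so offsets range over
// 0 .. fileLength - 1; using fileLength as an *inclusive* bound could pick
// an offset past the last byte and leave the file uncorrupted
long pickCorruptionOffset(Random random, int fileLength) {
    return random.nextInt(fileLength);
}
```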
There were two submodules of AllocationModule. This combines them into a
single module, adds a base test case for module testing, and adds back
the ability for plugins to provide custom ShardsAllocators.
closes #12781
Instead of logging the entire `_source` in the indexing slowlog we log by
default just the first 1000 characters - this is controlled by the
`index.indexing.slowlog.source` setting and can be set to `true` to log the
whole `_source`, `false` to log none of it, and a number to log at most that
many characters.
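A sketch of how the setting is interpreted (the parsing shape is assumed):
```java
// hedged sketch: true = log everything, false = log nothing,
// a number = log at most that many characters (default 1000)
static String sourceToLog(String source, String settingValue) {
    final int maxChars;
    if (settingValue == null) {
        maxChars = 1000;
    } else if ("true".equals(settingValue)) {
        maxChars = Integer.MAX_VALUE;
    } else if ("false".equals(settingValue)) {
        maxChars = 0;
    } else {
        maxChars = Integer.parseInt(settingValue);
    }
    return source.length() <= maxChars ? source : source.substring(0, maxChars);
}
```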
Closes #4485
In the query refactoring branch we've been introducing getter methods for every bit that you can set on each query. The naming is not very consistent at the moment. The applied naming conventions are the following:
- `innerQuery()` for any inner query, when there's only one of them
- when there's more than one inner query, use a prefix that identifies which query it is, and the `query` suffix (e.g. `positiveQuery` or `littleQuery`)
- `fieldName()` for the name of the field to be queried
- `value()` for the actual query
These changes don't break bw comp given that these getters were all introduced with the query refactoring which hasn't been released yet. Also we are modifying getters that don't have a corresponding setter, as the fields are final, hence we are not breaking consistency between getter and setter.
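In code, the naming convention above looks roughly like this (the builder shown is purely illustrative):
```java
public class ExampleQueryBuilder {
    private final String fieldName;
    private final Object value;

    public ExampleQueryBuilder(String fieldName, Object value) {
        this.fieldName = fieldName;
        this.value = value;
    }

    public String fieldName() {  // name of the field to be queried
        return fieldName;
    }

    public Object value() {      // the actual query value
        return value;
    }
}
```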
Closes #12800
Removed the attempt to parse `field` rather than `fields`, and the attempted support of the following syntax:
```
{
"simple_query_string": {
"body" : {
"query": "foo bar"
}
}
}
```
Both of these syntaxes were undocumented, untested and not working.
Added a test for the case when `fields` is not specified, in which the default field is queried.
Closes #12794
Closes #12798
This commit includes the stacktrace into the structured exception rendering
to ensure we can find the reason / cause for certain things quicker. This
is enabled by default and is very verbose. Users can disable it via `rest.exception.stacktrace.skip = true|false`
Closes #12239
We have a way to allow a plugin to specify additional settings. These
settings should only be applied if they do not already exist in the
node settings.
The equalTo logic of ShardRouting doesn't take version and unassignedInfo into account when comparing shard routings. Since the cluster state diff relies on equals to detect the changes that need to be sent to the rest of the cluster, this omission might lead to changes not being properly propagated to other nodes in the cluster.
Closes #12387
In order to test the way plugins with configuration are installed and removed
we need a plugin with configuration in the repository. The simplest way to
get one is to make an "example" plugin.
In the process of making this example it became apparent that cat actions were
difficult to create outside of the org.elasticsearch.rest.action.cat package
because key methods in AbstractCatAction were package private. This makes them
protected and uses them to create the example configured plugin.
Relates to #12717 but is only one step of many to close it.
the default classloader. It had all kinds of leniency in how the
classname was found, and simply cannot work with plugins having isolated
classloaders.
This change removes that method. Some of the uses of it were for custom
extension points, like custom repository or discovery types. A lot were
just there to plugin mock implementations for tests. For the settings
that were legitimate, all now support plugins adding the given setting
via onModule. For those that were specific to tests for mocks, they now
use Classes.loadClass (a helper around Class.forName). This is a
temporary measure until (in a future PR) tests can change the
implementation via package private statics.
I also removed a number of unnecessary intermediate modules, added a
"jvm-example" plugin that can be filled in in the future as a smoke test
for breaking plugins, and gave some documentation to the "spawn" modules
interface.
closes #12643
closes #12656
This was a straight up bug found in #12753. If only one type existed,
the compatibility check for a new type was not strict, so changes to
an updateable setting like search_analyzer got through (but only
partially). This change fixes the check and adds tests (which were
previously a TODO).
This also fixes a bug in dynamic field creation which wouldn't copy
fielddata settings when duplicating a pre-existing field with the
same name.
closes #12753
This PR prevents setting timezone on ValueFormatter.DateTime. Instead
the timezone information needed when printing buckets key-as-string
information is provided at constrution time of the ValueFormatter, making
sure we don't overwrite any constants. This, however, made it necessary to
be able to access the timezone information when resolving the format
in ValueSourceParser, so the `time_zone` parameter is now parsed alongside
the `format` parameter in ValueSourceParser rather than in DateHistogramParser.
Closes #12531
Today we accept store types like `nio_fs`, `nioFs`, `niofs` etc.
This commit removes the leniency and only accepts plain values without an underscore.
Yet, this commit also has a BWC layer that upgrades existing indices to the new version.
Affected enums are:
* `nio_fs`
* `mmap_fs`
* `simple_fs`
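A sketch of the strict parsing plus the BWC upgrade path (names assumed):
```java
import java.util.Locale;

// hedged sketch: new indices must use the plain form ("niofs", "mmapfs",
// "simplefs"); indices created before this version get their lenient
// spellings ("nio_fs", "nioFs", ...) upgraded to the canonical one
static String parseStoreType(String value, boolean createdBeforeThisVersion) {
    if (createdBeforeThisVersion) {
        value = value.toLowerCase(Locale.ROOT).replace("_", "");
    }
    switch (value) {
        case "niofs":
        case "mmapfs":
        case "simplefs":
            return value;
        default:
            throw new IllegalArgumentException("unknown store type [" + value + "]");
    }
}
```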
If a URL specified with --url on the command line cannot be reached,
the plugin manager tries URLs at download.elastic.co automatically.
This can lead to downloading the wrong plugins and also potentially exposes
the name of an internal plugin to the download service.
This fix ensures that the plugin manager simply aborts if the specified
URL cannot be downloaded.
Currently there is a flag in the QueryParseContext that keeps track
of whether an inner query sits inside a filter and should therefore
produce an unscored lucene query. This is done in the
parseInnerFilter...() methods that are called in the fromXContent()
methods or the parse() methods we haven't refactored yet. This needs
to move to the toQuery() method in the refactored builders, since the
query builders themselves have no information about the parent query
they might be nested in.
This PR moves the isFilter flag from the QueryParseContext to the
recently introduced QueryShardContext. The parseInnerFilter... methods
need to stay in the QueryParseContext for now, but they already delegate
to the flag that is kept in QueryShardContext. For refactored queries
(like BoolQueryBuilder) references to isFilter() are moved from the
parser to the corresponding builder. Builders where the inner query
was previously parsed using parseInnerFilter...() now use a newly
introduced toFilter(shardContext) method that produces the nested lucene
query with the filter context flag switched on.
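A sketch of the new method on the builder base class (exact names like `setIsFilter` are assumptions):
```java
// hedged sketch: build the nested lucene query with the filter flag switched on
protected Query toFilter(QueryShardContext context) throws IOException {
    final boolean previous = context.isFilter();
    try {
        context.setIsFilter(true);      // inner query must not be scored
        return toQuery(context);
    } finally {
        context.setIsFilter(previous);  // restore for sibling queries
    }
}
```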
Closes #12731
Moved the license checker config into the parent pom, and overrode
the license dir/target-to-check in distributions/pom.
Disabled the license checker explicitly for projects which run integration
tests but have no licenses dir:
* core
* distribution
* qa
* plugins/delete-by-query
* plugins/mapper-size
* plugins/site-example
Closes#12752 Closes#12754
This commit adds basic support to track the number of times scripts are
compiled and compiled scripts are evicted from the script cache. These
statistics are tracked at the node level.
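For illustration, node-level counters of this kind can be as simple as the following sketch (names are illustrative, not the actual ScriptService internals):
```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative: per-node counters for script compilations and cache evictions.
class ScriptStats {
    private final AtomicLong compilations = new AtomicLong();
    private final AtomicLong cacheEvictions = new AtomicLong();

    void onCompilation() { compilations.incrementAndGet(); }
    void onCacheEviction() { cacheEvictions.incrementAndGet(); }

    long compilations() { return compilations.get(); }
    long cacheEvictions() { return cacheEvictions.get(); }
}
```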
Closes#12673
ClusterState has 3 different methods to access RoutingNodes:
* #routingNodes() - mutable version
* #getRoutingNodes() - delegates to #getReadOnlyRoutingNodes()
* #getReadOnlyRoutingNodes() - its docs say `NOTE, the routing nodes are mutable, use them just for read operations`
The latter also reuses the instance that it creates. This has several problems besides the obvious:
* creating RoutingNodes is costly and should be done only if really needed, ie. use the cached version as much as possible
* the common case is read-only but all kinds of things are called
* the mutable version is only needed in one place and should only be used in the AllocationService
* RoutingNodes can freeze its ShardRoutings but doesn't
* RoutingNodes should check if it's read-only or not
This commit fixes all these problems and special-cases the mutable case such that all accesses via ClusterState#getRoutingNodes()
are read-only and RoutingNodes enforces this.
This is one of our esoteric metadata mappers so I think we should distribute
it in a plugin rather than in elasticsearch core.
This introduces one limitation: the value of the `_size` parameter is not
retrievable for documents that are only in the transaction log.
The `multi_match` query groups terms that have the same analyzer together and
then applies the boost of the first query in each group. This is not necessary
given that boosts for each term are already applied another way.
Versions can be tricky with Linux vendors and such. To help debug any possible issues, we should output a better version string.
Today:
```
[elasticsearch] java.lang.RuntimeException: Java version: 1.7.0_55 suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
[elasticsearch] Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
[elasticsearch] If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
[elasticsearch] Upgrading is preferred, this workaround will result in degraded performance.
[elasticsearch] at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
[elasticsearch] at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
[elasticsearch] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
With patch:
```
java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_40 [Java HotSpot(TM) 64-Bit Server VM 24.0-b56] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:121)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:28)
```
This commit adds a new API to allow scripts to say whether they need scores.
In practice, only the `expression` script engine makes use of it correctly,
other engines just return `true` since they can't predict whether they'll
need scores. This should make scripted aggregations and `function_query`
faster as we'll now be able to pass needsScores=false to Query.createWeight.
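For illustration, the shape of such an API might look like the following sketch; the interface and class names are hypothetical, not the actual Elasticsearch types:
```java
// Hypothetical sketch: a compiled script reports whether it reads the score.
interface ScoredScript {
    boolean needsScores();
}

// Engines that cannot analyze their scripts must stay conservative:
final class OpaqueScript implements ScoredScript {
    @Override
    public boolean needsScores() {
        return true; // cannot prove the score is unused, so assume it is needed
    }
}
```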
Adding a named query that is null can lead to a NullPointerException
when copying the named queries. This is due to an implementation detail
in QueryParseContext.copyNamedQueries. In particular, this method uses
com.google.common.collect.ImmutableMap.copyOf. A documented requirement
of ImmutableMap is that no entry has a null key or a null value.
Therefore, we should not add such queries to the namedQueries
map. This will not change any behavior, since Map.get returns null if no
entry with the given key exists anyway.
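A minimal sketch of the guard this implies, assuming a namedQueries map field (names illustrative):
```java
import java.util.HashMap;
import java.util.Map;

// Illustrative: skip null queries so ImmutableMap.copyOf never sees a null value.
class NamedQueries {
    private final Map<String, Object> namedQueries = new HashMap<>();

    void addNamedQuery(String name, Object query) {
        if (query != null) {
            namedQueries.put(name, query);
        }
        // Map.get returns null for missing keys, so callers see no behavior change.
    }
}
```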
Closes#12683
This change improves the `function_score` query to not compute scores at all
when they are not needed, and to not compute scores on the underlying query
when the combine function is to replace the score with the scores of the
functions.
This commit makes it possible to serialize arbitrary objects by having them extend Writeable. When reading them though, we need to be able to identify which object we have to create, based on its name. This is useful for queries once we move to parsing on the coordinating node, as well as with aggregations and so on.
Introduced a new abstraction called NamedWriteable, which is supported by StreamOutput and StreamInput through writeNamedWriteable and readNamedWriteable methods. Also introduced a NamedWriteableRegistry, where named writeable prototypes need to be registered so that we are able to retrieve the proper instance of the writeable given its name and then de-serialize it by calling readFrom against it.
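A condensed sketch of the prototype pattern described here; the real StreamInput/StreamOutput classes are much richer, so plain DataInput/DataOutput stand in for them:
```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a named, stream-serializable object plus a registry of prototypes.
interface NamedWriteable<T> {
    String getWriteableName();                     // written to the stream before the payload
    T readFrom(DataInput in) throws IOException;   // prototype creates the concrete instance
    void writeTo(DataOutput out) throws IOException;
}

class NamedWriteableRegistry {
    private final Map<String, NamedWriteable<?>> prototypes = new HashMap<>();

    void register(NamedWriteable<?> prototype) {
        prototypes.put(prototype.getWriteableName(), prototype);
    }

    NamedWriteable<?> prototype(String name) {
        NamedWriteable<?> prototype = prototypes.get(name);
        if (prototype == null) {
            throw new IllegalArgumentException("unknown named writeable [" + name + "]");
        }
        return prototype; // caller invokes readFrom to de-serialize the payload
    }
}
```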
Closes#12393
In order to avoid extra reroutes, `RoutingService` should avoid
scheduling a reroute of any shards where the delay is negative. To make
sure that we don't encounter a race condition between the
GatewayAllocator thinking a shard is delayed and RoutingService thinking
it is not, the GatewayAllocator will update the RoutingService with the
last time it checked in order to use a consistent "view" of the delay.
Resolves#12456
Relates to #12515 and #12456
When postFilter generates a token that is identical to the input term
DirectCandidateGenerator should not preFilter this token. If postFilter
and preFilter are the same analyzer instance it would fail with:
"TokenStream contract violation: close() call missing"
This is a forward port of @nomoa's #12670
Today we try to detect if there is an index existing in the directory
and if not we create one. This can be tricky and error prone since we
rely on the filesystem without taking the context into account when the
engine gets created. We know in all situations if the index should be created,
so we can just use this information and rely on the lucene index writer to barf
if we hit a situation where we can't append to an index while we should.
Today we fail to throw / rethrow a recovery exception if it happens during
the finalization of phase 1 and the source files are not affected. Even worse,
this can cause some data loss if the reason for this exception is a failure to
delete a corruption marker or a similar pre-existing corruption, since we continue
with the recovery and mark the target shard as started, which will in turn open
an engine with an empty index.
This makes the output of EsThreadPoolExecutor#toString less pretty, but
we no longer have funky hacks that rely on the specific format of the
toString produced by ThreadPoolExecutor, which isn't part of its API and
could change with any JVM version and break the output.
Improving the toString allows for nicer error reporting. Also cleaned up
the way that EsRejectedExecutionException notices that it was rejected
from a shutdown thread pool. I left javadocs about how it's not 100% correct
but good enough for most uses.
The improved toString on EsThreadPoolExecutor means every one of them needs
a name. In most cases the name to use is obvious. In tests I use the name
of the test method and in real thread pools I use the name of the thread
pool. In non-ThreadPool executors I use the thread's name.
Closes#9732
In order to support the URL notation including a user/pass combination
(like http://user:pass@host/plugin.zip) the auth info needs to be added
manually.
We are currently splitting the parse() method in the query parsers into a
query parsing part (fromXContent(QueryParseContext)) and a new method that
generates the lucene query (toQuery(QueryParseContext)). At some point we
would like to be able to execute these two steps in different phases, one
on the coordinating node and one on the shards.
This PR tries to anticipate this by renaming the existing QueryParseContext
to QueryShardContext to make it clearer that this context object will provide
the information needed to generate the lucene queries on the shard. A new
QueryParseContext is introduced and all the functionality needed for parsing
the XContent in the request is moved there (parser, helper methods for parsing
inner queries).
While the refactoring is not finished, the new QueryShardContext will expose the
QueryParseContext via the parseContext() method. Also, for the time being the
QueryParseContext will contain a link back to the QueryShardContext it wraps, which
will eventually be deleted once the refactoring is done.
Closes#12527
Relates to #10217
Today this is "unofficial" as conf/scripts, but some people
want to share scripts across different nodes and so on. Because
they cannot configure it, they are forced to use dirty hacks
like symbolic links, which isn't going to work: we aren't going
to recursively scan conf/ and add permissions to all link targets
underneath it, that's crazy.
I really hate adding yet another configuration knob here, but
users resorting to using symlinks are going to be frustrated,
and do things in a more insecure way.
The URL to download the main elasticsearch plugins did not match
what the S3 wagon is supposed to write to.
In addition this PR adds support for snapshots to access the
snapshot S3 bucket, so we can possibly download snapshot versions
of plugins.
The format of the URLs stems from #12270.
Closes#12632
Conflicting mappings that were allowed before v2.0 can cause runaway shard failures on upgrade. This commit adds a check that prevents a cluster from starting if it contains such indices, and also prevents restoring such indices from a snapshot into an already running cluster.
Closes#11857
Currently when we delete files belonging to deleted snapshots, we issue one delete command at a time to the underlying snapshot store. Some repositories can benefit from bulk deletes of multiple files.
Closes#12533
This method syncs the translog unless it's already synced. If the engine
is already closed we are guaranteed to be synced already, so we can just
ignore this exception.
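A minimal sketch of that pattern, assuming a translog field (illustrative, not the exact engine code):
```java
// Illustrative only: the method name is an assumption; translog and
// EngineClosedException refer to the classes described above.
void syncTranslogSafely() throws IOException {
    try {
        translog.sync();
    } catch (EngineClosedException e) {
        // engine already closed => translog was synced during close; safe to ignore
    }
}
```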
Closes#12603
This change was added recently and uses the default timezone for the creation
date on CAT endpoints. We should be consistent and use UTC across the board.
This commit adds #getDefaultTimezone() to the forbidden APIs and fixes the REST tests.
Relates to #11688
The Streams.copyTo(String|Bytes)FromClasspath() methods resolve resources using the org.elasticsearch.io.Streams classloader. This is fine in elasticsearch core and when running tests, but if used in a plugin this can lead to FileNotFoundExceptions at runtime because plugins are loaded in a dedicated classloader.
Most of the abstract base test classes we have were previously @Ignored.
However, there were also some other tests ignored. Having two ways to
quiet tests is confusing, and clearly it has caused some tests
to get lost in the fold.
This change moves all base test classes to use the "TestCase" suffix,
which is not picked up by the test class name pattern. It also removes
@Ignore from (almost) all tests, and adds it to forbidden apis.
And since we were renaming, I shortened base test class names to use
"ES" instead of "Elasticsearch". I type this a lot of times a day,
and I have heard others express a similar desire for a shorter name.
closes#10659
Now that integ tests are moved into `mvn verify`, we don't really have
a need for @Slow, and especially not @Integration. This removes
uses of the first, and completely removes uses of the latter.
Tasks can be registered with a timeout, which runs as a task in a separate
threadpool. The idea is that the timeout runner cancels the main task when
the time is out, and the timeout runner is cancelled when the main task
starts executing. However, the following statement:
```java
timeoutFuture = timer.schedule(new Runnable() {
@Override
public void run() {
if (remove(TieBreakingPrioritizedRunnable.this)) {
runAndClean(timeoutCallback);
}
}
}, timeValue.nanos(), TimeUnit.NANOSECONDS);
```
is not atomic: the removal task is first started, and then the (volatile)
variable is assigned. As a consequence, there is a short window that allows
a timeout task to wait until the time is out even if the task is already
completed.
See http://build-us-00.elastic.co/job/es_core_17_centos/496/ for an example of
such a failure.
Previously only the first aggregation in a buckets_path was checked to make sure the aggregation existed. Now the whole path is checked to ensure an aggregation exists at each element in the buckets_path.
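A rough sketch of what validating every path element involves; the AggregatorLookup interface here is a hypothetical stand-in for the real aggregation tree:
```java
import java.util.List;

// Hypothetical stand-in: resolves a sub-aggregation by name, null if absent.
interface AggregatorLookup {
    AggregatorLookup subAggregation(String name);
}

class BucketsPathValidator {
    // Illustrative: fail on the first buckets_path element with no matching aggregation.
    static void validate(AggregatorLookup root, List<String> path) {
        AggregatorLookup current = root;
        for (String element : path) {
            AggregatorLookup next = current.subAggregation(element);
            if (next == null) {
                throw new IllegalArgumentException(
                        "no aggregation found for buckets_path element [" + element + "]");
            }
            current = next;
        }
    }
}
```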
Closes#12360
It's not going to work: it's blocked by security policy
and will just add a confusing SecurityException to the mix, and
bogusly give an exit status of 0 when in fact something bad happened.
Finally, if ES can't start up, it is a serious problem; there is
no sense in hiding the reason why: deliver the full stack trace.
The repository verification process should create a subdirectory to make sure we check permission of newly created directories in case elasticsearch processes on different nodes are running using different uids and creating blobs with incompatible permissions.
Closes#11611
Previously we issued a reroute when a node went over the high watermark
in order to move shards away from the node. This change tracks nodes
that have previously been over the high or low watermarks and issues a
reroute when the node goes back underneath the watermark.
This allows shards that may be unassigned to be assigned back to a node
that was previously over the low watermark but no longer is.
Resolves#12422
As Windows has different line endings and this has
already been fixed by another test, the method has been
moved into CliToolTestCase.
In addition one test has been removed, as it was redundant.
In order to ensure we have the same experience across operating systems
and shells, this commit uses the java CLI parser instead of the shell
getopt parsing to parse arguments.
This also allows support for paths which contain spaces.
Also the commons-cli dependency was upgraded to 1.3.1 and tests have been added.
Changes
* new exit code, OK_AND_EXIT, which tells the caller to exit, as everything
went as expected (e.g. when displaying the version)
BWC breaking:
* execute() returns an ExitStatus instead of an integer, as otherwise a command
has no way to signal whether the JVM should be exited after a run.
This affects plugins that have command line tools
* -v used to be version, but is a verbose flag by default in the current CLI infra,
must be -V or --version now
* -X has been removed - the current implementation was useless anyway, as
it prefixed those properties with "es.". You should use
ES_JAVA_OPTS/JAVA_OPTS for JVM configuration
Date math index name resolution enables you to search a range of time-series indices, rather than searching all of your time-series indices and filtering the results or maintaining aliases. Limiting the number of indices that are searched reduces the load on the cluster and improves execution performance. For example, if you are searching for errors in your daily logs, you can use a date math name template to restrict the search to the past two days.
This adds an `ExpressionResolver` implementation that is responsible for resolving date math expressions in index names. This resolver is evaluated before wildcard expressions are evaluated.
The supported format is `<static_name{date_math_expr{date_format|timezone_id}}>`, and date math index names must be enclosed within angle brackets. The `date_format` is optional and defaults to `YYYY.MM.dd`. The `timezone_id` is optional too and defaults to `utc`.
The `{` character can be escaped by placing `\\` before it.
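For illustration, assuming it is 2015-07-22 UTC, an expression like `<logstash-{now/d-2d}>` with the default `YYYY.MM.dd` format resolves to `logstash-2015.07.20`. A hedged Joda-Time sketch of that resolution idea (not the actual resolver code):
```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;

// Illustrative only: what <logstash-{now/d-2d}> conceptually resolves to.
public class DateMathIndexNameSketch {
    public static void main(String[] args) {
        DateTime now = new DateTime(DateTimeZone.UTC);
        String twoDaysAgo = DateTimeFormat.forPattern("YYYY.MM.dd")
                .print(now.minusDays(2).withTimeAtStartOfDay());
        System.out.println("logstash-" + twoDaysAgo); // e.g. logstash-2015.07.20
    }
}
```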
Closes#12059
When we rewrite to a MatchNoTermsQuery we were throwing out the boost, which
could lead to funky changes when the query against _all was in a
bool query.
Previously we would write the entire ByteBuffer to the stream to serialise the HDRHistogram even if it was not all needed. Now we only write the bytes that are actually written to in the ByteBuffer.
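A minimal sketch of the idea using HdrHistogram's encode API (the surrounding serialization code is assumed, not the actual implementation):
```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import org.HdrHistogram.DoubleHistogram;

// Illustrative: write only the bytes the encoder actually produced,
// instead of the whole backing array of the ByteBuffer.
class HistogramWriter {
    static void write(DoubleHistogram histogram, OutputStream out) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(histogram.getNeededByteBufferCapacity());
        int written = histogram.encodeIntoCompressedByteBuffer(buf);
        out.write(buf.array(), 0, written); // not buf.array() in full
    }
}
```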
We currently test the toQuery method by comparing its return value with the output of createExpectedQuery. This seemed at first like a good deep test for the lucene generated queries, but it ends up causing a lot of code duplication between tests and prod code, which suggests this test is not that useful after all.
This commit removes the requirement for createExpectedQuery; we test the lucene generated queries a bit less, but we do still have a testToQuery method in BaseQueryTestCase. This test acts as a smoke test to verify that there are no issues with our toQuery method implementation for each query: it verifies that two equal queries result in the same lucene query. It also verifies that changing the boost value of a query affects the result of toQuery.
Part of the previous createExpectedQuery can be moved to assertLuceneQuery using normal assertions; we only do it when it makes sense, though.
This commit also adds support for boost and queryName to SpanMultiTermQueryParser and SpanMultiTermQueryBuilder, plus it removes no-op setters for queryName and boost from QueryFilterBuilder. Instead we simply declare in our test infra that query filter doesn't support boost and queryName, and we disable tests around these fields in that specific case. Boost and queryName support is also moved down to the QueryBuilder interface.
Closes#12473
I suggest updating the BulkProcessor so that it prevents users from building a BulkProcessor with a null client. If the client is null, the class should throw an exception with a message that describes the cause. Earlier today, I passed a null client to the builder() method, later received an exception, and attempted to get the exception's message in my afterBulk() listener (e.g. e.getMessage()), but the message was null. It took me a while to pinpoint the cause of the problem.
This can happen in two ways:
1. The _all field is disabled.
2. There are documents in the index, the _all field is enabled, but there are
no fields in any of the documents.
In both of these cases we now rewrite the query to a MatchNoDocsQuery which
should be safe because there isn't anything to match.
Closes#12439
When a node discovers shard content on disk which isn't used, we reach out to all other nodes that are supposed to have the shard active. Only once all of those have confirmed the shard active, the shard has no unassigned copies, *and* no cluster state change has happened in the meanwhile, do we go and delete the shard folder.
Currently, after removing a shard, the IndicesStore checks with the indices services whether there are no more active shards for this index, and if so, it tries to delete the entire index folder (unless on the master node, where we keep the index metadata around). This is wrong, as both the check and the protections in IndicesServices.deleteIndexStore only make sure that there isn't any shard *in use* from that index. However, we may erroneously delete other unused shard copies on disk, without the proper safety guards described above.
Normally this is not a problem, as the missing copy will be recovered from another shard copy on another node (although a shame). However, in extremely rare cases involving multiple node failures/restarts where all shard copies are not available (i.e., the shard is red), there are race conditions which can cause all shard copies to be deleted.
Instead, we should base the decision to clean up an index folder on checking that the index directory is empty and contains no shards.
This change creates a proper `distribution` module which today contains packaging for
all of our four current packages:
* zip
* tar.gz
* rpm
* deb
Licenses have moved into the distribution project as well. So have the config/ and the bin/ directory
from the core/ project.
The RPM package is now built, if rpmbuild exists.
The bats tests have been moved as well.
Also the zip distribution now executes the REST integration tests.
As Robert pointed out on #12465, it has the undesirable property of relying on
the operating system. So it would be better to use a simple rule such as
checking whether the file name starts with a dot.
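A sketch of that simple, OS-independent rule (the helper name is illustrative):
```java
import java.nio.file.Path;

// Illustrative: treat a file as hidden purely by its name, instead of
// relying on the operating system's notion of hidden files.
final class HiddenFiles {
    static boolean isHidden(Path file) {
        return file.getFileName().toString().startsWith(".");
    }
}
```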
When an index is opened it will not be assigned to a node, but it also no longer has the closed state.
Before, we only checked if an index either is closed or assigned to the data node,
and therefore the change from close->open was not written.
Some of the tests for meta data are redundant. Also, since they
somewhat test service disruptions (start master with empty
data folder) we might move them to DiscoveryWithServiceDisruptionsTests.
Also, this commit adds a test for
https://github.com/elastic/elasticsearch/issues/11665
If we cache the type filter and then e.g. set its boost (which is now settable on all queries), the boost will change for all subsequent queries. We should rather create a new query every time.
Major changes:
* Changed MetaData to hold alias and index lookup information in a single TreeMap instead of two separate maps.
* Moved the building of the alias / index lookup to the metadata builder.
Settings are currently parsed by looping over the tokens until an END_OBJECT token is reached. However, this does not mean that the end of
the settings stream was reached. This can occur, for example, when parsing a YAML settings file with inconsistent indentation. Currently
in this case, some settings will be silently ignored. This commit forces a check that we have in fact reached the end of the settings
stream.
Closes#12382
HDRHistogram has been added as an option in the percentiles and percentile_ranks aggregations. It has one option, `number_significant_digits`, which controls the accuracy and memory size for the algorithm.
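To illustrate the accuracy/memory trade-off behind that option, a hedged sketch using the underlying HdrHistogram library directly (not the aggregation API itself):
```java
import org.HdrHistogram.DoubleHistogram;

// Illustrative: more significant digits buy accuracy at the cost of memory.
public class HdrTradeoffSketch {
    public static void main(String[] args) {
        DoubleHistogram coarse = new DoubleHistogram(1); // ~1% worst-case relative error
        DoubleHistogram fine = new DoubleHistogram(3);   // ~0.1% worst-case relative error
        for (double millis : new double[] {15, 30, 45, 60, 120}) {
            coarse.recordValue(millis);
            fine.recordValue(millis);
        }
        System.out.println(coarse.getValueAtPercentile(95.0));
        System.out.println(fine.getValueAtPercentile(95.0));
    }
}
```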
Closes#8324
If we cache the type filter and then e.g. set its boost (which is now settable on all queries), the boost will change for all subsequent queries. We should rather create a new query every time.
Following up on #12427, this PR makes the same changes, moving null-checks from constructors
and setters in query builders to the validate() method.
PR against the query-refactoring branch
The ThrottlingAllocationDecider is responsible for limiting the number of incoming/local recoveries on a node. It therefore shouldn't count shards marked as relocating, which represent the source of the recovery.
Closes#12409
If there are multiple levels of filtered leaf readers, the unwrap() method immediately returns the innermost leaf reader, thus skipping over any other filtered leaf reader that may be an instance of ElasticsearchLeafReader. By using the #getDelegate() method we can check each filter reader layer for being an instance of ElasticsearchLeafReader, so that we never skip over any wrapped filtered leaf reader and lose the shard id.
Also, a set of utility methods was introduced on ShardRouting to do various types of matching with other shard routings, giving control over what exactly should be matched (same shard id, same allocation id, all but version and shard info, etc.). This is useful here, but also prepares the ground for the change needed in #12387 (making ShardRouting.equals strict and perform exact equality).
Closes#12397
When a replica is initializing from the primary and we find a better node that has a full sync id match, it is better to cancel the existing replica allocation and allocate it to the new node with the sync id match (eventually).
Today, when a user provides settings and specifies a classloader to be used, the classloader gets
dropped when we copy the settings to check for prompt entries. This change copies the classloader
when replacing the prompt placeholders and adds a test to ensure the InternalSettingsPreparer
always retains the classloader.
Closes#12340
JarHell has a low level check, but it's more of a best-effort one,
only checking if X-Compile-Target-JDK is set in the manifest. This
is the case for all lucene- and elasticsearch- generated jars, but
lets just be explicit for plugins.
The TransportSingleCustomOperationAction `prefer_local` option has been removed as it isn't worth the effort.
The TransportSingleShardAction will execute the operation on the receiving node if no concrete list of candidate shard routings to perform the operation on is provided.
Change #12371 broke fielddata on the `_parent` field for indices created before
2.0. This pull request adds back caching of the `_parent` fielddata for indices
created before 2.0 and cleans some related stuff. For instance
DocumentTypeListener doesn't need to take care of removed mappings anymore since
mappings can't be removed in 2.0.
The rangeQuery() method in MappedFieldType and some overwriting subtypes takes
a nullable QueryParseContext argument which is unused and can be deleted.
This is also useful for the current query parsing refactoring, since
we want to avoid passing the context object around unnecessarily.
IndicesStoreIntegrationTests.indexCleanup() tests if the shard files on disk are actually removed
after relocation. In particular it tests the following:
Whenever a node deletes a shard because it was relocated somewhere else, it first
checks if enough other copies are started somewhere else. The node sends a
ShardActiveRequest to the nodes that should have a copy.
The nodes that receive this request check if the shard is in state STARTED in which case
they respond with true. If they have the shard in POST_RECOVERY they register a cluster state
observer that checks at each update if the shard has moved to STARTED and respond with true when
this happens.
To test that the cluster state observer mechanism actually works, the latter can be triggered by
blocking the cluster state processing when a recovery starts and only unblocking it shortly
after the node receives the ShardActiveRequest.
This is more reliable than using random cluster state processing delays
because the random delays make it hard to reason about different timeouts
that can be reached.
closes#11989
Closes#12367
Squashed commit of the following:
commit 9453c411798121aa5439c52e95301f60a022ba5f
Merge: 3511a9c 828d8c7
Author: Robert Muir <rmuir@apache.org>
Date: Wed Jul 22 08:22:41 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit 3511a9c616503c447de9f0df9b4e9db3e22abd58
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:50:15 2015 -0700
Remove duplicated constant
commit 4a9b5b4621b0ef2e74c1e017d9c8cf624dd27713
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 21:01:57 2015 -0700
Add check that plugin must specify at least site or jvm
commit 19aef2f0596153a549ef4b7f4483694de41e101b
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:52:58 2015 -0700
Change plugin "plugin" property to "classname"
commit 07ae396f30ed592b7499a086adca72d3f327fe4c
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:36:05 2015 -0400
remove test with no methods
commit 550e73bf3d0f94562f4dde95239409dc5a24ce25
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:31:58 2015 -0400
fix loading to use classname
commit 04463aed12046da0da5cac2a24c3ace51a79f799
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:24:19 2015 -0400
rename to classname
commit 9f3afadd1caf89448c2eb913757036da48758b2d
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 20:18:46 2015 -0700
moved PluginInfo and refactored parsing from properties file
commit df63ccc1b8b7cc64d3e59d23f6c8e827825eba87
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:08:26 2015 -0400
fix test
commit c7febd844be358707823186a8c7a2d21e37540c9
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 23:03:44 2015 -0400
remove test
commit 017b3410cf9d2b7fca1b8653e6f1ebe2f2519257
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:58:31 2015 -0400
fix test
commit c9922938df48041ad43bbb3ed6746f71bc846629
Merge: ad59af4 01ea89a
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 22:37:28 2015 -0400
Merge branch 'master' into refactor_pluginservice
commit ad59af465e1f1ac58897e63e0c25fcce641148a7
Author: Areek Zillur <areek.zillur@elasticsearch.com>
Date: Tue Jul 21 19:30:26 2015 -0400
[TEST] Verify expected number of nodes in cluster before issuing shardStores request
commit f0f5a1e087255215b93656550fbc6bd89b8b3205
Author: Lee Hinman <lee@writequit.org>
Date: Tue Jul 21 11:27:28 2015 -0600
Ignore EngineClosedException during translog fsync
When performing an operation on a primary, the state is captured and the
operation is performed on the primary shard. The original request is
then modified to increment the version of the operation as preparation
for it to be sent to the replicas.
If the request first fails on the primary during the translog sync
(because the Engine is already closed due to shadow primaries closing
the engine on relocation), then the operation is retried on the new primary
after being modified for the replica shards. It will then fail due to the
version being incorrect (the document does not yet exist but the request
expects a version of "1").
Order of operations:
- Request is executed against primary
- Request is modified (version incremented) so it can be sent to replicas
- Engine's translog is fsync'd if necessary (failing, and throwing an exception)
- Modified request is retried against new primary
This change ignores the exception where the engine is already closed
when syncing the translog (similar to how we ignore exceptions when
refreshing the shard if the ?refresh=true flag is used).
commit 4ac68bb1658688550ced0c4f479dee6d8b617777
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 22:37:29 2015 +0200
Replica allocator unit tests
First batch of unit tests to verify the behavior of replica allocator
commit 94609fc5943c8d85adc751b553847ab4cebe58a3
Author: Jason Tedor <jason@tedor.me>
Date: Tue Jul 21 14:04:46 2015 -0400
Correctly list blobs in Azure storage to prevent snapshot corruption and do not unnecessarily duplicate Lucene segments in Azure Storage
This commit addresses an issue that was leading to snapshot corruption for snapshots stored as blobs in Azure Storage.
The underlying issue is that in cases when multiple snapshots of an index were taken and persisted into Azure Storage, snapshots subsequent
to the first would repeatedly overwrite the snapshot files. This issue does render useless all snapshots except the final snapshot.
The root cause of this is due to String concatenation involving null. In particular, to list all of the blobs in a snapshot directory in
Azure the code would use the method listBlobsByPrefix where the prefix is null. In the listBlobsByPrefix method, the path keyPath + prefix
is constructed. However, per 5.1.11, 5.4 and 15.18.1 of the Java Language Specification, the reference null is first converted to the string
"null" before performing the concatenation. This leads to no blobs being returned and therefore the snapshot mechanism would operate as if
it were writing the first snapshot of the index. The fix is simply to check if prefix is null and handle the concatenation accordingly.
Upon fixing this issue so that subsequent snapshots would no longer overwrite earlier snapshots, it was discovered that the snapshot metadata
returned by the listBlobsByPrefix method was not sufficient for the snapshot layer to detect whether or not the Lucene segments had already
been copied to the Azure storage layer in an earlier snapshot. This led the snapshot layer to unnecessarily duplicate these Lucene segments
in Azure Storage.
The root cause of this is due to known behavior in the CloudBlobContainer.getBlockBlobReference method in the Azure API. Namely, this method
does not fetch blob attributes from Azure. As such, the lengths of all the blobs appeared to the snapshot layer to be of length zero and
therefore they would compare as not equal to any new blobs that the snapshot layer is going to persist. To remediate this, the method
CloudBlockBlob.downloadAttributes must be invoked. This will fetch the attributes from Azure Storage so that a proper comparison of the
blobs can be performed.
Closes elastic/elasticsearch-cloud-azure#51, closes elastic/elasticsearch-cloud-azure#99
commit cf1d481ce5dda0a45805e42f3b2e0e1e5d028b9e
Author: Lee Hinman <lee@writequit.org>
Date: Mon Jul 20 08:41:55 2015 -0600
Unit tests for `nodesAndVersions` on shared filesystems
With the `recover_on_any_node` setting, these unit tests check that the
correct node list and versions are returned.
commit 3c27cc32395c3624f7c794904d9ea4faf2eccbfb
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 14:15:59 2015 -0400
don't fail junit4 integration tests if there are no tests.
instead fail the failsafe plugin, which means the external cluster will still get shut down
commit 95d2756c5a8c21a157fa844273fc83dfa3c00aea
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 17:16:53 2015 +0200
Testing: Fix help displaying tests under windows
The help files are using a unix based file separator, where as
the test relies on the help being based on the file system separator.
This commit fixes the test to remove all `\r` characters before
comparing strings.
The test has also been moved into its own CliToolTestCase, as it does
not need to be an integration test.
commit 944f06ea36bd836f007f8eaade8f571d6140aad9
Author: Clinton Gormley <clint@traveljury.com>
Date: Tue Jul 21 18:04:52 2015 +0200
Refactored check_license_and_sha.pl to accept a license dir and package path
In preparation for the move to building the core zip, tar.gz, rpm, and deb as separate modules, refactored check_license_and_sha.pl to:
* accept a license dir and path to the package to check on the command line
* to be able to extract zip, tar.gz, deb, and rpm
* all packages except rpm will work on Windows
commit 2585431e8dfa5c82a2cc5b304cd03eee9bed7a4c
Author: Chris Earle <pickypg@users.noreply.github.com>
Date: Tue Jul 21 08:35:28 2015 -0700
Updating breaking changes
- field names cannot be mapped with `.` in them
- fixed asciidoc issue where the list was not recognized as a list
commit de299b9d3f4615b12e2226a1e2eff5a38ecaf15f
Author: Shay Banon <kimchy@gmail.com>
Date: Tue Jul 21 13:27:52 2015 +0200
Replace primaryPostAllocated flag and use UnassignedInfo
There is no need to maintain additional state on the index routing table as to whether a primary was allocated post api creation; we already hold all this information in the UnassignedInfo class.
closes#12374
commit 43080bff40f60bedce5bdbc92df302f73aeb9cae
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:45:05 2015 +0200
PluginManager: Fix bin/plugin calls in scripts/bats test
The release and smoke test python scripts used to install
plugins in the old fashion.
Also the BATS testing suite installed/removed plugins in that
way. Here the marvel tests have been removed, as marvel currently
does not work with the master branch.
In addition documentation has been updated as well, where it was
still missing.
commit b81ccba48993bc13c7678e6d979fd96998499233
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 11:37:50 2015 +0200
Discovery: make sure NodeJoinController.ElectionCallback is always called from the update cluster state thread
This is important for correct handling of the joining thread. This causes assertions to trip in our test runs. See http://build-us-00.elastic.co/job/es_g1gc_master_metal/11653/ as an example
Closes#12372
commit 331853790bf29e34fb248ebc4c1ba585b44f5cab
Author: Boaz Leskes <b.leskes@gmail.com>
Date: Tue Jul 21 15:54:36 2015 +0200
Remove left over no commit from TransportReplicationAction
It asks to double check thread pool rejection. I did and don't see problems with it.
commit e5724931bbc1603e37faa977af4235507f4811f5
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 15:31:57 2015 +0200
CliTool: Various PluginManager fixes
The new plugin manager parser was not called correctly in the scripts.
In addition the plugin manager now creates a plugins/ directory in case
it does not exist.
Also the integration tests called the plugin manager in the deprecated way.
commit 7a815a370f83ff12ffb12717ac2fe62571311279
Author: Alexander Reelsen <alexander@reelsen.net>
Date: Tue Jul 21 13:54:18 2015 +0200
CLITool: Port PluginManager to use CLITool
In order to unify the handling and reuse the CLITool infrastructure
the plugin manager should make use of this as well.
This obsoletes the -i and --install options but requires the user
to use `install` as the first argument of the CLI.
This is basically just a port of the existing functionality, which
is also the reason why this is not a refactoring of the plugin manager,
which will come in a separate commit.
commit 7f171eba7b71ac5682a355684b6da703ffbfccc7
Author: Martijn van Groningen <martijn.v.groningen@gmail.com>
Date: Tue Jul 21 10:44:21 2015 +0200
Remove custom execute local logic in TransportSingleShardAction and TransportInstanceSingleOperationAction and rely on transport service to execute locally. (forking thread etc.)
Change TransportInstanceSingleOperationAction to have a shardActionHandler, so we can execute locally without endless spinning.
commit 0f38e3eca6b570f74b552e70b4673f47934442e1
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 17:36:12 2015 -0700
More readMetadata tests and pickiness
commit 880b47281bd69bd37807e8252934321b089c9f8e
Author: Ryan Ernst <ryan@iernst.net>
Date: Tue Jul 21 14:42:09 2015 -0700
Started unit tests for plugin service
commit cd7c8ddd7b8c4f3457824b493bffb19c156c7899
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 07:21:07 2015 -0400
fix tests
commit 673454f0b14f072f66ed70e32110fae4f7aad642
Author: Robert Muir <rmuir@apache.org>
Date: Tue Jul 21 06:58:25 2015 -0400
refactor pluginservice
Moving the query building functionality from the parser to the builders'
new toQuery() method, analogous to other recent query refactorings.
Relates to #10217
Moving the query building functionality from the parser to the builders'
new toQuery() method, analogous to other recent query refactorings.
Relates to #10217
Moving the query building functionality from the parser to the builders'
new doToQuery() method, analogous to other recent query refactorings.
Relates to #10217
Closes#12365
Moving the query building functionality from the parser to the builders'
new toQuery() method, analogous to other recent query refactorings.
Relates to #10217
Closes#12182
When performing an operation on a primary, the state is captured and the
operation is performed on the primary shard. The original request is
then modified to increment the version of the operation as preparation
for it to be sent to the replicas.
If the request first fails on the primary during the translog sync
(because the Engine is already closed due to shadow primaries closing
the engine on relocation), then the operation is retried on the new primary
after being modified for the replica shards. It will then fail due to the
version being incorrect (the document does not yet exist but the request
expects a version of "1").
Order of operations:
- Request is executed against primary
- Request is modified (version incremented) so it can be sent to replicas
- Engine's translog is fsync'd if necessary (failing, and throwing an exception)
- Modified request is retried against new primary
This change ignores the exception where the engine is already closed
when syncing the translog (similar to how we ignore exceptions when
refreshing the shard if the ?refresh=true flag is used).
The help files are using a unix based file separator, where as
the test relies on the help being based on the file system separator.
This commit fixes the test to remove all `\r` characters before
comparing strings.
The test has also been moved into its own CliToolTestCase, as it does
not need to be an integration test.
There is no need to maintain additional state on the index routing table as to whether a primary was allocated post api creation; we already hold all this information in the UnassignedInfo class.
closes#12374
The release and smoke test python scripts used to install
plugins in the old fashion.
Also the BATS testing suite installed/removed plugins in that
way. Here the marvel tests have been removed, as marvel currently
does not work with the master branch.
In addition documentation has been updated as well, where it was
still missing.
This dependency was used in order for mapping updates that change the fielddata
format to take effect immediately. And the way it worked was by clearing the
cache of fielddata instances that were already loaded. However, we do not need
to cache the already loaded (logical) fielddata instances, they are cheap to
regenerate. Note that the fielddata _caches_ are still kept around so that we
don't keep on rebuilding costly (physical) fielddata values.
The new plugin manager parser was not called correctly in the scripts.
In addition the plugin manager now creates a plugins/ directory in case
it does not exist.
Also the integration tests called the plugin manager in the deprecated way.
In order to unify the handling and reuse the CLITool infrastructure
the plugin manager should make use of this as well.
This obsoletes the -i and --install options but requires the user
to use `install` as the first argument of the CLI.
This is basically just a port of the existing functionality, which
is also the reason why this is not a refactoring of the plugin manager,
which will come in a separate commit.