If a field caps request contains a field name that doesn't exist in all indices,
the response will be partial and hides an NPE. The NPE is now fixed, but we still
have the problem that we don't pass shard-level errors on to the user. This will
be fixed in a follow-up.
The `_search_shards` API today only returns alias names if there is an alias
filter associated with one of them. Yet it can be useful to see which aliases
have been expanded for an index given the index expressions, so this change also includes non-filtering aliases even when no filtering alias is present.
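A quick sketch of how the expanded aliases can be inspected, using the low-level REST client; the host, index pattern, and alias layout are hypothetical:

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class SearchShardsAliases {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build()) {
            // Resolve the expression "logs-*"; the response now lists every alias
            // expanded for each index, not just the ones carrying a filter.
            Response response = client.performRequest("GET", "/logs-*/_search_shards");
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```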
Changes the scope of the AllocationService dependency injection hack so that it is at least contained to the AllocationService and does not leak into the Discovery world.
We start the test JVMs with various options. There are two that we
should remove because they do not make sense:
- we require Java 8, yet there was a conditional build option for Java 7
- we do not set MaxDirectMemorySize in our default JVM options, so we
should not set it in the test JVMs either
Relates #24501
Adds CONSOLE to the cross-cluster-search docs but skips them for testing
because we don't have a second cluster set up. This gets us the
`VIEW IN CONSOLE` and `COPY AS CURL` links and makes sure that the
snippets are valid yaml (not json, technically) but doesn't get us
testing. That is still better than what we had before.
Adds CONSOLE to the dynamic templates docs and ingest-node docs.
The ingest-node docs contain a *ton* of non-console snippets. We
might want to convert them to full examples later, but that can be
a separate thing.
Relates to #18160
I originally wrote this file when we first added snippet testing,
and a lot has changed since. We've grown quite fond of the
`// TESTRESPONSE[s/foo/bar/]` construct, for example, but the docs
discouraged its use.
Relates to #18160
Rewrites most of the snippets in the `inner_hits` docs to be
complete examples and enables `VIEW IN CONSOLE`, `COPY AS CURL`,
and automatic testing of the snippets.
Today we go to heroic lengths to work around bugs in the JDK or around
issues like BSD jails to get information about the underlying file
store. For example, we worked around a JDK bug where the returned file
store would incorrectly report whether or not a path is writable in
certain situations on Windows. Another bug prevented getting file
store information for a virtual drive on Windows. We no longer need to
work around these bugs: we can simply try to write to disk and let an
I/O exception arise if we cannot, or take advantage of the fact that
these bugs are fixed in recent releases of the JDK (e.g., the file
store bug has been fixed since 8u72). Additionally, we collected
information about all file stores on the system, which meant that if
the user had a stale NFS mount, Elasticsearch could hang and fail on
startup if that mount point was not available. Finally, we collected
information through Lucene about whether or not a disk was a spinning
disk versus an SSD, information that we do not need since we assume
SSDs by default. This commit takes into consideration that we simply
do not need this heroic effort, we do not need information about all
file stores, and we do not need information about whether or not a
disk spins, and it greatly simplifies file store handling.
Relates #24402
Added missing permissions required for authenticating with Kerberos to HDFS. Also implemented
support for authenticating with a Kerberos keytab file. In order to support HDFS
authentication, users must install a Kerberos keytab file on each node and place it in the
configuration directory. When a user specifies a Kerberos principal in the repository settings,
the plugin automatically enables security for Hadoop and begins the login process. There will be a
separate PR and commit for the testing infrastructure to support these changes.
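A minimal sketch of registering such a repository from the Java client, assuming the principal is passed via a `security.principal` repository setting; the repository name, HDFS endpoint, path, and realm are placeholders:

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class RegisterHdfsRepository {
    // Registers an HDFS snapshot repository that authenticates via Kerberos.
    // The keytab itself is not configured here; it must already be installed
    // in the node's configuration directory as described above.
    static void register(Client client) {
        client.admin().cluster().preparePutRepository("hdfs_repository")
            .setType("hdfs")
            .setSettings(Settings.builder()
                .put("uri", "hdfs://namenode:8020")          // placeholder HDFS endpoint
                .put("path", "/elasticsearch/snapshots")     // placeholder snapshot path
                .put("security.principal", "elasticsearch@REALM")) // triggers Kerberos login
            .get();
    }
}
```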
This commit fixes the reschedule async fsync test in index service
tests. This test was passing for the wrong reason. Namely, the test was
trying to set translog durability to async, fire off an indexing
request, and then assert that the translog eventually got fsynced. The
problem here is that in the course of issuing the indexing request, a
mapping update is triggered. The mapping update triggers the index
settings to be refreshed. Since the test did not issue a cluster state
update to change the durability from request to async but instead did
this directly through the index service, the mapping update flops the
durability back to request. This means that when the indexing request
executes, an fsync is performed after the request, and the assertion that
an fsync is not needed passes but for the wrong reason (in short: the
test wanted it to pass because an async fsync fired, but instead it
passed because a request fsync fired). This commit fixes this by going
through the index settings API so that a proper cluster state update is
triggered and the mapping update does not flop the durability back to
request.
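For illustration, a minimal sketch of flipping durability through the update settings API (the index name is hypothetical) so the change goes through a proper cluster state update:

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class AsyncDurabilityUpdate {
    static void enableAsyncDurability(Client client) {
        // Going through the API publishes a cluster state update, so a later
        // mapping-driven settings refresh keeps durability at async instead
        // of flopping it back to request.
        client.admin().indices().prepareUpdateSettings("test")
            .setSettings(Settings.builder()
                .put("index.translog.durability", "async"))
            .get();
    }
}
```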
We are still chasing a test failure here, and increasing the logging
level stopped the failures. We have a theory as to what is going on, so
this commit reduces the logging level to hopefully trigger the failure
again and give us the logging that we need to confirm the theory.
Use the fedora-25 Vagrant box in VagrantTestPlugin; it was missing from
9a3ab3e800, causing packaging test failures.
Additionally, update TESTING.asciidoc.
This is a follow-up to #24317, which did the hard work but was merged in such a
way that it exposes the setting while still allowing indices to have multiple
types by default, in order to give people who test against master time to add
this setting to their index settings.
This test started failing but the logging here is insufficient to
discern what is happening. This commit increases the logging level on
this test until the failure can be understood.
In pre-release versions of Elasticsearch 5.0.0, users were subject to
log messages of the form "your platform does not.*reliably.*potential
system instability". This is because we disable Netty from being unsafe,
and Netty throws up this scary info-level message when unsafe is
unavailable, even if it is unavailable because the user requested that
it be unavailable. Users were rightly confused and concerned, so we
contributed a guard to Netty to prevent this log message from showing up
when unsafe was explicitly disabled. This guard shipped with every
version of Netty that we have shipped since Elasticsearch
5.0.0. Unfortunately, the guard was lost in an unrelated refactoring,
and with the 4.1.10.Final upgrade users will again see this
message. This commit is a hack around this until we can get a fix
upstream again.
Relates #24469
The UpgraderPlugin adds two additional extension points, called during cluster upgrade and when old indices are introduced into the cluster state during initial recovery, restore from a snapshot, or as a dangling index. One extension point allows plugins to update old templates, and the other allows plugins to update or check index metadata.
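A rough sketch of the shape of these two extension points; the interface and method names here are illustrative, not the exact plugin API:

```java
import java.util.Map;
import java.util.function.UnaryOperator;

import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;

// Hypothetical shape of the two extension points (names illustrative).
public interface UpgraderExtensions {
    // Called on cluster upgrade with all existing templates; returns the
    // (possibly rewritten) template map.
    UnaryOperator<Map<String, IndexTemplateMetaData>> getTemplateUpgrader();

    // Called for each old index introduced through initial recovery, snapshot
    // restore, or a dangling index; may update the metadata or reject it.
    UnaryOperator<IndexMetaData> getIndexMetaDataUpgrader();
}
```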
Currently the only implementation of `ListenableActionFuture` requires
dispatching listener execution to a thread pool. This commit renames
that variant to `DispatchingListenableActionFuture` and converts
`AbstractListenableActionFuture` to be a variant that does not require
dispatching. That class is now named `PlainListenableActionFuture`.
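A minimal sketch (not the actual Elasticsearch class) of the non-dispatching variant: listeners registered after completion run inline on the calling thread, the completing thread notifies everyone else, and no thread pool is involved:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Sketch of a listenable future that never dispatches to a thread pool.
public class PlainListenableFuture<T> extends CompletableFuture<T> {
    private final List<Consumer<T>> listeners = new ArrayList<>();

    public synchronized void addListener(Consumer<T> listener) {
        if (isDone()) {
            listener.accept(join()); // already complete: notify inline
        } else {
            listeners.add(listener); // notified later by the completing thread
        }
    }

    @Override
    public synchronized boolean complete(T value) {
        boolean completed = super.complete(value);
        if (completed) {
            listeners.forEach(l -> l.accept(value));
            listeners.clear();
        }
        return completed;
    }
}
```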
The DiscoveryNodesProvider interface provides an unnecessary abstraction and is just used in conjunction with the existing PingContextProvider interface. This commit removes it.
The open/close index APIs have allow_no_indices set to false by default, while delete index has it set to true. The flag controls whether a wildcard expression that matches no indices is ignored or an error is thrown instead. This commit aligns the open/close default behaviour with that of delete index.
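Expressed through `IndicesOptions`, the new open/close default behaves roughly like this sketch (the index pattern is hypothetical):

```java
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.Client;

public class OpenIndexLeniently {
    static void openMatching(Client client) {
        // fromOptions(ignoreUnavailable, allowNoIndices, expandOpen, expandClosed):
        // allowNoIndices=true means a wildcard matching no indices is ignored
        // rather than producing an error, matching the delete index default.
        IndicesOptions lenient = IndicesOptions.fromOptions(false, true, false, true);
        client.admin().indices().prepareOpen("logs-*")
            .setIndicesOptions(lenient)
            .get();
    }
}
```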
This field was only ever used in the constructor, where it was set and
then passed around. As such, there's no need for it to be a field and we
can remove it.
Add info about the base image used and the github repo of
elasticsearch-docker.
Clarify that setting `memlock=-1:-1` is only a requirement when
`bootstrap_memory_lock=true`, and that the alternatives we document
elsewhere for disabling swap are valid for Docker as well.
Additionally, with the latest versions of docker-ce shipping with
unlimited (or high enough) defaults for `nofile` and `nproc`, clarify
that explicitly setting those per ES container is not required unless
they are not defined in the Docker daemon.
Finally, simplify the production `docker-compose.yml` example by
removing unneeded options.
Relates #24389
After a replica shard finishes recovery, it will be marked as active and
its local checkpoint will be considered in the calculation of the global
checkpoint on the primary. If there were operations in flight during
recovery, when the replica is activated its local checkpoint could be
lagging behind the global checkpoint on the primary. This means that
when the replica shard is activated, we can end up in a situation where
a global checkpoint update would want to move the global checkpoint
backwards, violating an invariant of the system. This only arises if a
background global checkpoint sync executes, which today is only a
scheduled operation and might be delayed until the in-flight operations
complete and the replica catches up to the primary. Yet, we are going to
move to inlining global checkpoints which will cause this issue to be
more likely to manifest. Additionally, the global checkpoint on the
replica, which is the local knowledge on the replica updated under the
mandate of the primary, could be higher than the local checkpoint on the
replica, again violating an invariant of the system. This commit
addresses these issues by blocking global checkpoint on the primary when
a replica shard is finalizing recovery. While we have blocked global
checkpoint advancement, recovery on the replica shard will not be
considered complete until its local checkpoint advances to the blocked
global checkpoint.
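A simplified sketch of the invariant the block protects; all names here are illustrative, not the actual tracker implementation:

```java
// Simplified sketch of the primary-side invariant (names illustrative).
public class GlobalCheckpointTracker {
    private long globalCheckpoint = -1;   // -1: no checkpoint established yet
    private int pendingFinalizations = 0; // replicas currently finalizing recovery

    public synchronized void blockAdvancementForRecovery() {
        pendingFinalizations++;
    }

    public synchronized void unblockAdvancementAfterRecovery() {
        pendingFinalizations--;
    }

    public synchronized void maybeAdvanceTo(long computedCheckpoint) {
        if (pendingFinalizations > 0) {
            return; // advancement is blocked while a replica finalizes recovery
        }
        // The invariant: the global checkpoint must never move backwards.
        globalCheckpoint = Math.max(globalCheckpoint, computedCheckpoint);
    }
}
```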
Relates #24404
Today we only look up nodes by their ID, but never by the (clusterAlias, nodeId) tuple.
In theory this could lead to lookups on the wrong cluster if there are two clusters
with a node that has the same ID. It's very unlikely to happen, but we can now clearly
disambiguate between clusters and their nodes.
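An illustrative sketch of the disambiguated lookup, keyed on the (clusterAlias, nodeId) pair rather than the node ID alone; the class and method names are hypothetical:

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Nodes are keyed by the (clusterAlias, nodeId) pair, so two clusters that
// happen to contain a node with the same ID can no longer collide on lookup.
public class RemoteNodeRegistry<N> {
    private final Map<String, N> nodes = new ConcurrentHashMap<>();

    private static String key(String clusterAlias, String nodeId) {
        return Objects.requireNonNull(clusterAlias) + "/" + Objects.requireNonNull(nodeId);
    }

    public void put(String clusterAlias, String nodeId, N node) {
        nodes.put(key(clusterAlias, nodeId), node);
    }

    public N lookup(String clusterAlias, String nodeId) {
        return nodes.get(key(clusterAlias, nodeId)); // never matches the wrong cluster
    }
}
```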