Commit Graph

1655 Commits

Author SHA1 Message Date
Nhat Nguyen dda56fc0fc
Move ESIndexLevelReplicationTestCase to test framework (#31243)
Other components might benefit from the testing infra provided by
ESIndexLevelReplicationTestCase. This commit moves it to the test
framework.
2018-06-11 12:47:38 -04:00
Lee Hinman c064b507df
Encapsulate Translog in Engine (#31220)
This removes the abstract `getTranslog` method in `Engine`, instead leaving it
to the abstract implementations of the other methods that use the translog. This
allows future Engines not to have a Translog, as instead they must implement the
methods that use the translog pieces to return necessary values.
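A minimal sketch of the resulting shape (method names here are illustrative stand-ins, not necessarily the exact ones in the commit):

```java
import java.io.IOException;

// Illustrative sketch only: instead of handing out the Translog, the engine
// exposes the translog-backed operations callers actually need, so an engine
// implementation is free to have no translog at all. Method names are hypothetical.
public abstract class EngineSketch {
    // Before: public abstract Translog getTranslog();  // leaked the translog to callers

    // After: callers ask the engine for what they need.
    public abstract void syncTranslog() throws IOException;      // fsync pending operations
    public abstract boolean isTranslogSyncNeeded();               // anything left to fsync?
    public abstract long getTranslogSizeInBytes();                // e.g. for stats and flush decisions
}
```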
2018-06-11 09:44:50 -06:00
Tanguy Leroux bf58660482
Remove all unused imports and fix CRLF (#31207)
The X-Pack opening and other recent refactorings left a lot of 
unused imports in the codebase. This commit removes them all.
2018-06-11 15:12:12 +02:00
Jason Tedor 65c107b47d
Fix unknown licenses (#31223)
The goal of this commit is to address unknown licenses when producing
the dependencies info report. We have two different checks that we run
on licenses. The first check is whether or not we have stashed a copy of
the license text for a dependency in the repository. The second is to
map every dependency to a license type (e.g., BSD 3-clause). The problem
here is that the way we were handling licenses in the second check
differs from how we handle licenses in the first check. The first check
works by finding a license file with the name of the artifact followed
by the text -LICENSE.txt. Yet in some cases we allow mapping an artifact
name to another name used to check for the license (e.g., we map
lucene-.* to lucene, and opensaml-.* to shibboleth). The second check
understood the first way of looking for a license file but not the
second way. So in this commit we teach the second check about the
mappings from artifact names to license names. We do this by copying the
configuration from the dependencyLicenses task to the dependenciesInfo
task and then reusing the code from the first check in the second
check. There were some other challenges here though. For example,
dependenciesInfo was checking too many dependencies. For now, we should
only be checking direct dependencies and leaving transitive dependencies
from another org.elasticsearch artifact to that artifact (we want to do
this differently in a follow-up). We also want to disable
dependenciesInfo for projects that we do not publish; users only care
about licenses they might be exposed to if they use our assembled
products. With all of the changes in this commit we have eliminated all
unknown licenses. A follow-up will enforce that when we add a new
dependency it does not get mapped to unknown; these will be forbidden in
the future. Therefore, with this change and earlier changes we are left
with no unknown licenses and two custom licenses; custom here means it
does not map to an SPDX license type. Those two licenses are xz and
ldapsdk. A future change will not allow additional custom licenses
unless they are explicitly whitelisted. This ensures that if a new
dependency is added it is mapped to an SPDX license or mapped to custom
because it does not have an SPDX license.
2018-06-09 07:28:41 -04:00
Lee Hinman bdb0fb2555
Fully encapsulate LocalCheckpointTracker inside of the engine (#31213)
* Fully encapsulate LocalCheckpointTracker inside of the engine

This makes the Engine interface not expose the `LocalCheckpointTracker`, instead
exposing the pieces needed (like retrieving the local checkpoint) as individual
methods.
2018-06-08 17:19:41 -06:00
Julie Tibshirani 00b0e10063
Remove DocumentFieldMappers#simpleMatchToFullName. (#31041)
* Remove DocumentFieldMappers#simpleMatchToFullName, as it is duplicative of MapperService#simpleMatchToIndexNames.
* Rename MapperService#simpleMatchToIndexNames -> simpleMatchToFullName for consistency.
* Simplify EsIntegTestCase#assertConcreteMappingsOnAll to accept concrete fields instead of wildcard patterns.
2018-06-08 13:53:35 -07:00
Jason Tedor e481b860a1
Enable engine factory to be pluggable (#31183)
This commit enables the engine factory to be pluggable based on index
settings used when creating the index service for an index.
2018-06-07 17:01:06 -04:00
Tanguy Leroux b5f05f676c
Remove BlobContainer.move() method (#31100)
closes #30680
2018-06-07 10:48:31 +02:00
Tim Brooks 67e73b4df4
Combine accepting selector and socket selector (#31115)
This is related to #27260. This commit combines the AcceptingSelector
and SocketSelector classes into a single NioSelector. This change
allows the same selector to handle both server and socket channels. This
is valuable as we do not necessarily want a dedicated thread running for
accepting channels.

With this change, this commit removes the configuration for dedicated
accepting selectors for the normal transport class. The accepting
workload for new node connections is likely low, meaning that there is
no need to dedicate a thread to this process.
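The core idea — one selector servicing both accepts and reads — can be shown with plain `java.nio` (an illustration of the pattern, not the elasticsearch-nio code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// Plain-NIO illustration of a single selector handling both server and socket channels.
public class SingleSelectorLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9999));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    // Accepted sockets are registered with the *same* selector...
                    SocketChannel socket = ((ServerSocketChannel) key.channel()).accept();
                    socket.configureBlocking(false);
                    socket.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // ...so reads are serviced by the same event loop; no dedicated accept thread.
                    ((SocketChannel) key.channel()).read(ByteBuffer.allocate(1024));
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```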
2018-06-06 11:59:54 -06:00
Yannick Welsch 1dca00deb9
Remove extra checks from HdfsBlobContainer (#31126)
This commit saves one network roundtrip when reading or deleting files from an HDFS repository.
2018-06-06 16:38:37 +02:00
Tanguy Leroux 9531b7bbcb
Add BlobContainer.writeBlobAtomic() (#30902)
This commit adds a new writeBlobAtomic() method to the BlobContainer
interface that can be implemented by repository implementations which
support atomic writes operations.

When the BlobContainer implementation does not provide a specific 
implementation of writeBlobAtomic(), then the writeBlob() method is used.

Related to #30680
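The fallback described above can be pictured as a default interface method (a sketch; the real `BlobContainer` signatures may differ):

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch of the fallback behaviour: atomic writes delegate to plain writes
// unless a repository implementation overrides writeBlobAtomic().
public interface BlobContainerSketch {
    void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException;

    default void writeBlobAtomic(String blobName, InputStream inputStream, long blobSize) throws IOException {
        // Non-atomic fallback used when the implementation has no native atomic write.
        writeBlob(blobName, inputStream, blobSize);
    }
}
```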
2018-06-05 13:00:43 +02:00
Christoph Büscher 3f87c79500
Change ObjectParser exception (#31030)
ObjectParser should throw XContentParseExceptions, not IAE. A dedicated parsing
exception can include the place where the error occurred.

Closes #30605
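For callers the visible change is the exception type; a hedged sketch of typical `ObjectParser` usage (package names and setup reflect this era of the codebase and may have moved since):

```java
import java.io.IOException;

import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentParseException;
import org.elasticsearch.common.xcontent.XContentParser;

// Sketch: parsing failures now surface as XContentParseException (which carries
// the location of the offending token) rather than IllegalArgumentException.
class Item {
    private String name;
    void setName(String name) { this.name = name; }

    static final ObjectParser<Item, Void> PARSER = new ObjectParser<>("item", Item::new);
    static {
        PARSER.declareString(Item::setName, new ParseField("name"));
    }

    static Item fromXContent(XContentParser parser) throws IOException {
        try {
            return PARSER.parse(parser, null);
        } catch (XContentParseException e) {
            // The dedicated exception points at where in the document parsing failed.
            throw new IllegalStateException("failed to parse item", e);
        }
    }
}
```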
2018-06-04 20:20:37 +02:00
Jason Tedor be55da18c2
Enable customizing REST tests blacklist (#31074)
This commit enables adding additional REST tests to the blacklist for
builds that already define tests.rest.blacklist.
2018-06-04 13:35:49 -04:00
Boaz Leskes a7ceefe93f
Make Persistent Tasks implementations version and feature aware (#31045)
With #31020 we introduced the ability for transport clients to indicate what features they support
in order to make sure we don't serialize objects to them that they don't support. This PR adapts the
serialization logic of persistent tasks to be aware of those features and not serialize tasks that
aren't supported. 

Also, a version check is added for the future where we may add new tasks implementations and
need to be able to indicate they shouldn't be serialized both to nodes and clients.

As the implementation relies on the interface of `PersistentTaskParams`, these are no longer
optional. That's acceptable as all current implementations have them and we plan to make
`PersistentTaskParams` more central in the future.

Relates to #30731
2018-06-03 21:51:08 +02:00
Boaz Leskes 65d3f0efca Adapt transport tests for the extra byte introduced in #31020
We now serialize a feature array, which takes an extra byte when empty.
2018-06-02 13:04:40 +02:00
Jason Tedor 4522b57e07
Introduce client feature tracking (#31020)
This commit introduces the ability for a client to communicate to the
server features that it can support and for these features to be used in
influencing the decisions that the server makes when communicating with
the client. To this end we carry the features from the client to the
underlying stream as we carry the version of the client today. This
enables us to enhance the logic where we make protocol decisions on the
basis of the version on the stream to also make protocol decisions on
the basis of the features on the stream. With such functionality, the
client can communicate to the server if it is a transport client, or if
it has, for example, X-Pack installed. This enables us to support
rolling upgrades from the OSS distribution to the default distribution
without breaking client connectivity as we can now elect to serialize
customs in the cluster state depending on whether or not the client
reports to us using the feature capabilities that it can under these
customs. This means that we would avoid sending a client pieces of the
cluster state that it can not understand. However, we want to take care
and always send the full cluster state during node-to-node communication
as otherwise we would end up with different understanding of what is in
the cluster state across nodes depending on which features they reported
to have. This is why when deciding whether or not to write out a custom
we always send the custom if the client is not a transport client and
otherwise do not send the custom if the client is a transport client that
does not report to have the feature required by the custom.

Co-authored-by: Yannick Welsch <yannick@welsch.lu>
2018-06-01 11:45:35 -04:00
Jim Ferenczi 0791f93dbd
Add an option to split keyword field on whitespace at query time (#30691)
This change adds an option named `split_queries_on_whitespace` to the `keyword`
field type. When set to true, full text queries (`match`, `multi_match`, `query_string`, ...) that target the field will split the input on whitespace to build the query terms. Defaults to `false`.
Closes #30393
2018-06-01 09:47:03 +02:00
Nik Everett b225f5e5c6
HLRest: Allow caller to set per request options (#30490)
This modifies the high level rest client to allow calling code to
customize per request options for the bulk API. You do the actual
customization by passing a `RequestOptions` object to the API call
which is set on the `Request` that is generated by the high level
client. It also makes the `RequestOptions` a thing in the low level
rest client. For now that just means you use it to customize the
headers and the `httpAsyncResponseConsumerFactory` and we'll add
node selectors and per request timeouts in a follow up.

I only implemented this on the bulk API because it is the first one
in the list alphabetically and I wanted to keep the change small
enough to review. I'll convert the remaining APIs in a followup.
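A hedged usage sketch (the `RequestOptions` API was new in this PR and evolved in follow-ups, so the exact calls are approximate):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

// Sketch: per-request options on the bulk API, as described in the PR.
// Signatures follow the shape of later releases and may differ slightly here.
public class BulkWithOptions {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
            options.addHeader("X-Trace-Id", "example");   // customized per request, not per client

            BulkRequest bulk = new BulkRequest()
                .add(new IndexRequest("index", "_doc", "1").source("field", "value"));

            BulkResponse response = client.bulk(bulk, options.build());
            System.out.println("failures: " + response.hasFailures());
        }
    }
}
```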
2018-05-31 13:59:52 -04:00
Ryan Ernst 46e8d97813
Core: Remove RequestBuilder from Action (#30966)
This commit removes the RequestBuilder generic type from Action. It was
needed to be used by the newRequest method, which in turn was used by
client.prepareExecute. Both of these methods are now removed, along with
the existing users of prepareExecute constructing the appropriate
builder directly.
2018-05-31 16:15:00 +02:00
Martijn van Groningen 544822c78b
Moved keyword tokenizer to analysis-common module (#30642)
Relates to #23658
2018-05-29 19:22:28 +02:00
Vladimir Dolzhenko 81eb8ba0f0
Include size of snapshot in snapshot metadata (#29602)
Include size of snapshot in snapshot metadata

Adds difference of number of files (and file sizes) between prev and current snapshot. Total number/size reflects total number/size of files in snapshot.

Closes #18543
2018-05-25 21:04:50 +02:00
Martijn van Groningen ae2f021f1c
Move score script context from SearchScript to its own class (#30816) 2018-05-25 07:17:50 +02:00
Tim Brooks e8b70273c1
Remove Throwable usage from transport modules (#30845)
Currently nio and netty modules use the CompletableFuture class for
managing listeners. This is unfortunate as that class accepts
Throwable. This commit adds a class CompletableContext that wraps
the CompletableFuture but does not accept Throwable. This allows the
modification of netty and nio logic to no longer handle Throwable.
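The wrapper pattern in miniature (an illustration, not the actual CompletableContext source):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

// Pattern sketch: wrap CompletableFuture so callers can only complete
// exceptionally with an Exception, never an arbitrary Throwable.
public class CompletableContextSketch<T> {
    private final CompletableFuture<T> delegate = new CompletableFuture<>();

    public void addListener(BiConsumer<T, ? super Exception> listener) {
        delegate.whenComplete((value, throwable) -> {
            if (throwable == null) {
                listener.accept(value, null);
            } else if (throwable instanceof Exception) {
                listener.accept(value, (Exception) throwable);
            } else {
                // Errors are not handed to listeners; they escape instead.
                throw new AssertionError("unexpected error", throwable);
            }
        });
    }

    public boolean complete(T value) {
        return delegate.complete(value);
    }

    public boolean completeExceptionally(Exception e) {   // note: Exception, not Throwable
        return delegate.completeExceptionally(e);
    }
}
```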
2018-05-24 17:33:29 -06:00
David Turner ff0b6c795a
Decouple ClusterStateTaskListener & ClusterApplier (#30809)
Today, the `ClusterApplier` and `MasterService` both use the
`ClusterStateTaskListener` interface to notify their callers when asynchronous
activities have completed. However, this is not wholly appropriate: none of the
callers into the `ClusterApplier` care about the `ClusterState` arguments that
they receive.  This change introduces a dedicated ClusterApplyListener
interface for callers into the `ClusterApplier`, to distinguish these listeners
from the real `ClusterStateTaskListener`s that are waiting for responses from
the `MasterService`.
2018-05-24 09:05:09 +01:00
Tim Brooks d7040ad7b4
Reintroduce mandatory http pipelining support (#30820)
This commit reintroduces 31251c9 and 63a5799. These commits introduced a
memory leak and were reverted. This commit brings those commits back
and fixes the memory leak by removing unnecessary retain method calls.
2018-05-23 14:38:52 -06:00
Colin Goodheart-Smithe 4fd0a3e492 Revert "Make http pipelining support mandatory (#30695)" (#30813)
This reverts commit 31251c9 introduced in #30695.

We suspect this commit is causing the OOME's reported in #30811 and we will use this PR to test this assertion.
2018-05-23 10:54:46 -06:00
Yannick Welsch 30b004f582
Use original settings on full-cluster restart (#30780)
When doing a node restart using the test framework, the restarted node does not only use the
settings provided to the original node, but also additional settings provided by plugin extensions,
which does not correspond to the settings that a node would have on a true restart.
2018-05-23 09:02:01 +02:00
Tim Brooks 63a5799526
Remove http pipelining from integration test case (#30788)
This is related to #29500. We are removing the ability to disable http
pipelining. This PR removes the references to disabling pipelining in
the integration test case.
2018-05-22 17:18:05 -06:00
Luca Cavanna a17d6cab98
Replace Request#setHeaders with addHeader (#30588)
Adding headers rather than setting them all at once seems more
user-friendly and we already do it in a similar way for parameters
(see Request#addParameter).
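A hedged usage sketch of the additive style on the low-level client as it stood around this change (header handling later moved onto `RequestOptions`, so this is a snapshot of the API at the time):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Sketch of the additive style this change introduces on the low-level client.
public class AddHeaderExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/_cluster/health");
            request.addParameter("level", "indices");   // parameters were already added one at a time
            request.addHeader("X-Trace-Id", "example");  // headers now follow the same additive pattern
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```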
2018-05-22 20:32:30 +02:00
Nhat Nguyen 1918a30237
Upgrade to Lucene-7.4.0-snapshot-cc2ee23050 (#30778)
The new snapshot includes LUCENE-8324 which fixes missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve the failed tests in the CorruptedFileIT suite.

Closes #30741
Closes #30577
2018-05-22 13:11:48 -04:00
Tim Brooks 31251c9a6d
Make http pipelining support mandatory (#30695)
This is related to #29500 and #28898. This commit removes the ability
to disable http pipelining. After this commit, any elasticsearch node
will support pipelined requests from a client. Additionally, it extracts
some of the http pipelining work to the server module. This extracted
work is used to implement pipelining for the nio plugin.
2018-05-22 09:29:31 -06:00
Tim Brooks abf8c56a37
Remove logging from elasticsearch-nio jar (#30761)
This is related to #27260. The elasticsearch-nio jar is supposed to be
a library as opposed to a framework. Currently it internally logs certain
exceptions. This commit modifies it to not rely on logging. Instead
exception handlers are passed by the applications that use the jar.
2018-05-21 20:18:12 -06:00
Nhat Nguyen 67d8fc222d
Upgrade to Lucene-7.4.0-snapshot-59f2b7aec2 (#30726)
This snapshot resolves issues related to ShrinkIndexIT.
2018-05-18 18:21:39 -04:00
Ryan Ernst b3f3a4312b
Plugins: Remove meta plugins (#30670)
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
2018-05-18 10:56:08 -07:00
Adrien Grand 28d4685d72
Mitigate date histogram slowdowns with non-fixed timezones. (#30534)
Date histograms on non-fixed timezones such as `Europe/Paris` proved much slower
than histograms on fixed timezones in #28727. This change mitigates the issue by
using a fixed time zone instead when shard data doesn't cross a transition so
that all timestamps share the same fixed offset. This should be a common case
with daily indices.

NOTE: Rewriting the aggregation doesn't work since the timezone is then also
used on the coordinating node to create empty buckets, which might be out of the
range of data that exists on the shard.

NOTE: In order to be able to get a shard context in the tests, I reused code
from the base query test case by creating a new parent test case for both
queries and aggregations: `AbstractBuilderTestCase`.

Mitigates #28727
2018-05-16 17:06:52 +02:00
Zachary Tong df853c49c0
Add a MovingFunction pipeline aggregation, deprecate MovingAvg agg (#29594)
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of single data points.  It is the scripted version of MovingAvg pipeline agg.

Through custom script contexts, we expose a number of convenience methods:

 - MovingFunctions.max()
 - MovingFunctions.min()
 - MovingFunctions.sum()
 - MovingFunctions.unweightedAvg()
 - MovingFunctions.linearWeightedAvg()
 - MovingFunctions.ewma()
 - MovingFunctions.holt()
 - MovingFunctions.holtWinters()
 - MovingFunctions.stdDev()

The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
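To make the window semantics concrete, here is a tiny stand-alone illustration of an unweighted average over a sliding window (plain Java for illustration; in the aggregation you would call the whitelisted `MovingFunctions` helpers from a Painless script):

```java
// Stand-alone illustration of "moving function" semantics: apply a function
// (here an unweighted average) to a sliding window of the previous values.
public class MovingWindowExample {
    static double unweightedAvg(double[] window) {
        double sum = 0;
        int count = 0;
        for (double v : window) {
            if (!Double.isNaN(v)) {   // empty buckets are skipped
                sum += v;
                count++;
            }
        }
        return count == 0 ? Double.NaN : sum / count;
    }

    public static void main(String[] args) {
        double[] series = {1, 2, 3, 4, 5, 6};
        int windowSize = 3;
        for (int i = 0; i < series.length; i++) {
            int from = Math.max(0, i - windowSize);
            double[] window = java.util.Arrays.copyOfRange(series, from, i);
            System.out.println("bucket " + i + " -> " + unweightedAvg(window));
        }
    }
}
```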
2018-05-16 10:57:00 -04:00
Van0SS 4478f10a2a Rest High Level client: Add List Tasks (#29546)
This change adds a `listTasks` method to the high level java
ClusterClient which allows listing running tasks through the 
task management API.

Related to #27205
2018-05-16 13:31:37 +02:00
Nik Everett 9b47e0508b Fix compilation of test framework tests
We accidentally broke the compilation of the test frameworks tests. All
better now.
2018-05-15 22:56:41 -04:00
Jason Tedor 25c823da09
Skip shard deprecation messages in REST tests (#30630)
A 6.x node can send a deprecation message that the default number of
shards will change from five to one in 7.0.0. In a mixed cluster,
whether or not a create index request sees five or one shard and
produces a deprecation message depends on the version of the master
node. This means that during BWC tests a test can see this deprecation
message depending on the version of the master node. In 6.x when we
introduced this deprecation message we assumed that wherever we see
this deprecation message it is expected. However, in a mixed cluster test
we need a similar mechanism but it would only apply if the version of
the master node is earlier than 7.0.0. This commit takes advantage of a
recent change to expose the version of the master node to do sections of
REST tests. With this in hand, we can skip asserting on the deprecation
message if the version of the master node is before 7.0.0 and otherwise
seeing that deprecation message would be completely unexpected.
2018-05-15 21:07:32 -04:00
Tim Brooks 99b9ab58e2
Add nio http server transport (#29587)
This commit is related to #28898. It adds an nio driven http server
transport. Currently it only supports basic http features. Cors,
pipelining, and read timeouts will need to be added in future PRs.
2018-05-15 16:37:14 -06:00
Jason Tedor abc06d5b79
Expose master version in REST test context (#30623)
This commit exposes the master version to the REST test context. This
will be needed in a follow-up where the master version will be used to
determine whether or not a certain warning header is expected.
2018-05-15 17:26:43 -04:00
Nik Everett 869b639d14
QA: System property to override distribution (#30591)
This configures all `qa` projects to use the distribution contained in
the `tests.distribution` system property if it is set. The goal is to
create a simple way to run tests against the default distribution which
has x-pack basic features enabled while not forcing these tests on all
contributors. You run these tests by doing something like:

```
./gradlew -p qa -Dtests.distribution=zip check
```

or

```
./gradlew -p qa -Dtests.distribution=zip bwcTest
```

x-pack basic *shouldn't* get in the way of any of these tests but
nothing is ever perfect so we have to disable a few when running
with the zip distribution.
2018-05-15 17:16:16 -04:00
Julie Tibshirani 4f9dd37169
Add support for search templates to the high-level REST client. (#30473) 2018-05-15 13:07:58 -07:00
Jason Tedor 4a4e3d70d5
Default to one shard (#30539)
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard, with
the split API in place they no longer need to resort only to
reindexing.

Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
2018-05-14 12:22:35 -04:00
Martijn van Groningen 7b95470897
Moved tokenizers to analysis common module (#30538)
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.

Left the keyword tokenizer factory in the server module, because
normalizers directly depend on it. This should be addressed in a
follow-up change.

Relates to #23658
2018-05-14 07:55:01 +02:00
Yannick Welsch fc870fdb4c
Use simpler write-once semantics for HDFS repository (#30439)
There's no need for an extra `blobExists()` call when writing a blob to the HDFS service. The writeBlob implementation for the HDFS repository already uses the `CreateFlag.CREATE` option on the file creation, which ensures that the blob that's uploaded does not already exist. This saves one network roundtrip.
2018-05-11 09:50:37 +02:00
Jay Modi f733de8e67
Security: fix TokenMetaData equals and hashcode (#30347)
The TokenMetaData equals method compared byte arrays using `.equals` on
the arrays themselves, which is the equivalent of an `==` check. This
means that a separate byte[] with the same contents would not be
considered equivalent to the existing one, even though it should be.

The method has been updated to use `Arrays#equals` and similarly the
hashcode method has been updated to call `Arrays#hashCode` instead of
calling hashcode on the array itself.
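The underlying Java pitfall, in miniature (just the pattern, not the TokenMetaData source):

```java
import java.util.Arrays;

// The bug class being fixed: byte[].equals() is reference equality, so two
// arrays with identical contents compare as different.
public class ArrayEqualityPitfall {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        System.out.println(a.equals(b));          // false - same as a == b
        System.out.println(Arrays.equals(a, b));  // true  - content comparison
        System.out.println(a.hashCode() == b.hashCode());              // almost certainly false
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b));  // true
    }
}
```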
2018-05-10 13:12:11 -06:00
Jason Tedor bf2365d13b
Remove BWC repository test (#30500)
This commit removes a test that we can not restore from 1.x and 2.x
repository files. This test is not needed, the version of Elasticsearch
that this commit targets can not even read index files from those
versions.
2018-05-09 23:24:54 -04:00
Nik Everett f9dc86836d
Docs: Test examples that recreate lang analyzers (#29535)
We have a pile of documentation describing how to rebuild the built in
language analyzers and, previously, our documentation testing framework
made sure that the examples successfully built *an* analyzer but they
didn't assert that the analyzer built by the documentation matches the
built in analyzer. Unsurprisingly, some of the examples aren't quite
right.

This adds a mechanism that tests the analyzers built by the docs.
The mechanism is fairly simple and brutal but it seems to be working:
build a hundred random unicode sequences and send them through the
`_analyze` API with the rebuilt analyzer and then again through the
built in analyzer. Then make sure both APIs return the same results.
Each of these calls to `_analyze` takes about 20ms on my laptop which
seems fine.
2018-05-09 09:23:10 -04:00
Stéphane Campinas 2f8905839f Correct wording in log message (#30336) 2018-05-07 12:00:06 +02:00
Tanguy Leroux 1987d6261f
Do not fail snapshot when deleting a missing snapshotted file (#30332)
When deleting or creating a snapshot for a given shard, elasticsearch 
usually starts by listing all the existing snapshotted files in the repository. 
Then it computes a diff and deletes the snapshotted files that are not 
needed anymore. During this deletion, an exception is thrown if the file 
to be deleted does not exist anymore.

This behavior is challenging with cloud based repository implementations 
like S3 where a file that has been deleted can still appear in the bucket for 
a few seconds/minutes (because the deletion can take some time to be fully 
replicated on S3). If the deleted file appears in the listing of files, then the 
following deletion will fail with a NoSuchFileException and the snapshot 
will be partially created/deleted.

This pull request makes the deletion of these files a bit less strict, i.e. not 
failing if the file we want to delete does not exist anymore. It introduces a 
new BlobContainer.deleteIgnoringIfNotExists() method that can be used 
at some specific places where not failing when deleting a file is 
considered harmless.

Closes #28322
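A sketch of the lenient delete described above, in default-method form (illustrative; the real interface may differ):

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;

// Sketch of a lenient delete: missing files are not treated as failures.
public interface LenientBlobContainerSketch {
    void deleteBlob(String blobName) throws IOException;

    default void deleteBlobIgnoringIfNotExists(String blobName) throws IOException {
        try {
            deleteBlob(blobName);
        } catch (NoSuchFileException e) {
            // The blob is already gone (e.g. eventually-consistent listings on S3); that is fine.
        }
    }
}
```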
2018-05-07 09:35:55 +02:00
Jim Ferenczi dbd857341f
Upgrade to 7.4.0-snapshot-1ed95c097b (#30357)
Upgrade to lucene-7.4.0-snapshot-1ed95c097b

This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
2018-05-04 11:44:22 +02:00
Zachary Tong 3c2d2a7d4a
Fix NPE when CumulativeSum agg encounters null/empty bucket (#29641)
Fix NPE when CumulativeSum agg encounters null/empty bucket

If the cusum agg encounters a null value, it's because the value is
missing (like the first value from a derivative agg), the path is
not valid, or the bucket in the path was empty.

Previously cusum would just explode on the null, but this changes it
so we only increment the sum if the value is non-null and finite.
This is safe because even if the cusum encounters all null or empty
buckets, the cumulative sum is still zero (like how the sum agg returns
zero even if all the docs were missing values)

I went ahead and tweaked AggregatorTestCase to allow testing pipelines,
so that I could delete the IT test and reimplement it as AggTests.

Closes #27544
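The guard described above boils down to something like this (illustrative only, not the aggregator source):

```java
// Illustration of the fix: only fold a bucket's value into the running sum
// when it is present and finite; otherwise carry the sum forward unchanged.
public class CumulativeSumGuard {
    public static void main(String[] args) {
        Double[] bucketValues = {1.0, null, 2.5, Double.NaN, 3.0};   // null = missing/empty bucket
        double sum = 0;
        for (Double value : bucketValues) {
            if (value != null && Double.isFinite(value)) {
                sum += value;
            }
            System.out.println("cumulative_sum = " + sum);
        }
    }
}
```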
2018-05-02 12:22:55 -07:00
Ryan Ernst fb0aa562a5
Network: Remove http.enabled setting (#29601)
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.

closes #12792
2018-05-02 11:42:05 -07:00
Ryan Ernst f0e92676b1
Tests: Simplify VersionUtils released version splitting (#30322)
This commit refactors VersionUtils.resolveReleasedVersions to be
simpler, and in the process fixes the behavior to match that of
VersionCollection.groovy.

closes #30133
2018-05-02 09:29:35 -07:00
Adrien Grand 368ddc408f
Remove MapperService#types(). (#29617)
This isn't necessary with a single type per index.
2018-05-02 11:35:12 +02:00
Boaz Leskes 4a537ef03c
Bulk operation fail to replicate operations when a mapping update times out (#30244)
Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3) we may fail to properly replicate operations when a mapping update on master fails. If a bulk
operation needs a mapping update half way, it will send a request to the master before continuing 
to index the operations. If that request times out or isn't acked (i.e., even one node in the cluster 
didn't process it within 30s), we end up throwing the exception and aborting the entire bulk. This is 
a problem because all operations that were processed so far are not replicated any more to the 
replicas. Although these operations were never "acked" to the user (we threw an error) it causes the 
local checkpoint on the replicas to lag (on 6.x) and the primary and replica to diverge. 

This PR does a couple of things:
1) Most importantly, treat *any* mapping update failure as a document level failure, meaning only 
    the relevant indexing operation will fail.
2) Removes the mapping update callbacks from `IndexShard.applyIndexOperationOnPrimary` and 
    similar methods for simpler execution. We don't use exceptions any more when a mapping 
    update was successful.

I think we need to do more work here (the fact that a single slow node can prevent those mappings 
updates from being acked and thus fail operations is bad), but I want to keep this as small as I can 
(it is already too big).
2018-05-01 08:15:02 +02:00
Nik Everett 50945051b6
HTML5ify Javadoc for core and test framework (#30234)
`javadoc` will switch from defaulting to html4 to html5 in "a future
release". We should get ahead of it so we're not surprised. Also, HTML5
is the future! Er, the present. Anyway, this follows up from #30220 to
make the Javadoc for two of the four remaining projects HTML5
compatible.
2018-04-30 09:39:50 -04:00
Martijn van Groningen 621a1935b8
test: also assert deprecation warning after clusters have been closed. 2018-04-19 09:20:04 +02:00
Ryan Ernst 1cb3a9d9dc Test: Guard deprecation check when 0 nodes created
The internal test cluster can sometimes have 0 nodes. In this situation,
the http.enabled flag will never be read, and thus no deprecation
warning will be emitted. This commit guards the deprecation warning
check in this case.
2018-04-18 21:18:56 -07:00
Ryan Ernst 98d776edaf
Networking: Deprecate http.enabled setting (#29591)
This commit deprecates the http.enabled setting, in preparation for removing the
feature in 7.0.

relates #12792
2018-04-18 17:36:09 -07:00
Julie Tibshirani c8209fa7b1
Fix the assertion message for an incorrect current version. (#29572) 2018-04-17 19:27:02 -07:00
Nhat Nguyen 45c6c20467
Enforce translog access via engine (#29542)
Today the translog of an engine is exposed and can be accessed directly.
While this exposure offers much flexibility, it also causes these troubles:

- Inconsistent behavior between translog method and engine method.
For example, rolling a translog generation via an engine also trims
unreferenced files, but translog's method does not.

- An engine does not get notified when critical errors happen in translog
as the access is direct.

This change isolates translog of an engine and enforces all accesses to
translog via the engine.
2018-04-17 08:03:41 -04:00
Jason Tedor 1dd0fd4874
Deprecate the index thread pool (#29540)
The index thread pool is no longer needed as its primary use-case for
single-document indexing requests has been relieved now that
single-document indexing requests are converted to bulk indexing
requests (with a single document payload).
2018-04-17 06:47:30 -04:00
olcbean b3e3b80f1b REST high-level client: add support for Indices Update Settings API [take 2] (#29327)
Relates to #27205
2018-04-16 21:39:11 +02:00
Simon Willnauer eab530ce11 Ensure flush happens on shard idle
This adds 2 test cases that verify that if a shard goes idle,
pending (uncommitted) segments are committed and unreferenced
files are freed.

Relates to #29482
2018-04-13 15:06:51 +02:00
Nhat Nguyen f96e00badf
Add primary term to translog header (#29227)
This change adds the current primary term to the header of the current
translog file. Having a term in a translog header is a prerequisite step
that allows us to trim translog operations given the max valid seq# for
that term.

This commit also updates tests to conform to the primary term invariant,
which guarantees that all translog operations in a translog file have
terms at most the term stored in the translog header.
2018-04-12 13:57:59 -04:00
Lee Hinman d72d3f996e
Add a helper method to get a random java.util.TimeZone (#29487)
* Add a helper method to get a random java.util.TimeZone

This adds a helper method to ESTestCase that returns a randomized
`java.util.TimeZone`. This can be used when transitioning code from Joda to the
JDK's time classes.
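A hypothetical stand-alone version of such a helper (the real one lives on `ESTestCase` and draws from the test framework's seeded random source):

```java
import java.util.Random;
import java.util.TimeZone;

// Hypothetical stand-alone version of the helper: pick a random valid
// java.util.TimeZone id. The real helper uses the test framework's seeded
// randomness so failures stay reproducible.
public class RandomTimeZoneSketch {
    static TimeZone randomTimeZone(Random random) {
        String[] ids = TimeZone.getAvailableIDs();
        return TimeZone.getTimeZone(ids[random.nextInt(ids.length)]);
    }

    public static void main(String[] args) {
        System.out.println(randomTimeZone(new Random(42L)).getID());
    }
}
```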
2018-04-12 11:56:42 -06:00
Adrien Grand 4918924fae
Remove legacy mapping code. (#29224)
Some features have been deprecated since `6.0` like the `_parent` field or the
ability to have multiple types per index. This allows us to remove quite a lot of
code, which in turn will hopefully make it easier to proceed with the removal
of types.
2018-04-11 09:41:37 +02:00
tomcallahan 2574064e66
Enable rest tests via IDEs (#29439)
Currently rest-based tests do not work from the IDE, as the security
manager is configured to permit certain network operations when
using the snapshot jars compiled by gradle.  We have an existing
workaround that explicitly associates a codebase with the path
from which the classes are loaded (in this case, the IDE build
directory).  This PR adds the rest client to this workaround list.
2018-04-10 09:08:58 -04:00
Lee Hinman a07ba9e400
Move Streams.copy into elasticsearch-core and make a multi-release jar (#29322)
* Move Streams.copy into elasticsearch-core and make a multi-release jar

This moves the method `Streams.copy(InputStream in, OutputStream out)` into the
`elasticsearch-core` project (inside the `o.e.core.internal.io` package). It
also makes this class into a multi-release class where the Java 9 equivalent
uses `InputStream#transferTo`.

This is a followup from
https://github.com/elastic/elasticsearch/pull/29300#discussion_r178147495
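The Java 8 vs Java 9 split, sketched; a multi-release jar lets newer JVMs pick the `transferTo` variant automatically (an illustration, not the Elasticsearch source):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustration of the two copy implementations a multi-release jar can ship:
// a manual buffer loop for Java 8, and InputStream#transferTo on Java 9+.
public class StreamsCopySketch {
    static long copyJava8(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    static long copyJava9(InputStream in, OutputStream out) throws IOException {
        return in.transferTo(out);   // available since Java 9
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyJava8(new ByteArrayInputStream("hello".getBytes()), out);
        System.out.println(out.toString());
    }
}
```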
2018-04-06 11:07:20 -06:00
Lee Hinman a93c942927
Move ObjectParser into the x-content lib (#29373)
* Move ObjectParser into the x-content lib

This moves `ObjectParser`, `AbstractObjectParser`, and
`ConstructingObjectParser` into the libs/x-content dependency. This decoupling
allows them to be used for parsing for projects that don't want to depend on the
entire Elasticsearch jar.

Relates to #28504
2018-04-06 09:41:14 -06:00
Colin Goodheart-Smithe 55c8e80532
Fixes query_string query equals timezone check (#29406)
* Fixes query_string query equals timezone check

This change fixes a bug where two `QueryStringQueryBuilder`s were found
to be equal if they had the same timezone set even if the query string
in the builders were different

Closes #29403

* Adds mutate function to QueryStringQueryBuilderTests

* iter
2018-04-06 11:45:34 +01:00
Adrien Grand 569d0c0e89
Improve similarity integration. (#29187)
This improves the way similarities are plugged in in order to:
 - reject the classic similarity on 7.x indices and emit a deprecation
   warning otherwise
 - reject unknown parameters on 7.x indices and emit a deprecation
   warning otherwise

Even though this breaks the plugin API, I'd like to backport to 7.x so
that users can get deprecation warnings when they are doing something
that will become unsupported in the future.

Closes #23208
Closes #29035
2018-04-03 16:45:25 +02:00
Adrien Grand 3bdfc8f3fb
Upgrade to lucene-7.3.0-snapshot-98a6b3d. (#29298)
Most notable changes include:
 - this release doesn't have the 7.2.1 version constant so I had to create one
 - spatial4j and jts were upgraded
2018-04-03 09:27:14 +02:00
Lee Hinman 6b2167f462
Begin moving XContent to a separate lib/artifact (#29300)
* Begin moving XContent to a separate lib/artifact

This commit moves a large portion of the XContent code from the `server` project
to the `libs/xcontent` project. For the pieces that have been moved, some
helpers have been duplicated to allow them to be decoupled from ES helper
classes. In addition, `Booleans` and `CheckedFunction` have been moved to the
`elasticsearch-core`  project.

This decoupling is a move so that we can eventually make things like the
high-level REST client not rely on the entire ES jar, only the parts it needs.

There are some pieces that are still not decoupled, in particular some of the
XContent tests still remain in the server project, this is because they test a
large portion of the pluggable xcontent pieces through
`XContentElasticsearchException`. They may be decoupled in future work.
Additionally, there may be more pieces that we want to move to the xcontent lib
in the future that are not part of this PR, this is a starting point.

Relates to #28504
2018-04-02 15:58:31 -06:00
Mayya Sharipova e70cd35bda
Revert "REST high-level client: add support for Indices Update Settings API (#28892)" (#29323)
This reverts commit b67b5b1bbd.
2018-03-30 16:26:46 -07:00
Andy Bristol b7e6fb9ac5
[test] remove Streamable serde assertions (#29307)
Removes a set of assertions in the test framework that verified that
Streamable objects could be serialized and deserialized across different
versions. When this was discussed the consensus was that this approach
has not caught many bugs in a long time and that serialization testing of
objects was best left to their respective unit and integration tests.

This commit also removes a transport interceptor that was used in
ESIntegTestCase tests to make these assertions about objects coming in
or off the wire.
2018-03-30 14:09:26 -07:00
olcbean b67b5b1bbd REST high-level client: add support for Indices Update Settings API (#28892)
Relates to #27205
2018-03-30 10:53:29 +02:00
Jason Tedor 4ef3de40bc
Fix handling of bad requests (#29249)
Today we have a few problems with how we handle bad requests:
 - handling requests with bad encoding
 - handling requests with invalid value for filter_path/pretty/human
 - handling requests with a garbage Content-Type header

There are two problems:
 - in every case, we give an empty response to the client
 - in most cases, we leak the byte buffer backing the request!

These problems are caused by a broader problem: poor handling when preparing
the request for handling, or when preparing the channel to write to when the response
is ready. This commit addresses these issues by taking a unified
approach to all of them that ensures that:
 - we respond to the client with the exception that blew us up
 - we do not leak the byte buffer backing the request
2018-03-28 16:25:01 -04:00
Simon Willnauer 13e19e7428
Allow _update and upsert to read from the transaction log (#29264)
We historically removed reading from the transaction log to get consistent
results from _GET calls. There was also the motivation that the read-modify-update
principle we apply should not be hidden from the user. We still agree on the fact
that we should not hide these aspects but the impact on updates is quite significant
especially if the same document is updated before it's written to disk and made searchable.

This change adds back the ability to read from the transaction log but only for update calls.
Calls to the _GET API will always do a refresh if necessary to return consistent results, i.e.
if stored fields or DocValues Fields are requested.

Closes #26802
2018-03-28 18:03:34 +02:00
Yannick Welsch cacf759213
Remove RELOCATED index shard state (#29246)
as this information is already covered by ReplicationTracker.primaryMode.
2018-03-28 12:25:46 +02:00
Nhat Nguyen 87957603c0
Prune only gc deletes below local checkpoint (#28790)
Once a document is deleted and Lucene is refreshed, we will not be able 
to look up the `version/seq#` associated with that delete in Lucene. As
conflicting operations can still be indexed, we need another mechanism
to remember these deletes. Therefore deletes should still be stored in
the Version Map, even after Lucene is refreshed. Obviously, we can't
remember all deletes forever so a trimming mechanism is needed.
Currently, we remember deletes for at least 1 minute (the default GC
deletes cycle) and clean them periodically. This is, at the moment, the
best we can do on the primary for user facing APIs but this arbitrary
time limit is problematic for replicas. Furthermore, we can't rely on
the primary and replicas doing the trimming in a synchronized manner,
and failing to do so results in the replica and primary making different
decisions. 

The following scenario can cause inconsistency between
primary and replica.

1. Primary index doc (index, id=1, v2)
2. Network packet issue causes index operation to back off and wait
3. Primary deletes doc (delete, id=1, v3)
4. Replica processes delete (delete, id=1, v3)
5. 1+ minute passes (GC deletes runs on the replica)
6. Indexing op is finally sent to the replica, which now processes it 
   because it forgot about the delete.

We can rely on sequence numbers to prevent this issue. If we prune only 
deletes whose seq# is at most the local checkpoint, a replica will
correctly remember what it needs. The correctness is explained as
follows:

Suppose o1 and o2 are two operations on the same document with seq#(o1) 
< seq#(o2), and o2 arrives before o1 on the replica. o2 is processed
normally since it arrives first; when o1 arrives it should be discarded:
 
1. If seq#(o1) <= LCP, then it will not be added to Lucene, as it was
  already previously added.

2. If seq#(o1)  > LCP, then it depends on the nature of o2:
  - If o2 is a delete then its seq# is recorded in the VersionMap,
    since seq#(o2) > seq#(o1) > LCP, so a lookup can find it and
    determine that o1 is stale.
  
  - If o2 is an indexing then its seq# is either in Lucene (if
    refreshed) or the VersionMap (if not refreshed yet), so a 
    real-time lookup can find it and determine that o1 is stale.

In this PR, we prefer to deploy a single trimming strategy, which 
satisfies both requirements, on primary and replicas because:

- It's simpler - no need to distinguish if an engine is running at
primary mode or replica mode or being promoted.

- If a replica subsequently is promoted, user experience is fully
maintained as that replica remembers deletes for the last GC cycle.

However, the version map may consume less memory if we deploy two 
different trimming strategies for primary and replicas.
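A toy sketch of the pruning rule (types and fields are hypothetical; the real logic lives in the engine's live version map):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the trimming rule: keep a delete tombstone as long as its
// seq# is above the local checkpoint, so late-arriving older operations can
// still be detected as stale. Types and fields here are hypothetical.
public class TombstonePruningSketch {
    static final class DeleteTombstone {
        final long seqNo;
        final long timestampMillis;
        DeleteTombstone(long seqNo, long timestampMillis) {
            this.seqNo = seqNo;
            this.timestampMillis = timestampMillis;
        }
    }

    static void prune(Map<String, DeleteTombstone> tombstones, long localCheckpoint,
                      long nowMillis, long gcDeletesMillis) {
        tombstones.values().removeIf(t ->
            t.seqNo <= localCheckpoint                                // safe: already at or below the LCP
                && nowMillis - t.timestampMillis > gcDeletesMillis);  // and old enough per the GC deletes cycle
    }

    public static void main(String[] args) {
        Map<String, DeleteTombstone> tombstones = new HashMap<>();
        tombstones.put("doc-1", new DeleteTombstone(5, 0));
        tombstones.put("doc-2", new DeleteTombstone(12, 0));
        prune(tombstones, 10, 120_000, 60_000);
        System.out.println(tombstones.keySet());   // doc-2 is kept: its seq# is above the checkpoint
    }
}
```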
2018-03-26 13:42:08 -04:00
Boaz Leskes f5d4550e93
Fold EngineDiskUtils into Store, for better lock semantics (#29156)
#28245 introduced the utility class `EngineDiskUtils` with a set of methods to prepare/change
translog and lucene commit points. That util class bundled everything that's needed to create an
empty shard, bootstrap a shard from a lucene index that was just restored, etc. 

In order to safely do these manipulations, the util methods acquired the IndexWriter's lock. That
would sometimes fail due to concurrent shard store fetching or other short activities that require the
files not to be changed while they read from them. 

Since there is no way to wait on the index writer lock, the `Store` class has other locks to make
sure that once we try to acquire the IW lock, it will succeed. To side step this waiting problem, this
PR folds `EngineDiskUtils` into `Store`. Sadly this comes with a price - the store class doesn't and
shouldn't know about the translog. As such the logic is slightly less tight and callers have to do the
translog manipulations on their own.
2018-03-26 14:08:03 +02:00
Jim Ferenczi 5288235ca3
Optimize the composite aggregation for match_all and range queries (#28745)
This change refactors the composite aggregation to add an execution mode that visits documents in the order of the values
present in the leading source of the composite definition. This mode does not need to visit all documents since it can early terminate
the collection when the leading source value is greater than the lowest value in the queue.
Instead of collecting the documents in the order of their doc_id, this mode uses the inverted lists (or the bkd tree for numerics) to collect documents
in the order of the values present in the leading source.
For instance the following aggregation:

```
"composite" : {
  "sources" : [
    { "value1": { "terms" : { "field": "timestamp", "order": "asc" } } }
  ],
  "size": 10
}
```
... can use the field `timestamp` to collect the documents with the 10 lowest values for the field instead of visiting all documents.
For composite aggregation with more than one source the execution can early terminate as soon as one of the 10 lowest values produces enough
composite buckets. For instance, if visiting the first two lowest timestamps created 10 composite buckets, we can early terminate the collection since it
is guaranteed that the third lowest timestamp cannot create a composite key that compares lower than the one already visited.

This mode can execute iff:
 * The leading source in the composite definition uses an indexed field of type `date` (works also with `date_histogram` source), `integer`, `long` or `keyword`.
 * The query is a match_all query or a range query over the field that is used as the leading source in the composite definition.
 * The sort order of the leading source is the natural order (ascending since postings and numerics are sorted in ascending order only).

If these conditions are not met this aggregation visits each document like any other agg.
2018-03-26 09:51:37 +02:00
Lee Hinman b4af451ec5
Remove BytesArray and BytesReference usage from XContentFactory (#29151)
* Remove BytesArray and BytesReference usage from XContentFactory

This removes the usage of `BytesArray` and `BytesReference` from
`XContentFactory`. Instead, a regular `byte[]` should be passed. To assist with
this a helper has been added to `XContentHelper` that will preserve the offset
and length from the underlying BytesReference.

This is part of ongoing work to separate the XContent parts from ES so they can
be factored into their own jar.

Relates to #28504
2018-03-20 11:52:26 -06:00
Nik Everett a813492fe3
Tests: Make $_path support dots in paths (#28917)
`$_path` is used by documentation tests to ignore a value from a
response, for example:

```
[source,js]
----
{
  "count": 1,
  "datafeeds": [
    {
      "datafeed_id": "datafeed-total-requests",
      "state": "started",
      "node": {
        ...
        "attributes": {
          "ml.machine_memory": "17179869184",
          "ml.max_open_jobs": "20",
          "ml.enabled": "true"
        }
      },
      "assignment_explanation": ""
    }
  ]
}
----
// TESTRESPONSE[s/"17179869184"/$body.$_path/]
```

That example shows `17179869184` in the compiled docs but when it runs
the tests generated by that doc it ignores `17179869184` and asserts
instead that there is a value in that field. This is required because we
can't predict things like "how many milliseconds will this take?" and
"how much memory will this take?".

Before this change it was impossible to use `$_path` when any component
of the path contained a `.`. This fixes the `$_path` evaluator to
properly escape `.`.

Closes #28770
2018-03-19 14:17:09 -04:00
Christoph Büscher 312ccc05d5
[Tests] Fix GetResultTests and DocumentFieldTests failures (#29083)
Changes made in #28972 seem to have changed some assumptions about how
SMILE and CBOR write byte[] values and how this is tested. This changes
the generation of the randomized DocumentField values back to BytesArray
while expecting the JSON and YAML deserialisation to produce Base64
encoded strings and SMILE and CBOR to parse back BytesArray instances.

Closes #29080
2018-03-15 16:42:26 +01:00
Boaz Leskes bf65cb4914
Untangle Engine Constructor logic (#28245)
Currently we have fairly complicated logic in the engine constructor to deal with all the 
various ways we want to mutate the lucene index and translog we're opening.

We can:
1) Create an empty index
2) Use the existing lucene index but create a new translog
3) Use both
4) Force a new history uuid in all cases.

This leads to complicated code flows which makes it harder and harder to make sure we cover all the 
corner cases. This PR tries to take another approach. Constructing an InternalEngine always opens 
things as they are and all needed modifications are done by static methods directly on the 
directory, one at a time.
2018-03-14 20:59:47 +01:00
Lee Hinman 8e8fdc4f0e
Decouple XContentBuilder from BytesReference (#28972)
* Decouple XContentBuilder from BytesReference

This commit removes all mentions of `BytesReference` from `XContentBuilder`.
This is needed so that we can completely decouple the XContent code and move it
into its own dependency.

While this change appears large, it is due to two main changes, moving
`.bytes()` and `.string()` out of XContentBuilder itself into static methods
`BytesReference.bytes` and `Strings.toString` respectively. The rest of the
change is code reacting to these changes (the majority of it in tests).

Relates to #28504
2018-03-14 13:47:57 -06:00
Jason Tedor 5904d936fa
Copy Lucene IOUtils (#29012)
As we have factored Elasticsearch into smaller libraries, we have ended
up in a situation that some of the dependencies of Elasticsearch are not
available to code that depends on these smaller libraries but not server
Elasticsearch. This is a good thing, this was one of the goals of
separating Elasticsearch into smaller libraries, to shed some of the
dependencies from other components of the system. However, this now
means that simple utility methods from Lucene that we rely on are no
longer available everywhere. This commit copies IOUtils (with some small
formatting changes for our codebase) into the fold so that other
components of the system can rely on these methods where they no longer
depend on Lucene.
2018-03-13 12:49:33 -04:00
Jason Tedor 8b6fbe2c11
Add test for dying with dignity (#28987)
I have long wanted an actual test that dying with dignity works. It is
tricky because if dying with dignity works, it means the test JVM dies
which is usually an abnormal condition. And anyway, how does one force a
fatal error to be thrown? I was motivated to investigate this again by
the fact that I missed a backport to one branch leading to an issue
where Elasticsearch would not successfully die with dignity. And now we
have a solution: we install a plugin that throws an out of memory error
when it receives a request. We hack the standalone test infrastructure
to prevent this from failing the test. To do this, we bypass the
security manager and remove the PID file for the node; this tricks the
test infrastructure into thinking that it does not need to stop the
node. We also bypass seccomp so that we can fork jps to make sure that
Elasticsearch really died. And to be extra paranoid, we parse the logs
of the dead Elasticsearch process to make sure it died with
dignity. Never forget.
2018-03-12 23:20:07 -04:00
Luca Cavanna 184a8718d8
REST high-level client: add flush API (#28852)
Relates to #27205
2018-03-01 10:56:03 +01:00
Luca Cavanna cd3d9c9f80
[TEST] share code between streamable/writeable/xcontent base test classes (#28785)
Today we have two test base classes that have a lot in common when it comes to testing wire and xcontent serialization: `AbstractSerializingTestCase` and `AbstractXContentStreamableTestCase`. There are subtle differences though between the two, in the way they work, what can be overridden and features that they support (e.g. insertion of random fields).

This commit introduces a new base class called `AbstractWireTestCase` which holds all of the serialization test code in common between `Streamable` and `Writeable`. It has two minimal subclasses called `AbstractWireSerializingTestCase` and `AbstractStreamableTestCase` which are specialized for `Writeable` and `Streamable`.

This commit also introduces a new test class called `AbstractXContentTestCase` for all of the xContent testing, which holds a testFromXContent method for parsing and rendering to xContent. This one can be delegated to from the existing `AbstractStreamableXContentTestCase` and `AbstractSerializingTestCase` so that we avoid code duplication as much as possible and all these base classes offer the same functionalities in the same way. Having this last base class decoupled from the serialization testing may also help with the REST high-level client testing, as there are some classes where it's hard to implement equals/hashcode and this makes it possible to override `assertEqualInstances` for custom equality comparisons (also this base class doesn't require implementing equals/hashcode as it doesn't test such methods).
2018-02-23 10:48:48 +01:00
Tim Brooks 5a8ec9b762
Selectors operate on channel contexts (#28468)
This commit is related to #27260. Currently there is a weird
relationship between channel contexts and nio channels. The selectors
use the context for read and writing. But the selector operates directly
on the nio channel for registering, closing, and connecting.

This commit works on improving this relationship. The selector operates
directly on the context which wraps the low level java.nio.channels. The
NioChannel class is simply an API that is used to interact with the
channel (sending messages from outside the selector event loop,
scheduling a close, adding listeners, etc). The context is only used
internally by the channel to implement these apis and by the selector to
perform these operations.
2018-02-22 09:44:52 -07:00
Luca Cavanna 8b4a298874
Migrate some *ResponseTests to AbstractStreamableXContentTestCase (#28749)
This allows us to save a bit of code, but also adds more coverage as it tests serialization which was missing in some of the existing tests. Also it requires implementing equals/hashcode and we get the corresponding tests for them for free from the base test class.
2018-02-21 20:04:12 +01:00
Lee Hinman d7eae4b90f
Pass InputStream when creating XContent parser (#28754)
* Pass InputStream when creating XContent parser

Rather than passing the raw `BytesReference` in when creating the xcontent
parser, this passes the StreamInput (which is an InputStream), which allows us to
decouple XContent from BytesReference.

This also removes the use of `commons.Booleans` so it doesn't require more
external commons classes.

Related to #28504

* Undo boolean removal

* Enhance deprecation javadoc
2018-02-21 11:03:25 -07:00
Yu 7d8fb69d50 version set in ingest pipeline (#27573)
Add support for version and version_type in ingest pipelines

Add support for setting document version and version type in set
processor of an ingest pipeline.
2018-02-21 09:34:51 +01:00
Lee Hinman d4fddfa2a0
Remove log4j dependency from elasticsearch-core (#28705)
* Remove log4j dependency from elasticsearch-core

This removes the log4j dependency from our elasticsearch-core project. It was
originally necessary only for our jar classpath checking. It is now replaced by
a `Consumer<String>` so that the es-core dependency doesn't have external
dependencies.

The parts of #28191 which were moved in conjunction (like `ESLoggerFactory` and
`Loggers`) have been moved back where appropriate, since they are not required
in the core jar.

This is tangentially related to #28504

* Add javadocs for `output` parameter

* Change @code to @link
2018-02-20 09:15:54 -07:00
Luca Cavanna 8bbb3c9ffa
REST high-level client: add support for Rollover Index API (#28698)
Relates to #27205
2018-02-20 15:58:58 +01:00
Simon Willnauer 779bc6fd5c
Simplify Engine.Searcher creation (#28728)
Today we have several levels of indirection to acquire an Engine.Searcher.
We first acquire the reference manager for the scope, then acquire an
IndexSearcher and then create a searcher for the engine based on that.
This change simplifies the creation into a single method call instead of
3 different ones.
2018-02-20 09:35:49 +01:00
Nhat Nguyen 84fd39f5bb
Separate acquiring safe commit and last commit (#28271)
Previously we introduced a new parameter to `acquireIndexCommit` to
allow acquiring either a safe commit or the last commit. However with the
new parameters, callers can provide a nonsense combination - flush first
but acquire the safe commit. This commit separates acquireIndexCommit
method into two different methods to avoid that problem. Moreover, this
change should also improve the readability.

Relates #28038
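A sketch of the split (method names follow the commit description; signatures and return types are simplified stand-ins):

```java
// Illustrative sketch of splitting one method with a confusing flag into two
// intention-revealing methods. Types are simplified stand-ins.
public interface CommitAcquirerSketch {
    // Before: acquireIndexCommit(boolean flushFirst, boolean acquireSafeCommit)
    // allowed nonsense combinations such as "flush first but take the safe commit".

    AutoCloseable acquireLastIndexCommit(boolean flushFirst) throws Exception;

    AutoCloseable acquireSafeIndexCommit() throws Exception;   // never flushes; the safe commit already exists
}
```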
2018-02-16 21:25:58 -05:00
Luca Cavanna ebe5e8e635
REST high-level client: encode path parts (#28663)
The REST high-level client now supports encoding of path parts, so that for instance documents with valid ids, but containing characters that need to be encoded as part of urls (`#` etc.), are properly supported. We also make sure that each path part can contain `/` by encoding it properly too.

Closes #28625
2018-02-15 17:22:45 +01:00
olcbean 02fc16f10e Add Cluster Put Settings API to the high level REST client (#28633)
Relates to #27205
2018-02-15 17:21:45 +01:00
Boaz Leskes beb55d148a
Simplify the Translog constructor by always expecting an existing translog (#28676)
Currently the Translog constructor is capable both of opening an existing translog and creating a
new one (deleting existing files). This PR separates these two into separate code paths. The
constructor opens existing files, and a dedicated static method creates an empty translog.
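
A minimal sketch of that constructor-versus-factory split, using hypothetical names and a made-up checkpoint file (not the actual Translog implementation), might look like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the pattern: the constructor only ever opens an existing translog, while a
// dedicated static method is responsible for creating an empty one.
final class TranslogSketch {

    private final Path location;

    /** Opens an existing translog; fails if there is nothing to open. */
    TranslogSketch(Path location) throws IOException {
        if (Files.exists(location.resolve("translog.ckp")) == false) { // checkpoint file name is an assumption
            throw new IOException("no existing translog at " + location);
        }
        this.location = location;
    }

    /** Creates a brand new, empty translog at the given location and returns a handle to it. */
    static TranslogSketch createEmptyTranslog(Path location) throws IOException {
        Files.createDirectories(location);
        Files.createFile(location.resolve("translog.ckp")); // write an empty checkpoint
        return new TranslogSketch(location);
    }
}
```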
2018-02-15 09:24:09 +01:00
Lee Hinman b59b1cf59d
Move more XContent.createParser calls to non-deprecated version (#28672)
* Move more XContent.createParser calls to non-deprecated version

Part 2

This moves more of the callers to pass in the DeprecationHandler.

Relates to #28504

* Use parser's deprecation handler where appropriate

* Use logging handler in test that uses deprecated field on purpose
2018-02-14 11:24:48 -07:00
Lee Hinman 7c1f5f5054
Move more XContent.createParser calls to non-deprecated version (#28670)
* Move more XContent.createParser calls to non-deprecated version

This moves more of the callers to pass in the DeprecationHandler.

Relates to #28504

* Use parser's deprecation handler where available
2018-02-14 09:01:40 -07:00
Michael Basnight 920dff7053
Add released major logic to version utils (#28644)
Version Utils did not previously have logic that removed the last major's
minor snapshot if there was a next bugfix and maintenance bugfix
release. This adds the logic and fixes some broken assumptions in tests
as well.

relates #28505
2018-02-12 18:33:45 -06:00
Michael Basnight 4e0c1463d5
Fix build.snapshot bug in version collection (#28641)
The build.snapshot was mistakenly passed in to every snapshot version,
so when release tests were run, these versions were mistaken for released
entities and could not be found in maven, because they do not
exist. This fix removes that bug in the logic and always makes them proper
snapshots. This has the benefit of cleaning up the VersionUtilsTests
because they no longer rely on different sets of versions to check
against, which was also a bug.
2018-02-12 14:56:07 -06:00
Martijn van Groningen c19d84012e
iter 2018-02-12 13:50:48 +01:00
Martijn van Groningen 21e5ee6551
[TEST] Changed how stash dumps are logged in yaml tests in case of failures
Currently, if a yaml test has a teardown and a test fails, then
a stash dump of a request in the teardown is logged instead of
a stash dump of the request in the test itself.

By handling the logging of stash dumps separately for setup, tests and
teardown yaml sections we shouldn't miss the stash dump of request/response
that is actually causing the yaml test to fail.
2018-02-12 13:50:48 +01:00
Boaz Leskes 4aece92b2c
IndexShardOperationPermits: shouldn't use new Throwable to capture stack traces (#28598)
This is a follow-up to #28567, changing the method used to capture stack traces, as requested
during the review. Instead of creating a throwable, we explicitly capture the stack trace of the
current thread. This should Make Jason Happy Again ™️ .
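
For illustration, the difference between the two capture approaches boils down to something like the following self-contained snippet (variable names are mine):

```java
// Capturing the current thread's stack trace directly instead of allocating a Throwable
// just for its trace.
public class StackCaptureExample {
    public static void main(String[] args) {
        // Before: create a throwable only to steal its stack trace.
        StackTraceElement[] viaThrowable = new Throwable().getStackTrace();

        // After: ask the current thread for its stack trace explicitly.
        StackTraceElement[] viaThread = Thread.currentThread().getStackTrace();

        System.out.println("throwable frames: " + viaThrowable.length);
        System.out.println("thread frames:    " + viaThread.length);
    }
}
```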
2018-02-12 10:33:13 +01:00
Michael Basnight e0bea70070
Generalize BWC logic (#28505)
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.

This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.

Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic, whereas
previously it was impossible and the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.

This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
2018-02-09 14:55:10 -06:00
Boaz Leskes ba59cf1262
Capture stack traces while issuing IndexShard operations permits to easy debugging (#28567)
Today we acquire a permit from the shard to coordinate between indexing operations, recoveries and other state transitions. When we leak a permit it's practically impossible to find out who the culprit is. This PR adds stack trace capturing for each permit so we can identify which part of the code is responsible for acquiring the unreleased permit. This code is only active when assertions are enabled.

The output is something like:
```
java.lang.AssertionError: shard [test][1] on node [node_s0] has pending operations:
--> java.lang.RuntimeException: something helpful 2
	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
	at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:322)
	at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
	at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
	at java.base/java.lang.Iterable.forEach(Iterable.java:75)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
	at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
	at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
	at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
	at java.base/java.lang.Thread.run(Thread.java:844)

--> java.lang.RuntimeException: something helpful
	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
	at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:311)
	at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
	at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
	at java.base/java.lang.Iterable.forEach(Iterable.java:75)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
	at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
	at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
	at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
	at java.base/java.lang.Thread.run(Thread.java:844)

```
2018-02-08 22:59:02 +01:00
Tim Brooks 16f7e00514
Improve testTransportStatsWithException test (#28554)
This commit modifies the transport stats with exception test to remove
the requirement that we calculate the published address size when
comparing bytes received. This is tricky and is currently broken as we
also place the address string in the transport exception; however, we do
not adjust the bytes for that.

The solution in this commit is to just serialize the transport exception
in the test and use that for the calculation.
2018-02-07 14:31:42 -07:00
Lee Hinman eebff4d2b3
Use non deprecated xcontenthelper (#28503)
* Move to non-deprecated XContentHelper.createParser(...)

This moves away from one of the now-deprecated XContentHelper.createParser
methods in favor of specifying the deprecation logger at parser creation time.

Relates to #28449

Note that this doesn't move all the `createParser` calls because some of them
use the already-deprecated method that doesn't specify the XContentType.

* Remove the deprecated (and now non-needed) createParser method
2018-02-05 16:18:18 -07:00
Lee Hinman 3ddea8d8d2
Start switching to non-deprecated ParseField.match method (#28488)
This commit switches all the modules and server test code to use the
non-deprecated `ParseField.match` method, passing in the parser's deprecation
handler or the logging deprecation handler when a parser is not available (like
in tests).

Relates to #28449
2018-02-02 10:10:13 -07:00
Yannick Welsch 031415a5f6
Replicate writes only to fully initialized shards (#28049)
The primary currently replicates writes to all other shard copies as soon as they're added to the routing table. Initially those shards are not even ready yet to receive these replication requests, for example when undergoing a file-based peer recovery. Based on the specific stage that the shard copies are in, they will throw different kinds of exceptions when they receive the replication requests. The primary then ignores responses from shards that match certain exception types. With this mechanism it's not possible for a primary to distinguish between a situation where a replication target shard is not allocated and ready yet to receive requests and a situation where the shard was successfully allocated and active but subsequently failed.
This commit changes replication so that only initializing shards that have successfully opened their engine are used as replication targets. This removes the need to replicate requests to initializing shards that are not even ready yet to receive those requests. This saves on network bandwidth and enables features that rely on the distinction between a "not-yet-ready" shard and a failed shard.
2018-02-02 11:13:07 +01:00
Luca Cavanna d860971572
REST high-level client: add support for split and shrink index API (#28425)
Relates to #27205
2018-02-01 16:37:01 +01:00
Jim Ferenczi dd40b984c4
Add a shallow copy method to aggregation builders (#28430)
This change adds a shallow copy method for aggregation builders. This method returns a copy of the builder, replacing the factoriesBuilder and metaData.
This method is used when the builder is rewritten (AggregationBuilder#rewrite) in order to make sure that we create a new instance of the parent builder when sub aggregations are rewritten.

Relates #27782
2018-02-01 09:22:32 +01:00
Jason Tedor 1b3d529bef Introduce secure security manager to project
This commit migrates SecureSM, our secure security manager
implementation, from its own repository to being a sub-project of
Elasticsearch.
2018-01-31 18:23:28 -05:00
Nhat Nguyen 5e0be61774
Add logging to index commit deletion policy (#28448)
This would help us figure out which index commit an engine 
started with or used in peer-recovery.

Relates #28405
2018-01-31 11:09:49 -05:00
markharwood 77d2dd203e
Search - add allow_partial_search_results flag with default setting false (#28440)
Adds the allow_partial_search_results flag to search requests with default setting = true.
When false, the search will error if it times out, has partial errors, or has missing shards, rather
than returning partial search results. A cluster-level setting provides a default for search requests with no flag.

Closes #27435
2018-01-31 15:51:29 +00:00
Jim Ferenczi 7edb978256
RandomDocumentPicks#randomFieldName can produce invalid field name (#28419)
This change makes sure that this function does not create field names that end with a '.'; more precisely, it only allows
alphanumeric characters to compose the leaf field name.

Closes #27373
2018-01-31 09:21:09 +01:00
Yannick Welsch 9dd0886265
Fix NullPointerException in MockUncasedHostProvider (#28424)
The MockUncasedHostProvider accesses nodes that are not fully built yet, where TransportService.getNode() returns null, which means that the null entries end up in the list of seedNodes that UnicastZenPing then uses.
2018-01-30 10:44:19 +01:00
Simon Willnauer 43d1dcb919
Add a method that ensures that the cluster is yellow and has no initializing shards (#28416) 2018-01-29 20:46:30 +01:00
Nik Everett 66ff1b2a59
Tests: Wipe cluster settings after every test (#28410)
Cluster settings shouldn't leak into the next test.

I played with failing the test if it left over any settings but that
felt like it added more ceremony than it was worth. The advantage is
that any test that intentionally wants to leave settings in place after
the test would fail and require a closer look but, so far as I can tell, we
don't have any such tests.
2018-01-29 11:47:04 -05:00
Ryan Ernst 3dd833ca0a
Plugins: Use one confirmation of all meta plugin permissions (#28366)
Currently meta plugins will ask for confirmation of security policy
exceptions for each bundled plugin. This commit collects the necessary
permissions of each bundled plugin, and asks for confirmation of all of
them at the same time.
2018-01-26 15:44:44 -08:00
olcbean 9db23e48cd Add Indices Aliases API to the high level REST client (#27876)
Relates to #27205
2018-01-25 14:34:06 +01:00
Colin Goodheart-Smithe 75116a23cc
Adds test name to MockPageCacheRecycler exception (#28359)
This change adds the test name to the exceptions thrown by the MockPageCacheRecycler and MockBigArrays. Also, if there is more than one page/array which are not released it will add the first one as the cause of the thrown exception and the others as suppressed exceptions.

Relates to #21315
2018-01-25 08:13:33 +00:00
Alexander Reelsen a87714aafc
Settings: Introduce settings updater for a list of settings (#28338)
This introduces a settings updater that allows to specify a list of
settings. Whenever one of those settings changes, the whole block of
settings is passed to the consumer.

This also fixes an issue with affix settings, when used in combination
with group settings, which could result in no found settings when used
to get a setting for a namespace.

Lastly logging has been slightly changed, so that filtered settings now
only log the setting key.

Another bug has been fixed for the mock log appender, which did not
work, when checking for the exact message.

Closes #28047
2018-01-24 09:47:17 +01:00
Christoph Büscher ba9e2e44cb
[Test] Re-Add integer_range and date_range field types for query builder tests (#28171)
The tests for those field types were removed in #26549 because the range mapper
was moved to a module, but later this mapper was moved back to core in #27854.
This change adds back those two field types like before to the general setup in
AbstractQueryTestCase and adds some specifics to the RangeQueryBuilder and
TermsQueryBuilder tests. It also adds back an integration test in SearchQueryIT that
had been removed before but can be kept now that the mapper is back in core.

Relates to #28147
2018-01-23 13:08:54 +01:00
Luca Cavanna 0c83ee2a5d
Trim down usages of `ShardOperationFailedException` interface (#28312)
In many cases we use the `ShardOperationFailedException` interface to abstract an exception that can only be of one type, namely `DefaultShardOperationException`. There is no need to use the interface in such cases, the concrete type should be used instead. That has the additional advantage of simplifying parsing such exceptions back from rest responses for the high-level REST client
2018-01-22 15:51:46 +01:00
kel 452c36c552 Calculate sum in Kahan summation algorithm in aggregations (#27807) (#27848) 2018-01-22 12:42:56 +01:00
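The commit above names Kahan (compensated) summation; the following standalone sketch shows the general algorithm rather than the aggregation code itself. A running compensation term recovers the low-order bits that plain floating-point addition drops:

```java
public class KahanSumExample {

    // Kahan (compensated) summation: a compensation term recovers the low-order
    // bits that plain floating-point addition would otherwise drop.
    static double kahanSum(double[] values) {
        double sum = 0.0;
        double compensation = 0.0;
        for (double value : values) {
            double corrected = value - compensation;
            double newSum = sum + corrected;                // low-order bits of corrected can be lost here
            compensation = (newSum - sum) - corrected;      // ... and are recovered here
            sum = newSum;
        }
        return sum;
    }

    public static void main(String[] args) {
        // 1.0 followed by ten thousand tiny values that naive summation drops entirely.
        double[] values = new double[10001];
        values[0] = 1.0;
        for (int i = 1; i < values.length; i++) {
            values[i] = 1e-16;
        }
        double naive = 0.0;
        for (double v : values) {
            naive += v;
        }
        System.out.println("naive: " + naive);              // stays at 1.0
        System.out.println("kahan: " + kahanSum(values));   // close to 1.000000000001
    }
}
```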
Adrien Grand 700d9ecc95
Remove the `update_all_types` option. (#28288)
This option is not useful in 7.x since no indices may have more than one type
anymore.
2018-01-22 12:03:07 +01:00
Tim Brooks a6a57a71d3
Implement socket and server ChannelContexts (#28275)
This commit is related to #27260. Currently we have a channel context that
implements reading and writing logic for socket channels. Additionally,
we have exception contexts to handle exceptions, and accepting contexts
to handle accepted channels. This PR introduces a ChannelContext that
handles close and exception handling for all channel types.
Additionally, it has implementers that provide specific functionality
for socket channels (reading and writing), and specific functionality for
server channels (accepting).
2018-01-18 13:06:40 -07:00
Tim Brooks 20fb7a6d87
Modify Abstract transport tests to use impls (#28270)
There are a number of tests in `AbstractSimpleTransportTestCase` that
create `MockTcpTransport` impls. This commit modifies two of these tests
to use the transport implementation that is being tested.
2018-01-18 10:59:42 -07:00
Tim Brooks 4ea9ddb7d3
Unify nio read / write channel contexts (#28160)
This commit is related to #27260. Right now we have separate read and
write contexts for implementing specific protocol logic. However, some
protocols require a closer relationship between read and write
operations than is allowed by our current model. An example is HTTP
which might require a write if some problem with request parsing was
encountered.

Additionally, some protocols require close messages to be sent when a
channel is shutdown. This is also problematic in our current model,
where we assume that channels should simply be queued for close and
forgotten.

This commit transitions to a single ChannelContext which implements
all read, write, and close logic for protocols. It is the job of the
context to tell the selector when to close the channel. A channel can
still be manually queued for close with a selector. This is how server
channels are closed for now, and this route allows timeout mechanisms on
normal channel closes to be implemented.
2018-01-17 09:44:21 -07:00
Alexander Reelsen d32cb8089b
Tests: Decrease log level for adding a header value (#28246)
This logging message adds considerable noise to many REST tests, if you
are using something like HTTP basic auth in every API call or set any custom
header.

The log level moves from info to debug, so it can still be seen if wanted.
2018-01-17 09:14:44 +01:00
Jim Ferenczi bd11e6c441
Fix NPE on composite aggregation with sub-aggregations that need scores (#28129)
The composite aggregation defers the collection of sub-aggregations to a second pass that visits documents only if they
appear in the top buckets. However, the scorer for sub-aggregations is not set on this second pass, which generates an NPE if any sub-aggregation
tries to access the score. This change creates a scorer for the second pass and makes sure that sub-aggs can use it safely to check the score of
the collected documents.
2018-01-15 18:30:38 +01:00
Tim Brooks ee7eac8dc1
`MockTcpTransport` to connect asynchronously (#28203)
The method `initiateChannel` on `TcpTransport` is explicit in that
channels can be connected asynchronously. All production implementations
do connect asynchronously. Only the blocking `MockTcpTransport`
connects in a synchronous manner. This avoids testing some of the
blocking code in `TcpTransport` that waits on connections to complete.
Additionally, it requires a more extensive method signature than
required for other transports.

This commit modifies the `MockTcpTransport` to make these connections
asynchronously on a different thread. Additionally, it simplifies the
`initiateChannel` method signature.
2018-01-15 10:20:30 -07:00
Tim Brooks 3895add2ca
Introduce elasticsearch-core jar (#28191)
This is related to #27933. It introduces a jar named elasticsearch-core
in the lib directory. This commit moves the JarHell class from server to
elasticsearch-core. Additionally, PathUtils and some of Loggers are
moved as JarHell depends on them.
2018-01-15 09:59:01 -07:00
Igor Motov c75ac319a6
Add ability to associate an ID with tasks (#27764)
Adds support for capturing the X-Opaque-Id header from a REST request and storing its value in the tasks that this request started. It works for all user-initiated tasks (not only search).

Closes #23250

Usage:
```
$ curl -H "X-Opaque-Id: imotov" -H "foo:bar" "localhost:9200/_tasks?pretty&group_by=parents"
{
  "tasks" : {
    "7qrTVbiDQKiZfubUP7DPkg:6998" : {
      "node" : "7qrTVbiDQKiZfubUP7DPkg",
      "id" : 6998,
      "type" : "transport",
      "action" : "cluster:monitor/tasks/lists",
      "start_time_in_millis" : 1513029940042,
      "running_time_in_nanos" : 266794,
      "cancellable" : false,
      "headers" : {
        "X-Opaque-Id" : "imotov"
      },
      "children" : [
        {
          "node" : "V-PuCjPhRp2ryuEsNw6V1g",
          "id" : 6088,
          "type" : "netty",
          "action" : "cluster:monitor/tasks/lists[n]",
          "start_time_in_millis" : 1513029940043,
          "running_time_in_nanos" : 67785,
          "cancellable" : false,
          "parent_task_id" : "7qrTVbiDQKiZfubUP7DPkg:6998",
          "headers" : {
            "X-Opaque-Id" : "imotov"
          }
        },
        {
          "node" : "7qrTVbiDQKiZfubUP7DPkg",
          "id" : 6999,
          "type" : "direct",
          "action" : "cluster:monitor/tasks/lists[n]",
          "start_time_in_millis" : 1513029940043,
          "running_time_in_nanos" : 98754,
          "cancellable" : false,
          "parent_task_id" : "7qrTVbiDQKiZfubUP7DPkg:6998",
          "headers" : {
            "X-Opaque-Id" : "imotov"
          }
        }
      ]
    }
  }
}
```
2018-01-12 15:34:17 -05:00
Nhat Nguyen 626c3d1fda
Primary send safe commit in file-based recovery (#28038)
Today a primary shard transfers the most recent commit point to a 
replica shard in a file-based recovery. However, the most recent commit
may not be a "safe" commit; this causes a replica shard not having a
safe commit point until it can retain a safe commit by itself.

This commit collapses the snapshot deletion policy into the combined 
deletion policy and modifies the peer recovery source to send a safe
commit.

Relates #10708
2018-01-11 10:39:12 -05:00
Jason Tedor 2c24ac7426
Set watermarks in single-node test cases
We set the watermarks to low values in other test cases to prevent test
failures on nodes with low disk space (if the disk space is too low, the
test will fail anyway but we should not prematurely fail). This commit
sets the watermarks in the single-node test cases to avoid test failures
in such situations.

Relates #28134
2018-01-09 12:51:50 -05:00
Jim Ferenczi 36729d1c46
Add the ability to bundle multiple plugins into a meta plugin (#28022)
This commit adds the ability to package multiple plugins in a single zip.
The zip file for a meta plugin must contain the following structure:

|____elasticsearch/
| |____   <plugin1> <-- The plugin files for plugin1 (the content of the elastisearch directory)
| |____   <plugin2>  <-- The plugin files for plugin2
| |____   meta-plugin-descriptor.properties <-- example contents below
The meta plugin properties descriptor is mandatory and must contain the following properties:

description: simple summary of the meta plugin.
name: the meta plugin name
The installation process installs each plugin in a sub-folder inside the meta plugin directory.
The example above would create the following structure in the plugins directory:

|_____ plugins
| |____   <name_of_the_meta_plugin>
| | |____   meta-plugin-descriptor.properties
| | |____   <plugin1>
| | |____   <plugin2>
If the sub plugins contain a config or a bin directory, they are copied in a sub folder inside the meta plugin config/bin directory.

|_____ config
| |____   <name_of_the_meta_plugin>
| | |____   <plugin1>
| | |____   <plugin2>

|_____ bin
| |____   <name_of_the_meta_plugin>
| | |____   <plugin1>
| | |____   <plugin2>
The sub-plugins are loaded at startup like normal plugins with the same restrictions; they have a separate class loader and a sub-plugin
cannot have the same name as another plugin (or a sub-plugin inside another meta plugin).

It is also not possible to remove a sub-plugin inside a meta plugin, only full removal of the meta plugin is allowed.

Closes #27316
2018-01-09 18:28:43 +01:00
Tanguy Leroux bba591bea0
Consistent updates of IndexShardSnapshotStatus (#28130)
This commit changes IndexShardSnapshotStatus so that the Stage is updated
coherently with any required information. It also provides an asCopy()
method that returns the status of an IndexShardSnapshotStatus at a given
point in time, ensuring that all information is coherent.

Closes #26480
2018-01-09 14:01:57 +01:00
olcbean fd45a46ce8 Deprecate `isShardsAcked()` in favour of `isShardsAcknowledged()` (#27819)
Several responses include the shards_acknowledged flag (indicating whether the
requisite number of shard copies started before the completion of the operation)
and there are two different getters used: isShardsAcknowledged() and isShardsAcked().

This PR deprecates the isShardsAcked() in favour of isShardsAcknowledged() in 
CreateIndexResponse, RolloverResponse and CreateIndexClusterStateUpdateResponse.

Closes #27784
2018-01-08 10:57:45 +01:00
Jason Tedor eaa636d4bb Clarify reproduce info on Windows
This commit corrects the test failure reproduction line on Windows.

Relates #28104
2018-01-06 22:49:14 -05:00
Jason Tedor d712f581ca
Fix reproduction info to point to Gradle wrapper
With the Gradle wrapper in place, we should point the reproduction info
to specify using the Gradle wrapper too.

Relates #28104
2018-01-06 08:47:23 -05:00
Tim Brooks 38701fb6ee
Create nio-transport plugin for NioTransport (#27949)
This is related to #27260. This commit moves the NioTransport from
:test:framework to a new nio-transport plugin. Additionally, supporting
tcp decoding classes are moved to this plugin. Generic byte reading and
writing contexts are moved to the nio library.

Additionally, this commit adds a basic MockNioTransport to
:test:framework that is a TcpTransport implementation for testing that
is driven by nio.
2018-01-05 09:41:29 -07:00
Tim Brooks be5da2815d
Set the elasticsearch-nio codebase for tests (#28067)
This commit sets the elasticsearch-nio code base in the
BootstrapForTesting class. This is necessary as that codebase needs
socket permissions. Setting the codebase manually is necessary as
intellij does not package our internal libraries when running tests.
2018-01-04 09:55:51 -07:00
Yannick Welsch 7cdbae2da8
Add Writeable.Reader support to TransportResponseHandler (#28010)
Allows TransportResponse objects not to implement Streamable anymore. As an example, I've adapted the response handler for ShardActiveResponse, allowing the fields in that class to become final.
2018-01-04 10:27:08 +01:00
Ryan Ernst d36ec18029
Plugins: Add plugin extension capabilities (#27881)
This commit adds the infrastructure to plugin building and loading to
allow one plugin to extend another. That is, one plugin may extend
another by the "parent" plugin allowing itself to be extended through
java SPI. When all plugins extending a plugin are finished loading, the
"parent" plugin has a callback (through the ExtensiblePlugin interface)
allowing it to reload SPI.

This commit also adds an example plugin which uses as-yet implemented
extensibility (adding to the painless whitelist).
2018-01-03 11:12:43 -08:00
Tim Brooks c775374125
Disable nio test transport (#28028)
This commit disables the nio transport as an option for the test
transport in integration tests. This is because it does not currently
run properly in intellij due to socket permissions. It should be
reenabled once #27881 is merged (and the proper permissions are added).
2017-12-31 14:59:38 -07:00
Maxime Gréau 771defb97c
Build: Add 3rd party dependencies report generation (#27727)
* Adds task dependenciesInfo to BuildPlugin to generate a CSV file with dependencies information (name,version,url,license)
* Adds `ConcatFilesTask.groovy` to concatenate multiple files into one
* Adds task `:distribution:generateDependenciesReport` to concatenate `dependencies.csv` files into a single file (`es-dependencies.csv` by default)

 # Examples:
      $ gradle dependenciesInfo :distribution:generateDependenciesReport

 ## Use `csv` system property to customize the output file path
     $ gradle dependenciesInfo :distribution:generateDependenciesReport -Dcsv=/tmp/elasticsearch-dependencies.csv

 ## When branch is not master, use `build.branch` system property to generate correct licenses URLs
     $ gradle dependenciesInfo :distribution:generateDependenciesReport -Dbuild.branch=6.x -Dcsv=/tmp/elasticsearch-dependencies.csv
2017-12-26 10:51:47 +01:00
Nhat Nguyen 6629f4ab0d
Rollback primary before recovering from translog (#27804)
Today we always recover a primary from the last commit point. However 
with a new deletion policy, we keep multiple commit points in the
existing store, thus we have a chance to find a good starting commit
point. With a good starting commit point, we may be able to throw away
stale operations. This PR rolls back a primary to a good starting commit and then
recovers from the translog.

Relates #10708
2017-12-22 18:25:36 -05:00
Tim Brooks 06b313025c
Add elasticsearch-nio jar for base nio classes (#27801)
This is related to #27802. This commit adds a jar called
elasticsearch-nio that contains the base nio classes that will be used
for the tcp nio transport and eventually the http nio transport.

The jar does not depend on elasticsearch:core, so all references to core
have been removed.
2017-12-20 16:29:16 -06:00
Nhat Nguyen 54b6885844
Check index under the store metadata lock (#27768)
Today when we get a metadata snapshot directly from a store directory, 
we acquire a metadata lock, then acquire an IndexWriter lock. However,
we create a CheckIndex in IndexShard without acquiring the metadata lock 
first. This can cause a recovery failure because the IndexWriter lock can
still be held by the snapshotStoreMetadata method. This commit makes sure to
create a CheckIndex under the metadata lock.

Closes #24481
Closes #27731
Relates #24787
2017-12-20 11:26:06 -05:00
Tanguy Leroux 0f80e7c5f6
[Test] Fix IndicesClientDocumentationIT (#27899)
The last operation executed in IndicesClientDocumentationIT.testCreate()
 is an asynchronous index creation. Because nothing waits for its
 completion, on slow machines the index can sometimes be created after
 the testCreate() test is finished, and it can fail the following test.

 Closes #27754
2017-12-20 09:31:10 +01:00
Nik Everett 32669ca265
Test: Change randomValueOtherThan(null, supplier) (#27901)
When the first parameter of `ESTestCase#randomValueOtherThan` is `null`,
the supplier is now run until it returns non-`null`. Previously,
`randomValueOtherThan` just ran the supplier one time which was
confusing.

Unexpectedly, it looks like no tests rely on the original `null`
handling.

Closes #27821
2017-12-19 10:23:38 -05:00
Boaz Leskes bea9471b2f
Use port 0 InternalTestCluster nodes (#27859)
We currently have a complicated port assignment scheme to make sure that the nodes spun off by the internal test cluster will be assigned fixed port ranges that will also not collide between clusters. The port ranges need to be fixed in advance so that the nodes will be able to find each other via `UnicastZenPing`.

This approach worked well for the last few years but we are now at a point where our testing has grown beyond it and we exceed the 5 reusable ranges per JVM. This means that nodes are not always assigned the first 5 ports in their range, which causes cluster formation issues. On top of that, most of the clusters that are spun up don't even rely on `UnicastZenPing` but rather `MockZenPings`, which uses in-memory maps for discovery (with the downside that they are not influenced by network disruption simulations).

This PR changes `InternalTestCluster` to use port 0 as a fixed assignment. This will allow the OS to manage ports and will ensure we don't have collisions. For tests that need to simulate network disruptions (and thus can't use `MockZenPings`), a new `UnicastHostProvider` is introduced that is based on the current state of the test cluster. Since that is only resolved at run time, it is aware of the port assignments of the OS.

Closes #27818
Closes #27762
2017-12-19 08:43:03 +01:00
Jason Tedor aebdb2a646 Filter current version from compatible versions
We need to filter the current version from the list of compatible
versions to match how we calculate the list of compatible versions in
Gradle.
2017-12-18 17:37:22 -05:00
Yannick Welsch a5e8a221ec
Move GlobalCheckpointTracker and remove SequenceNumbersService (#27837)
This commit moves GlobalCheckpointTracker from the engine to IndexShard, where it better fits logically: Tracking the global checkpoint based on the local checkpoints of all shards in the replication group is not a property of the engine, but rather a property fulfilled by the current primary shard. The LocalCheckpointTracker on the other hand is driven by the contents of the local translog. By moving GlobalCheckpointTracker to IndexShard, it makes little sense to keep the SequenceNumbersService class around - it would only wrap the LocalCheckpointTracker. This commit therefore removes the class and replaces occurrences of SequenceNumbersService in the engine directly by LocalCheckpointTracker.
2017-12-18 15:27:44 +01:00
Alan Woodward af3f63616b
Allow TrimFilter to be used in custom normalizers (#27758)
AnalysisFactoryTestCase checks that the ES custom token filter multi-term
awareness matches the underlying lucene factory.  For the trim filter this
won't be the case until LUCENE-8093 is released in 7.3, so we add a
temporary exclusion

Closes #27310
2017-12-18 14:27:03 +00:00
Jason Tedor 76771242e8 Fix version tests for release tests
This commit fixes the version tests for release tests. The problem here
is that during release tests all versions should be treated as released,
so the assertions must be modified accordingly.

Relates #27815
2017-12-18 08:51:37 -05:00
Boaz Leskes 9cd69e7ec1
recovery from snapshot should fill gaps (#27850)
When snapshotting the primary we capture a lucene commit at an arbitrary moment from a sequence number perspective. This means that it is possible that the commit misses operations and that there is a gap between the local checkpoint in the commit and the maximum sequence number.

When we restore, this will create a primary that "misses" operations and currently will mean that the sequence number system is stuck (i.e., the local checkpoint will be stuck). To fix this we should fill in gaps when we restore, in a similar fashion to normal store recovery.
2017-12-18 13:33:39 +01:00
David Turner f0b21e3182
Make randomNonNegativeLong() draw from a uniform distribution (#27856)
Currently randomNonNegativeLong() returns 0 half as often as any positive long,
but random number generators are typically expected to return
uniformly-distributed values unless otherwise specified. This fixes this issue
by mapping Long.MIN_VALUE directly onto 0 rather than resampling.
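
A minimal sketch of that mapping (not the actual ESTestCase implementation) looks like this; because Long.MIN_VALUE is the one negative long without a positive counterpart, sending it to 0 gives every non-negative result exactly two of the 2^64 possible inputs:

```java
import java.util.Random;

// Sketch only: map Long.MIN_VALUE directly onto 0 so the result is uniform over the
// non-negative longs. Math.abs(Long.MIN_VALUE) would otherwise return Long.MIN_VALUE itself.
public class RandomNonNegativeLongExample {

    static long randomNonNegativeLong(Random random) {
        long randomLong = random.nextLong();
        return randomLong == Long.MIN_VALUE ? 0 : Math.abs(randomLong);
    }

    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 5; i++) {
            System.out.println(randomNonNegativeLong(random));
        }
    }
}
```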
2017-12-18 09:57:40 +00:00
Tim Brooks 916e7dbe29
Add NioGroup for use in different transports (#27737)
This commit is related to #27260. It adds a base NioGroup for use in
different transports. This class creates and starts the underlying
selectors. Different protocols or transports are established by passing
the ChannelFactory to the bindServerChannel or openChannel
methods. This allows a TcpChannelFactory to be passed which will
create and register channels that support the elasticsearch tcp binary
protocol or a channel factory that will create http channels (or other).
2017-12-15 10:42:00 -07:00
Tim Brooks f33f9612a7
Remove potential nio selector leak (#27825)
When an ESSelector is created an underlying nio selector is opened. This
selector is closed by the event loop after close has been signalled by
another thread.

However, there is a possibility that an ESSelector is created and some
exception in the startup process prevents it from ever being started
(however, close will still be called). This allows the selector to leak.

This commit addresses this issue by having the signalling thread close
the selector if the event loop is not running when close is signalled.
2017-12-14 14:37:41 -07:00
Adrien Grand 1b660821a2
Allow `_doc` as a type. (#27816)
Allowing `_doc` as a type will enable users to make the transition to 7.0
smoother since the index APIs will be `PUT index/_doc/id` and `POST index/_doc`.
This also moves most of the documentation to `_doc` as a type name.

Closes #27750
Closes #27751
2017-12-14 17:47:53 +01:00
Daniel Mitterdorfer d26b33dea2 Mute VersionUtilsTest#testGradleVersionsMatchVersionUtils
Relates #27815
2017-12-14 12:33:41 +01:00
Nhat Nguyen 57fc705d5e
Keep commits and translog up to the global checkpoint (#27606)
We need to keep index commits and translog operations up to the current 
global checkpoint to allow us to throw away unsafe operations and
increase the chance of operation-based recovery. This is achieved by a new
index deletion policy.

Relates #10708
2017-12-12 19:20:08 -05:00
Tim Brooks d1acb7697b
Remove internal channel tracking in transports (#27711)
This commit attempts to continue unifying the logic between different
transport implementations. As transports call a `TcpTransport` callback
when a new channel is accepted, there is no need to internally track
channels accepted. Instead there is a set of accepted channels in
`TcpTransport`. This set is used for metrics and shutting down channels.
2017-12-08 16:56:53 -07:00
Tim Brooks d82c40d35c
Implement byte array reusage in `NioTransport` (#27696)
This is related to #27563. This commit modifies the
InboundChannelBuffer to support releasable byte pages. These byte
pages are provided by the PageCacheRecycler. The PageCacheRecycler
must be passed to the Transport with this change.
2017-12-08 10:39:30 -07:00
Tim Brooks da5f52a2fc
Add test for writer operation buffer accounting (#27707)
This is a follow up to #27695. This commit adds a test checking that
across multiple writes using multiple buffers, a write operation
properly keeps track of which buffers still need to be written.
2017-12-07 12:48:49 -07:00
Christoph Büscher b83e14858a Correcting some minor typos in comments 2017-12-07 16:39:23 +01:00
Tim Brooks 5b3230cbae
Fix issue where the incorrect buffers are written (#27695)
This is a followup to #27551. That commit introduced a bug where the
incorrect byte buffers would be returned when we attempted a write. This
commit fixes the logic.
2017-12-06 20:57:46 -07:00
Tim Brooks 2aa62daed4
Introduce resizable inbound byte buffer (#27551)
This is related to #27563. In order to interface with java nio, we must
have buffers that are compatible with ByteBuffer. This commit introduces
a basic ByteBufferReference to easily allow transferring bytes off the
wire to usage in the application.

Additionally it introduces an InboundChannelBuffer. This is a buffer
that can internally expand as more space is needed. It is designed to
be integrated with a page recycler so that it can internally reuse pages.
The final piece is moving all of the index work for writing bytes to a
channel into the WriteOperation.
2017-12-06 11:02:25 -07:00
Jim Ferenczi caea6b70fa
Add a new cluster setting to limit the total number of buckets returned by a request (#27581)
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi bucket aggregator can consume buckets during the final build of the aggregation at the shard level or during the reduce phase (final or not) in the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented and if this number is greater than the limit an exception is thrown (TooManyBuckets exception).
This change adds the ability for a multi-bucket aggregator to "consume" buckets towards the global limit; the default is 10,000. It's an opt-in consumer so each multi-bucket aggregator must explicitly call the consumer when a bucket is added in the response.

Closes #27452 #26012
2017-12-06 09:15:28 +01:00
Luca Cavanna f4fb4d3bf5
Add support for filtering mappings fields (#27603)
Add support for filtering fields returned as part of mappings in get index, get mappings, get field mappings and field capabilities API.

Plugins can plug in their own function, which receives the index as argument, and returns a predicate which controls whether each field is included or not in the returned output.
2017-12-05 20:31:29 +01:00
Jason Tedor 42a4ad35da
Add node name to thread pool executor name
This commit adds the node name to the names of thread pool executors so
that the node name is visible in rejected execution exception messages.

Relates #27663
2017-12-05 07:45:40 -05:00
Lee Hinman 1ff5ef9055 [TEST] Check accounting breaker is equal to segment stats rather than 0
If there are existing indices, it may not be 0
2017-12-04 14:15:23 -07:00
Simon Willnauer 84ec472428
Include internal refreshes in refresh stats (#27615)
Today we exclude internal refreshes from the refresh stats. Yet, it is quite
confusing to not take these into account. This change includes internal refreshes
in the stats until we have dedicated stats for them.
2017-12-04 16:33:47 +01:00
Boaz Leskes f58a3d0b96 testRelocationWithConcurrentIndexing: wait for green (on relevant index) and shard initialization to settle down before starting relocation 2017-12-04 13:18:42 +01:00
Boaz Leskes 1a976ea7a4 Cherry pick tests and seqNo recovery hardening from #27580 2017-12-04 13:15:40 +01:00
James Baiera e16f1271b6
Fix SecurityException when HDFS Repository used against HA Namenodes (#27196)
* Sense HA HDFS settings and remove permission restrictions during regular execution.

This PR adds integration tests for HA-Enabled HDFS deployments, both regular and secured. 
The Mini HDFS fixture has been updated to optionally run in HA-Mode. A new test suite has 
been added for reproducing the effects of a Namenode failing over during regular repository 
usage. Going forward, the HDFS Repository will still be subject to its self imposed permission 
restrictions during normal use, but will no longer restrict them when running against an HA 
enabled HDFS cluster. Instead, the plugin will rely on the provided security policy and not 
further restrict the permissions so that the transparent operation to failover to a different 
Namenode in the client does not raise security exceptions. Additionally, we are now testing the 
secure mode with SASL based wire encryption of data between Elasticsearch and HDFS. This 
includes a missing library (commons codec) in order to support this change.
2017-12-01 14:26:05 -05:00
Lee Hinman 623d3700f0
Add accounting circuit breaker and track segment memory usage (#27116)
* Add accounting circuit breaker and track segment memory usage

This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.

The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.

The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.

Resolves #27044
2017-12-01 07:59:45 -07:00
Luca Cavanna 3e8ca38fca
Deprecate the transport client in favour of the high-level REST client (#27085) 2017-12-01 12:24:16 +01:00
Tim Brooks b8557651aa
Add exception handling for write listeners (#27590)
This potential issue was exposed when I saw this PR #27542. Essentially
we currently execute the write listeners all over the place without
consistently catching and handling exceptions. Some of these exceptions
will be logged in different ways (including as low as `debug`).

This commit adds a single location where these listeners are executed.
If the listener throws an exception, the exception is caught and logged
at the `warn` level.
2017-11-29 15:47:12 -07:00
David Turner 00867e618d
Transpose expected and actual, and remove duplicate info from message. (#27515)
Previously:
```
   > Throwable #1: java.lang.AssertionError: Expected all shards successful but got successful [8] total [9]
   > Expected: <8>
   >      but: was <9>
```

Now:
```
   > Throwable #1: java.lang.AssertionError: Expected all shards successful
   > Expected: <9>
   >      but: was <8>
```
2017-11-24 17:45:34 +00:00
Tanguy Leroux 5dc5580eac
Delete shard store files before restoring a snapshot (#27476)
Pull request #20220 added a change where the store files
that have the same name but are different from the ones in the
snapshot are deleted first before the snapshot is restored.
This logic was based on the `Store.RecoveryDiff.different`
set of files which works by computing a diff between an
existing store and a snapshot.

This works well when the files on the filesystem form a valid
shard store, i.e. there's a `segments` file and store files
are not corrupted. Otherwise, the existing store's snapshot
metadata cannot be read (using Store#snapshotStoreMetadata())
and an exception is thrown
(CorruptIndexException, IndexFormatTooOldException etc) which
is later caught at the beginning of the restore process
(see RestoreContext#restore()) and is translated into
an empty store metadata (Store.MetadataSnapshot.EMPTY).

This will make the deletion of different files introduced
in #20220 useless as the set of files will always be empty
even when store files exist on the filesystem. And if some
files are present within the store directory, then restoring
a snapshot with files with the same names will fail with a
FileAlreadyExistsException.

This is part of the #26865 issue.

There are various cases where some files could exist in the
store directory before a snapshot is restored. One that
Igor identified is a restore attempt that failed on a node
and only the first files were restored; then the shard is allocated
again to the same node and the restore starts again (but fails
 because of existing files). Another one is when some files
of a closed index are corrupted / deleted and the index is
restored.

This commit adds a test that uses the infrastructure provided
by IndexShardTestCase in order to test that restoring a shard
succeeds even when files with the same names exist on the filesystem.

Related to #26865
2017-11-24 13:15:34 +01:00
Martijn van Groningen f1ebf366bf
unmuted test, this has been fixed by #27397
Closes #27497
2017-11-24 08:53:00 +01:00
David Turner 89ba8996c6 Consolidate version numbering semantics (#27397)
Fixes to the build system, particularly around BWC testing, and to make future
version bumps less painful.
2017-11-23 20:21:53 +00:00
Martijn van Groningen ca9c476d88
muted test 2017-11-22 19:18:35 +01:00
Tim Brooks ef34555b29
Decouple nio constructs from the tcp transport (#27484)
This is related to #27260. Currently, basic nio constructs (nio
channels, the channel factories, selector event handlers, etc) implement
logic that is specific to the tcp transport. For example, NioChannel
implements the TcpChannel interface. These nio constructs at some point
will also need to support other protocols (ex: http).

This commit separates the TcpTransport logic from the nio building
blocks.
2017-11-22 11:39:31 -06:00
Jim Ferenczi 6319424e4a
Move composite aggregation to core (#27474)
This change removes the module named aggs-composite and adds the `composite` agg
as a core aggregation. This allows other plugins to use this new aggregation
and simplifies the integration in the HL rest client.
2017-11-21 13:31:01 +01:00
Tim Brooks f37eb1b403
Remove tcp profile from low level nio channel (#27441)
This is related to #27260. Currently every nio channel has a profile
field. Profile is a concept that only relates to the tcp transport. Http
channels will not have profiles. This commit moves the profile from the
nio channel to the read context. The context is the level that protocol
specific features and logic should live.
2017-11-20 12:20:42 -07:00
Tim Brooks 0a8f48d592
Transition transport apis to use void listeners (#27440)
Currently we use ActionListener<TcpChannel> for connect, close, and send
message listeners in TcpTransport. However, all of the listeners have to
capture a reference to a channel in the case of the exception api being
called. This commit changes these listeners to be type <Void> as passing
the channel to onResponse is not necessary. Additionally, this change
makes it easier to integrate with low level transports (which use
different implementations of TcpChannel).
2017-11-20 10:47:47 -07:00
Michael Basnight 2949c53174
Remove config prompting for secrets and text (#27216)
This commit removes the ability to use ${prompt.secret} and
${prompt.text} as valid config settings. Secure settings has obsoleted
the need for this, and it cleans up some of the code in Bootstrap.
2017-11-19 22:33:17 -06:00
Michael Basnight cb3e8f4763
Move the CLI into its own subproject (#27114)
Projects that depend on the CLI currently depend on core. This should not
always be the case. The EnvironmentAwareCommand will remain in :core,
but the rest of the CLI components have been moved into their own
subproject of :core, :core:cli.
2017-11-18 21:42:57 -06:00
Tim Brooks ce45e29be7
Remove manual tracking of registered channels (#27445)
This is related to #27260. Currently, every ESSelector keeps track of
all channels that are registered with it. ESSelector is just an
abstraction over a raw java nio selector. The java nio selector already
tracks its own selection keys. This commit removes our tracking and
relies on the java nio selector tracking.
2017-11-17 16:20:09 -07:00
David Turner 08a257327f
Remove newline from log message (#27425)
It leads to harder-to-parse logs that look like this:

```
  1> [2017-11-16T20:46:21,804][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,812][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,820][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,966][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
```
2017-11-17 14:12:06 +00:00
Tim Brooks f761a0e0e4
Remove unneeded Throwable handling in nio (#27412)
This is related to #27260. In the nio transport work we do not catch or
handle `Throwable`. There are a few places where we have exception
handlers that accept `Throwable`. This commit removes those cases.
2017-11-16 18:24:06 -07:00
David Turner 9766b858d0
Prepare for bump to 6.0.1 on the master branch (#27391)
An assortment of fixes, particularly to version number calculations, in preparation for the bump to 6.0.1.
2017-11-16 18:38:54 +00:00
Tim Brooks 80ef9bbdb1
Remove parameterization from TcpTransport (#27407)
This commit is a follow up to the work completed in #27132. Essentially
it transitions two more methods (sendMessage and getLocalAddress) from
Transport to TcpChannel. With this change, there is no longer a need for
TcpTransport to be aware of the specific type of channel a transport
returns. So that class is no longer parameterized by channel type.
2017-11-16 11:19:36 -07:00
Tim Brooks 35a5922927
Delete unneeded nio client (#27408)
This is a follow up to #27132. As that PR greatly simplified the
connection logic inside a low level transport implementation, much of
the functionality provided by the NioClient class is no longer
necessary. This commit removes that class.
2017-11-16 09:22:40 -07:00
Jim Ferenczi 623367d793
Add composite aggregator (#26800)
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-bucket aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
  * A `terms` source, values are extracted from a field or a script.
  * A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation in composite buckets.
A composite bucket is composed of one value per source and is built for each document as the combination of values in the provided sources.
For instance the following aggregation:

````
"test_agg": {
  "terms": {
    "field": "field1"
  },
  "aggs": {
    "nested_test_agg":
      "terms": {
        "field": "field2"
      }
  }
}
````
... which retrieves the top N terms for `field1` and for each top term in `field1` the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1`, `field2` in the matching documents:

````
"composite_agg": {
  "composite": {
    "sources": [
      {
	"field1": {
          "terms": {
              "field": "field1"
            }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
        }
      }
    ]
  }
}
````

The response of the aggregation looks like this:

````
"aggregations": {
  "composite_agg": {
    "buckets": [
      {
        "key": {
          "field1": "alabama",
          "field2": "almanach"
        },
        "doc_count": 100
      },
      {
        "key": {
          "field1": "alabama",
          "field2": "calendar"
        },
        "doc_count": 1
      },
      {
        "key": {
          "field1": "arizona",
          "field2": "calendar"
        },
        "doc_count": 1
      }
    ]
  }
}
````

By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, the values of the composite key to aggregate after.
For instance the following aggregation will aggregate all composite keys that sort after `arizona, calendar`:

````
"composite_agg": {
  "composite": {
    "after": {"field1": "alabama", "field2": "calendar"},
    "size": 100,
    "sources": [
      {
	"field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
	}
      }
    ]
  }
}
````

This aggregation is optimized for indices that set an index sorting that matches the composite source definition.
For instance the aggregation above could run faster on indices that define an index sorting like this:

````
"settings": {
  "index.sort.field": ["field1", "field2"]
}
````

In this case the `composite` aggregation can early terminate on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
2017-11-16 15:13:36 +01:00
Tim Brooks ca11085bb6
Add TcpChannel to unify Transport implementations (#27132)
Right now our different transport implementations must duplicate
functionality in order to stay compliant with the requirements of
TcpTransport. They must all implement common logic to open channels,
close channels, keep track of channels for eventual shutdown, etc.

Additionally, there is a weird and complicated relationship between
Transport and TransportService. We eventually want to start merging
some of the functionality between these classes.

This commit starts moving towards a world where TransportService retains
all the application logic and channel state. Transport implementations
in this world will only be tasked with returning a channel when one is
requested, calling transport service when a channel is accepted from
a server, and starting / stopping itself.

Specifically this commit changes how channels are opened and closed. All
Transport implementations now return a channel type that must comply with
the new TcpChannel interface. This interface has the methods necessary
for TcpTransport to completely manage the lifecycle of a channel. This
includes setting the channel up, waiting for connection, adding close
listeners, and eventually closing.
2017-11-15 12:38:39 -07:00
Luca Cavanna 382da0f227
REST spec: Validate that api name matches file name that contains it (#27366)
This commit validates that each spec json file contains an API that has the same name as the file
2017-11-14 14:53:00 +01:00
Simon Willnauer 2299c70371
Allow affix settings to specify dependencies (#27161)
We use affix settings to group settings / values under a certain namespace.
In some cases, like login information, a setting is only valid if
one or more other settings are present. For instance, `x.test.user` is only valid
if `x.test.passwd` is present and vice versa. This change allows specifying
such a dependency to prevent settings updates that leave settings in an inconsistent
state.
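
A tiny hedged illustration using the setting names from the message above (the value is hypothetical): with such a dependency declared, an update like the following would be rejected because the matching `x.test.passwd` is missing:

````
x.test.user: alice
````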
2017-11-13 12:06:36 +01:00
Simon Willnauer a34c2f0b8d
Ensure external refreshes will also refresh internal searcher to minimize segment creation (#27253)
We cut over to internal and external IndexReader/IndexSearcher in #26972 which uses
two independent searcher managers. This has the downside that refreshes of the external
reader will never clear the internal version map which in-turn will trigger additional
and potentially unnecessary segment flushes since memory must be freed. Under heavy
indexing load with low refresh intervals this can cause excessive segment creation which
causes high GC activity and significantly increases the required segment merges.

This change adds a dedicated external reference manager that delegates refreshes to the
internal reference manager that then `steals` the refreshed reader from the internal
reference manager for external usage. This ensures that external and internal readers
are consistent on an external refresh. As a side effect this also releases old segments
referenced by the internal reference manager which can potentially hold on to already merged
away segments until it is refreshed due to a flush or indexing activity.
2017-11-09 08:40:22 +00:00
Tim Brooks dc86b4c2ed
Decouple `ChannelFactory` from Tcp classes (#27286)
* Decouple `ChannelFactory` from Tcp classes

This is related to #27260. Currently `ChannelFactory` is tightly coupled
to classes related to the elasticsearch Tcp binary protocol. This commit
modifies the factory to be able to construct http or other protocol
channels.
2017-11-08 14:30:00 -07:00
Jason Tedor d5451b2037
Die with dignity while merging
If an out of memory error is thrown while merging, today we quietly
rewrap it into a merge exception and the out of memory error is
lost. Instead, we need to rethrow out of memory errors, and in fact any
fatal error here, and let those go uncaught so that the node is torn
down. This commit causes this to be the case.

Relates #27265
2017-11-06 17:55:11 -05:00
Jason Tedor 766d29e7cf
Correctly encode warning headers
The warnings headers have a fairly limited set of valid characters
(cf. quoted-text in RFC 7230). While we have assertions that we adhere
to this set of valid characters ensuring that our warning messages do
not violate the specificaion, we were neglecting the possibility that
arbitrary user input would trickle into these warning headers. Thus,
missing here was tests for these situations and encoding of characters
that appear outside the set of valid characters. This commit addresses
this by encoding any characters in a deprecation message that are not
from the set of valid characters.

Relates #27269
2017-11-06 13:20:30 -05:00
Simon Willnauer bd7efa908a Add ability to split shards (#26931)
This change adds a new `_split` API that allows splitting indices into a new
index with a power of two more shards than the source index.  This API works
alongside the `_shrink` API but doesn't require any shard relocation before
indices can be split.

The split operation is conceptually an inverse `_shrink` operation since we
initialize the index with a _synthetic_ number of routing shards that are used
for the consistent hashing at index time. Compared to indices created with
earlier versions this might produce slightly different shard distributions but
has no impact on the per-index backwards compatibility.  For now, the user is
required to prepare an index to be splittable by setting the
`index.number_of_routing_shards` at index creation time.  The setting allows the
user to prepare the index to be splittable in factors of
`index.number_of_routing_shards`, i.e. if the index is created with
`index.number_of_routing_shards: 16` and `index.number_of_shards: 2` it can be
split into `4, 8, 16` shards. This is an intermediate step until we can make
this the default. This also allows us to safely backport this change to 6.x.
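
For illustration only, a minimal request sequence along these lines would prepare an index and split it from 2 to 4 shards (the index names `logs` and `logs-split` are hypothetical, and the write block on the source index is assumed to be the usual prerequisite rather than something stated in this message):

````
PUT /logs
{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_routing_shards": 16
  }
}

# the source index is assumed to need a write block before splitting
PUT /logs/_settings
{
  "index.blocks.write": true
}

POST /logs/_split/logs-split
{
  "settings": {
    "index.number_of_shards": 4
  }
}
````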

The `_split` operation is implemented internally as a DeleteByQuery on the
Lucene level that is executed while the primary shards execute their initial
recovery. Subsequent merges that are triggered due to this operation will not be
executed immediately. All merges will be deferred until the shards are started
and will then be throttled accordingly.

This change is intended for the 6.1 feature release but will not support pre-6.1
indices to be split unless these indices have been shrunk before. In that case
these indices can be split backwards into their original number of shards.
2017-11-06 11:37:55 +01:00
Tanguy Leroux 43e7a4a349
Upgrade to Jackson 2.8.10 (#27230)
While it's not possible to upgrade the Jackson dependencies 
to their latest versions yet (see #27032 (comment) for more) 
it's still possible to upgrade to the latest 2.8.x version.
2017-11-06 10:20:05 +01:00
Jim Ferenczi 429275a773
Remove ElasticsearchQueryCachingPolicy (#27190)
We have a hidden setting called `index.queries.cache.term_queries` that disables caching of term queries in the query cache.
However, term queries have not been cached by the Lucene UsageTrackingQueryCachingPolicy since version 6.5.
This makes the ES policy useless but also makes it impossible to re-enable caching for term queries.
Since that change appeared in Lucene 6.5, the setting has been a no-op since version 5.4 of Elasticsearch.
The change in this PR removes the setting and the custom policy.
2017-11-06 08:26:24 +01:00
David Roberts 749c3ec716
Remove the single argument Environment constructor (#27235)
Only tests should use the single argument Environment constructor.  To
enforce this the single arg Environment constructor has been replaced with
a test framework factory method.

Production code (beyond initial Bootstrap) should always use the same
Environment object that Node.getEnvironment() returns.  This Environment
is also available via dependency injection.
2017-11-04 13:25:09 +00:00
kel 0f21262b36 Do not create directories if repository is readonly (#26909)
For FsBlobStore and HdfsBlobStore, if the repository is read only, the blob store should be aware of the readonly setting and not create directories if they don't exist.
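
As a hedged illustration (the repository name and path are hypothetical), a read-only filesystem repository of the kind this change targets would be registered roughly like this:

````
PUT /_snapshot/my_readonly_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/shared",
    "readonly": true
  }
}
````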

Closes #21495
2017-11-03 13:10:50 +01:00
Jason Tedor d6d830ff0b
Fix logic detecting unreleased versions
When partitioning version constants into released and unreleased
versions, today we have a bug in finding the last unreleased
version. Namely, consider the following version constants on the 6.x
branch: ..., 5.6.3, 5.6.4, 6.0.0-alpha1, ..., 6.0.0-rc1, 6.0.0-rc2,
6.0.0, 6.1.0. In this case, our convention dictates that: 5.6.4, 6.0.0,
and 6.1.0 are unreleased. Today we correctly detect that 6.0.0 and 6.1.0
are unreleased, and then we say the previous patch version is unreleased
too. The problem is the logic to remove that previous patch version is
broken, it does not skip alphas/betas/RCs which have been released. This
commit fixes this by skipping backwards over pre-release versions when
finding the previous patch version to remove.

Relates #27206
2017-11-01 13:01:45 -04:00
Colin Goodheart-Smithe 99aca9cdfc
Enhances exists queries to reduce need for `_field_names` (#26930)
* Enhances exists queries to reduce need for `_field_names`

Before this change we wrote the names of all the fields in a document to a `_field_names` field and then implemented exists queries as a term query on this field. The problem with this approach is that it bloats the index and also affects indexing performance.

This change adds a new method `existsQuery()` to `MappedFieldType` which is implemented by each sub-class. For most field types if doc values are available a `DocValuesFieldExistsQuery` is used, falling back to using `_field_names` if doc values are disabled. Note that only fields where no doc values are available are written to `_field_names`.
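
For context, a sketch of the kind of query affected (index and field names are hypothetical); with this change it is answered with a `DocValuesFieldExistsQuery` when doc values are available, rather than a term query on `_field_names`:

````
GET /my-index/_search
{
  "query": {
    "exists": {
      "field": "user"
    }
  }
}
````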

Closes #26770

* Addresses review comments

* Addresses more review comments

* implements existsQuery explicitly on every mapper

* Reinstates ability to perform term query on `_field_names`

* Added bwc depending on index created version

* Review Comments

* Skips tests that are not supported in 6.1.0

These values will need to be changed after backporting this PR to 6.x
2017-11-01 10:46:59 +00:00
kel c3e2bdf20c Raise IllegalArgumentException if query validation failed (#26811)
Closes #26799
2017-10-31 12:17:27 +01:00
Adrien Grand 3812d3cb43
TopHitsAggregator must propagate calls to `setScorer`. (#27138)
It is required in order to work correctly with bulk scorer implementations
that change the scorer during the collection process. Otherwise sub collectors
might call `Scorer.score()` on the wrong scorer.

Closes #27131
2017-10-31 09:59:06 +01:00
Jason Tedor a566942219
Refactor internal engine
This commit is a minor refactoring of internal engine to move hooks for
generating sequence numbers into the engine itself. As such, we refactor
tests that relied on this hook to use the new hook, and remove the hook
from the sequence number service itself.

Relates #27082
2017-10-30 13:10:20 -04:00
Ryan Ernst 2a8452b513 Reindex: Fix headers in reindex action (#26937)
The headers passed to reindex were skipped except for the last one. This
commit fixes the copying of the headers and adds a base test
case for rest client builders to access the headers within the built
rest client.

relates #22976
2017-10-25 16:37:01 -07:00
olcbean 981b7f4d39 Make yaml test runner stricter by enforcing `required` for paths and parameters (#27035)
Until now the yaml test runner only verified that the provided path parts and parameters are supported.
With this PR, the yaml test runner also checks that all required path parts and parameters are provided.
2017-10-25 19:36:42 +00:00
Luca Cavanna 8caf7d4ff8 Decouple BulkProcessor from ThreadPool (#26727)
Introduce a minimal thread scheduler as a base class for `ThreadPool`. Such a class can be used from the `BulkProcessor` to schedule retries and the flush task. This allows removing the `ThreadPool` dependency from `BulkProcessor`, which required providing settings that contain `node.name` and also needed log4j for logging. Instead, `BulkProcessor` now needs a `Scheduler`, which is much lighter and gets automatically created and shut down on close.

Closes #26028
2017-10-25 10:30:23 +02:00
Lee Hinman fcfbdf1f37 Expose adaptive replica selection stats in /_nodes/stats API
This exposes the collected metrics we store for ARS in the nodes stats, as well
as the computed rank of nodes. Each node exposes its perspective about the
cluster.

Here's an example output (with `?human`):

```json
...
"adaptive_selection" : {
  "_k6v1-wERxyUd5ke6s-D0g" : {
    "outgoing_searches" : 0,
    "avg_queue_size" : 0,
    "avg_service_time" : "7.8ms",
    "avg_service_time_ns" : 7896963,
    "avg_response_time" : "9ms",
    "avg_response_time_ns" : 9095598,
    "rank" : "9.1"
  },
  "VJiCUFoiTpySGmO00eWmtQ" : {
    "outgoing_searches" : 0,
    "avg_queue_size" : 0,
    "avg_service_time" : "1.3ms",
    "avg_service_time_ns" : 1330240,
    "avg_response_time" : "4.5ms",
    "avg_response_time_ns" : 4524154,
    "rank" : "4.5"
  },
  "DHNGTdzyT9iiaCpEUsIAKA" : {
    "outgoing_searches" : 0,
    "avg_queue_size" : 0,
    "avg_service_time" : "2.1ms",
    "avg_service_time_ns" : 2113164,
    "avg_response_time" : "6.3ms",
    "avg_response_time_ns" : 6375810,
    "rank" : "6.4"
  }
}
...
```
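
This output comes from the nodes stats API; assuming the metric filter matches the `adaptive_selection` section name, a request along these lines retrieves just this section:

````
GET /_nodes/stats/adaptive_selection?human
````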
2017-10-24 08:58:42 -06:00
Tim Brooks 277637f42f Do not set SO_LINGER on server channels (#26997)
Right now we are attempting to set SO_LINGER to 0 on server channels
when we are stopping the tcp transport. This is not a supported socket
option and throws an exception. This also prevents the channels from
being closed.

This commit 1. doesn't set SO_LINGER for server channels, 2. checks
that it is a supported option in nio, and 3. changes the log message
to warn for server channel close exceptions.
2017-10-13 13:06:38 -06:00
Jason Tedor 393e73612e Fix formatting in channel close test
This commit fixes the indentation in the transport test case for a
channel closing while connecting.
2017-10-10 13:39:45 -04:00
Jason Tedor 4c06b8f1d2 Check for closed connection while opening
While opening a connection to a node, a channel can subsequently
close. If this happens, a future callback whose purpose is to close all
other channels and disconnect from the node will fire. However, this
future will not be ready to close all the channels because the
connection will not be exposed to the future callback yet. Since this
callback is run once, we will never try to disconnect from this node
again and we will be left with a closed channel. This commit adds a
check that all channels are open before exposing the channel and throws
a general connection exception. In this case, the usual connection retry
logic will take over.

Relates #26932
2017-10-10 13:34:51 -04:00
Simon Willnauer cdd7c1e6c2 Return List instead of an array from settings (#26903)
Today we return a `String[]` that requires copying values for every
access. Yet, we already store the setting as a list so we can
return the unmodifiable list directly. This makes list / array access in settings
a much cheaper operation especially if lists are large.
2017-10-09 09:52:08 +02:00
Nhat bf4c3642b2 remove _primary and _replica shard preferences (#26791)
The shard preferences `_primary`, `_replica` and their variants were useful
for asynchronous replication. However, with the current implementation, they
are no longer useful and should be removed.
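
For reference, these preferences were passed on search requests in this style (the index name is hypothetical); such requests are what this change removes support for:

````
GET /my-index/_search?preference=_primary
````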

Closes #26335
2017-10-08 11:03:06 -04:00
Jason Tedor 470e5e7cfc Add additional low-level logging handler ()
* Add additional low-level logging handler

We have the trace handler which is useful for recording sent messages
but there are times where it would be useful to have more low-level
logging about the events occurring on a channel. This commit adds a
logging handler that can be enabled by setting the logger
`org.elasticsearch.transport.netty4.ESLoggingHandler` to the trace level; it
provides trace logging on low-level channel events and includes some
information about the request/response read/write events on the channel
as well (see the example after the notes below).

* Remove imports

* License header

* Remove redundant

* Add test

* More assertions
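
A hedged sketch of enabling the handler at runtime through the dynamic `logger.*` cluster settings (the logger name comes from this change; using the transient cluster settings API is one way to do it, a log4j2.properties entry being another):

````
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport.netty4.ESLoggingHandler": "trace"
  }
}
````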
2017-10-05 12:10:58 -04:00
Martijn van Groningen b27e408ed2
Removed void token filter entries and added two tests 2017-10-05 13:25:05 +02:00
Md. Abdulla-Al-Sun a40c474e10
Added Bengali Analyzer to Elasticsearch with respect to the Lucene update (PR#238) 2017-10-05 13:25:05 +02:00
Boaz Leskes 2a04118e88 Promote common rest test utility methods to ESRestTestCase
We have duplicates in some classes and I was about to create one more.
2017-10-05 10:08:10 +02:00
Simon Willnauer 00dfdf50cf Represent lists as actual lists inside Settings (#26878)
Today we represent each value of a list setting with its own dedicated key
that ends with the index of the value in the list. Aside from the obvious
weirdness this has several issues, especially if lists are massive, since it
causes massive runtime penalties when validating settings. For example, a list of 100k
words will literally cause a create index call to time out and in turn massively
slow down all subsequent validation runs.

With this change we use a simple string list to represent the list. This change
also forbids adding a setting that ends with a .0, which was internally used to
detect a list setting.  Once this has been rolled out for an entire major
version all the internal .0 handling can be removed since all settings will be
converted.
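
A small hedged sketch of the two representations (the setting name `path.repo` and its values are illustrative):

````
# flattened representation (before): one key per list element
"path.repo.0": "/mnt/backups/a",
"path.repo.1": "/mnt/backups/b"

# list representation (after): a real list value
"path.repo": ["/mnt/backups/a", "/mnt/backups/b"]
````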

Relates to #26723
2017-10-05 09:27:08 +02:00
Martijn van Groningen dca787ed8a
upgrade to Lucene 7.1.0 snapshot version 2017-10-05 09:06:56 +02:00
Simon Willnauer d1533e2397 Remove Settings#getAsMap() (#26845)
Since `#getAsMap` exposes internal representation we are trying to remove it
step by step. This commit is cleaning up some xcontent writing as well as
usage in tests
2017-10-04 01:21:38 -06:00
Boaz Leskes a18bd9caa2 Increase ESRestTestCase.waitForClusterStateUpdatesToFinish time out to 30s
It is set to 10 sec but sometimes it takes the cluster longer to settle.
2017-10-03 12:24:36 +02:00
Tim Brooks d80ad7f097 Check channel is open before setting SO_LINGER (#26857)
This commit fixes #26855. Right now we set SO_LINGER to 0 if we are
stopping the transport. This can throw a ChannelClosedException if the
raw channel is already closed. We have a number of scenarios where it is
possible this could be called with a channel that is already closed.
This commit fixes the issue by checking that the channel is not closed
before attempting to set the socket option.
2017-10-02 15:09:52 -06:00
Tim Brooks 9ae7a80ba5 Move raw selector usage into ESSelector (#26825)
Currently we only log generic messages about errors in logs from the
nio event handler. This means that we do not know which channel had
issues connection, reading, writing, etc.

This commit changes the logs to include the local and remote addresses
and profile for a channel.
2017-10-01 17:59:57 -06:00
Simon Willnauer 7b8d036ab5 Replace group map settings with affix setting (#26819)
We use group settings historically instead of using a prefix setting which is more restrictive and type safe. The majority of the use cases need to access a key-value map based on the _leaf node_ of the setting, i.e. the setting `index.tag.*` might be used to tag an index with `index.tag.test=42` and `index.tag.staging=12` which then would be turned into a `{"test": 42, "staging": 12}` map. The group settings would always use `Settings#getAsMap` which loses type information and uses the internal representation of the settings. Using prefix settings now allows accessing such a map in a type-safe and native way.
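
Restating the example from the message above as a sketch:

````
# group settings as written by the user
"index.tag.test": 42,
"index.tag.staging": 12

# map produced for the `index.tag.*` namespace
{"test": 42, "staging": 12}
````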
2017-09-30 14:27:21 +02:00
Tim Brooks bf403ae028 Add information about nio channels in logs (#26806)
Currently we only log generic messages about errors in logs from the
nio event handler. This means that we do not know which channel had
issues connection, reading, writing, etc.

This commit changes the logs to include the local and remote addresses
and profile for a channel.
2017-09-28 17:11:26 -06:00
Simon Willnauer 25d6778d31 Add comment to TCP transport impls why we set SO_LINGER on close 2017-09-28 13:07:01 +02:00
Armin Braun af06231d4c #26701 Close TcpTransport on RST in some Spots to Prevent Leaking TIME_WAIT Sockets (#26764)
#26701 Added option to RST instead of FIN to TcpTransport#closeChannels
2017-09-26 19:58:11 +00:00
Simon Willnauer a506ba8602 Remove `Settings#put(Map<String,String>)` (#26785)
`Map<String,String>` is basically erasing the type while other methods on
the `Settings.Builder` are type safe and have corresponding `get` methods.
2017-09-26 12:15:20 +02:00
Simon Willnauer aab4655e63 Unify Settings xcontent reading and writing (#26739)
This change adds a fromXContent method to Settings that allows to read
the xcontent that is produced by toXContent. It also replaces the entire settings
loader infrastructure and removes the structured map representation. Future PRs will
also tackle the `getAsMap` that exposes the internal representation of settings for
better encapsulation.
2017-09-25 13:23:01 +02:00