Commit Graph

150 Commits

Author SHA1 Message Date
David Turner 89ba8996c6 Consolidate version numbering semantics (#27397)
Fixes to the build system, particularly around BWC testing, and to make future
version bumps less painful.
2017-11-23 20:21:53 +00:00
Jim Ferenczi 6319424e4a
Move composite aggregation to core (#27474)
This change removes the module named aggs-composite and adds the `composite` agg
as a core aggregation. This allows other plugins to use this new aggregation
and simplifies the integration in the HL rest client.
2017-11-21 13:31:01 +01:00
Luca Cavanna 29450de7b5
Cross Cluster Search: make remote clusters optional (#27182)
Today Cross Cluster Search requires at least one node in each remote cluster to be up once the cross cluster search is run. Otherwise the whole search request fails even though some of the data (local and/or remote) is available. This happens when performing the _search/shards calls to find out which remote shards the query has to be executed on. This scenario is different from shard failures that may happen later on when the query is actually executed (e.g. if remote shards are missing), which will not fail the whole request but rather yield partial results, and the _shards section in the response will indicate that.

This commit introduces a boolean setting per cluster called search.remote.$cluster_alias.skip_if_disconnected, set to false by default, which allows skipping certain clusters if they are down when trying to reach them through a cross cluster search request. By default all clusters are mandatory.

Scroll requests support this setting too when they are first initiated (the first search request with the scroll parameter), but subsequent scroll rounds (_search/scroll endpoint) will fail if some of the remote clusters went down in the meantime.

The search API response now contains a new _clusters section, similar to the _shards section, that gets returned whenever one or more clusters were disconnected and got skipped:

"_clusters" : {
    "total" : 3,
    "successful" : 2,
    "skipped" : 1
}
This section won't be part of the response if no clusters have been skipped.

The per cluster skip_unavailable setting value has also been added to the output of the remote/info API.
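A minimal sketch of what enabling this for one remote cluster could look like via the cluster settings API (the remote cluster alias `cluster_two` is hypothetical, and the setting is shown under the `skip_unavailable` name mentioned above; the message also refers to it as `skip_if_disconnected`):

````
PUT _cluster/settings
{
  "persistent": {
    "search.remote.cluster_two.skip_unavailable": true
  }
}
````

With this set, a cross cluster search that cannot reach `cluster_two` would skip it and report it under `skipped` in the `_clusters` section instead of failing the whole request.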
2017-11-21 11:41:47 +01:00
Michael Basnight cb3e8f4763
Move the CLI into its own subproject (#27114)
Projects that depend on the CLI currently depend on core. This should not
always be the case. The EnvironmentAwareCommand will remain in :core,
but the rest of the CLI components have been moved into their own
subproject of :core, :core:cli.
2017-11-18 21:42:57 -06:00
Jim Ferenczi 623367d793
Add composite aggregator (#26800)
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-bucket aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
  * A `terms` source, values are extracted from a field or a script.
  * A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation into composite buckets.
A composite bucket is composed of one value per source and is built for each document from the combination of values in the provided sources.
For instance, the following aggregation:

````
"test_agg": {
  "terms": {
    "field": "field1"
  },
  "aggs": {
    "nested_test_agg":
      "terms": {
        "field": "field2"
      }
  }
}
````
... which retrieves the top N terms for `field1` and, for each top term in `field1`, the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1` and `field2` in the matching documents:

````
"composite_agg": {
  "composite": {
    "sources": [
      {
	"field1": {
          "terms": {
              "field": "field1"
            }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
        }
      },
    }
  }
````

The response of the aggregation looks like this:

````
"aggregations": {
  "composite_agg": {
    "buckets": [
      {
        "key": {
          "field1": "alabama",
          "field2": "almanach"
        },
        "doc_count": 100
      },
      {
        "key": {
          "field1": "alabama",
          "field2": "calendar"
        },
        "doc_count": 1
      },
      {
        "key": {
          "field1": "arizona",
          "field2": "calendar"
        },
        "doc_count": 1
      }
    ]
  }
}
````

By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, i.e. the values of the composite key to aggregate after.
For instance, the following aggregation will aggregate all composite keys that sort after `alabama, calendar`:

````
"composite_agg": {
  "composite": {
    "after": {"field1": "alabama", "field2": "calendar"},
    "size": 100,
    "sources": [
      {
	"field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
	}
      }
    }
  }
````

This aggregation is optimized for indices that set an index sorting that matches the composite source definition.
For instance, the aggregation above could run faster on indices that define an index sorting like this:

````
"settings": {
  "index.sort.field": ["field1", "field2"]
}
````

In this case the `composite` aggregation can early terminate on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
2017-11-16 15:13:36 +01:00
Adrien Grand 93da7720ff Move non-core mappers to a module. (#26549)
Today we have all non-plugin mappers in core. I'd like to start moving those
that neither map to JSON datatypes nor are very frequently used (like `date` or
`ip` are) to a module.

This commit creates a new module called `mappers-extra` and moves the
`scaled_float` and `token_count` mappers to it. I'd like to eventually move
`range` fields there but it's more complicated due to their intimate
relationship with range queries.

Relates #10368
2017-09-13 17:58:53 +02:00
Michael Basnight cfd14cd2b8 Revert shading for the low level rest client (#26367)
At present, we do not feel there is enough of a reason to shade the low
level rest client. It caused problems with commons logging and IDEs
during the brief time it was used. We do not know exactly how many
users will need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.

Closes #26328

This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
2017-08-25 14:13:12 -05:00
Martijn van Groningen 7c3735bdc4
percolator: Store the QueryBuilder's Writeable representation instead of its XContent representation.
The Writeable representation is less heavy to parse, which will benefit percolate performance and throughput.

The query builder's binary format now has the same bwc guarantees as the xcontent format.

Added a qa test that verifies that percolator queries written in older versions are still readable by the current version.
2017-07-28 12:24:10 +02:00
Yannick Welsch 1a01514081 Move tribe to a module (#25778)
This commit moves tribe to a module, stripping core from the tribe functionality.
2017-07-28 11:23:50 +02:00
Michael Basnight 3a20922046 Fix eclipse issues related to rest client shading (#25874)
* A cycle was detected in eclipse, and was fixed in the same fashion as
  core and core-tests.
* The rest client deps jar was not properly exported in the generated
  eclipse classpath file for rest client.

Relates #25208
2017-07-27 10:20:53 -05:00
Nik Everett bc4df47a76 Rework bwc snapshot projects to build up to two bwc versions (#24870)
Removes the `distribution:bwc` project in favor of
`distribution:bwc-release-snapshot` and
`distribution:bwc-stable-snapshot`.
`distribution:bwc-release-snapshot` builds a snapshot of the
latest release branch (5.4 now) if needed for backwards
compatibility. `distribution:bwc-stable-snapshot` builds a
snapshot of the latest stable branch (5.x now) if needed for
backwards compatibility.
2017-05-29 10:22:32 -04:00
Jason Tedor 09dd03e19f Verify Lucene version constants
The Lucene version constants for 5.4.1 and 5.5.0 are wrong: they are
listed as 6.5.0 instead of 6.5.1. This commit fixes these issues, and
adds a test to ensure that this does not happen again.

Relates #24923
2017-05-27 15:46:16 -04:00
Nik Everett e072cc7770 Begin replacing static index tests with full restart tests (#24846)
These tests spin up two nodes of an older version of Elasticsearch,
create some stuff, shut down the nodes, start the current version,
and verify that the created stuff works.

You can run `gradle qa:full-cluster-restart:check` to run these
tests against the head of the previous branch of Elasticsearch
(5.x for master, 5.4 for 5.x, etc) or you can run
`gradle qa:full-cluster-restart:bwcTest` to run this test against
all "index compatible" versions, one after the other. For master
this is every released version in the 5.x.y series *and* the tip
of the 5.x branch.

I'd love to add more to these tests in the future but these
currently just cover the functionality of the `create_bwc_index.py`
script and start to cover the assertions in the
`OldIndexBackwardsCompatibilityIT` test.
2017-05-26 14:07:48 -04:00
Jason Tedor 95ebdbaddb Add BWC packaging distributions
Some packaging tests depend on snapshot versions of packaging
distributions yet the build does not use a repository that includes such
distributions. While we could add such a repository, a better strategy
is to follow our approach for other BWC tests where we depend on a
locally-compiled archive distribution. This commit adds a local
compilation of packaging artifacts and substitutes these anywhere that
we would otherwise depend on a snapshot of these artifacts.

Relates #24861
2017-05-24 11:55:32 -04:00
Ryan Ernst 1964e5c1d0 Test: Make mixed cluster bwc test per wire compat version (#24780)
This commit renames the backwards-5.0 qa test to mixed-cluster and
creates a test within the project per wire compat version. Like with
rolling upgrade tests, the integTest task will run against the most
recent version, while all versions will be tested with the bwcTest task.
2017-05-18 14:20:23 -07:00
Jim Ferenczi 279a18a527 Add parent-join module (#24638)
* Add parent-join module

This change adds a new module named `parent-join`.
The goal of this module is to provide a replacement for the `_parent` field but as a first step this change only moves the `has_child`, `has_parent` queries and the `children` aggregation to this module.
These queries and aggregations are no longer in core but they are deployed by default as a module.

Relates #20257
2017-05-12 15:58:06 +02:00
Ryan Ernst c1f1f66509 Scripting: Replace advanced and native scripts with ScriptEngine docs (#24603)
This commit documents how to write a `ScriptEngine` in order to use
expert internal apis, such as using Lucene directly to find index term
statistics. These documents prepare the way to remove both native
scripts and IndexLookup.

The example java code is actually compiled and tested under a new gradle
subproject for example plugins. This change does not yet break up
jvm-example into the new examples dir, which should be done separately.

relates #19359
relates #19966
2017-05-11 12:15:16 -07:00
Nik Everett 8188569fd1 Add qa module that tests reindex-from-remote against pre-5.0 versions of Elasticsearch (#24561)
Adds tests for reindex-from-remote for the latest 2.4, 1.7, and
0.90 releases. 2.4 and 1.7 are fairly popular versions but 0.90
is a point of pride.

This fixes any issues those tests revealed.

Closes #23828
Closes #24520
2017-05-11 10:06:20 -04:00
James Baiera 6a113ae499 Introduce Kerberos Test Fixture for Repository HDFS Security Tests (#24493)
This PR introduces a subproject in test/fixtures that contains a Vagrantfile used for standing up a 
KRB5 KDC (Kerberos). The PR also includes helper scripts for provisioning principals, a few 
changes to the HDFS Fixture to allow it to interface with the KDC, as well as a new suite of 
integration tests for the HDFS Repository plugin.

The HDFS Repository plugin senses if the local environment can support the HDFS Fixture 
(Windows is generally a restricted environment). If it can use the regular fixture, it then tests if 
Vagrant is installed with a compatible version to determine if the secure test fixtures should be 
enabled. If the secure tests are enabled, then we create a Kerberos KDC fixture, tasks for adding 
the required principals, and an HDFS fixture configured for security. A new integration test task is 
also configured to use the KDC and secure HDFS fixture and to run a testing suite that uses 
authentication. At the end of the secure integration test the fixtures are torn down.
2017-05-10 17:42:20 -04:00
Nik Everett 447f307ebb Fix _bulk response when it can't create an index (#24048)
Before #22488 when an index couldn't be created during a `_bulk`
operation we'd do all the *other* actions and return the index
creation error on each failing action. In #22488 we accidentally
changed it so that we now reject the entire bulk request if a single
action cannot create an index that it must create to run. This
reverts to the old behavior while still keeping the nicer
error messages. Instead of failing the entire request we now only
fail the portions of the request that can't work because the index
doesn't exist.
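
A hedged illustration (the index names and documents below are hypothetical): with this fix, a bulk request like the following only fails the items that target an index that cannot be created, while the remaining items still execute:

````
POST _bulk
{ "index": { "_index": "existing-index", "_type": "doc", "_id": "1" } }
{ "field": "value" }
{ "index": { "_index": "uncreatable-index", "_type": "doc", "_id": "1" } }
{ "field": "value" }
````

Only the second item would carry the index creation error in the response; the first is indexed normally.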

Closes #24028
2017-04-21 18:56:04 -04:00
Jason Tedor 2dd924bc15 Add Wildfly integration test
An important use case for our users is deploying our clients inside of
application containers like Wildfly. Sometimes, we make changes that
unintentionally break this use case. We need to know before we ship a
release that we have broken such use cases. As Wildfly is one of the
bigger application containers, this commit starts by adding an
integration test that deploys an application using the transport client
to Wildfly and ensures that all is well. Future work can add similar
integration tests for the low-level and high-level REST clients.

Relates #24147
2017-04-21 12:51:14 -04:00
Ryan Ernst ba48674695 Build: Move plugin cli and tests to distribution tool (#24220)
The plugin cli currently resides inside the elasticsearch jar. This
commit moves it into a plugin-cli jar. This change alone is a no-op;
it does not change anything about what is loaded at runtime. But it will
allow easier testing (with fixtures in the future to test ES or maven
installation), as well as eventually not loading these classes when
starting elasticsearch.
2017-04-21 09:25:58 -07:00
Nik Everett caf376c8af Start building analysis-common module (#23614)
Start moving built in analysis components into the new analysis-common
module. The goal of this project is:
1. Remove core's dependency on lucene-analyzers-common.jar which should
shrink the dependencies for transport client and high level rest client.
2. Prove that analysis plugins can do all the "built in" things by moving all
"built in" behavior to a plugin.
3. Force tests not to depend on any oddball analyzer behavior. If tests
need anything more than the standard analyzer they can use the mock
analyzer provided by Lucene's test infrastructure.
2017-04-19 18:51:34 -04:00
Ryan Ernst 8c53555b28 Tests: Use local clone build of 5.x with bwc tests (#22946)
The current rest backcompat tests, which run against a mixed cluster of
5.x and 6.0 nodes, depend on snapshot builds of 5.x. However, this has
the potential for inconsistency that results in CI failures, and this happens
quite often: whenever some backcompat logic is added to 5.x, the bwc
test on master fails because the 5.x code has not yet been published as
a snapshot.

This change creates a git clone of the 5.x branch,
builds the zip distribution, and ties that into gradle substitutions for
the 5.x version.
2017-03-23 22:32:13 -07:00
Ryan Ernst 60b823c756 Add version checker tool to distributions 2017-02-16 09:06:49 -05:00
Tim Brooks 719e75bb3f Add repository-url module and move URLRepository (#22752)
This is related to #22116. URLRepository requires SocketPermission
connect. This commit introduces a new module called "repository-url"
where URLRepository will reside. With the new module, permissions can
be removed from core.
2017-01-25 17:09:25 -06:00
Simon Willnauer 418ec62bfb Merge branch 'master' into feature/multi_cluster_search 2017-01-06 10:24:40 +01:00
javanna f0181b19f5 add REST high level client gradle submodule and first simple method
The RestHighLevelClient class takes as an argument a low level client instance, RestClient. The first method added is ping, which returns true if the call to HEAD / went ok and false if an IOException was thrown. Any other exception gets bubbled up.

There are two kinds of tests, a unit test (RestHighLevelClientTests) that verifies the interaction between high level and low level client, and an integration test (MainActionIT) which relies on an externally started es cluster to send requests to.
2017-01-05 10:55:47 +01:00
Simon Willnauer 7be3af1123 Merge branch 'master' into feature/multi_cluster_search 2016-12-20 14:32:09 +01:00
Ryan Ernst f0c0f571bf Build: Remove hardcoded reference to x-plugins in build (#21773)
* Build: Remove hardcoded reference to x-plugins in build

The gradle build currently allows extra plugins to be hooked into the
elasticsearch build by placing them under an x-plugins directory as a sibling
of the elasticsearch checkout. This change converts this directory to
one called elasticsearch-extra. The subdirectories of
elasticsearch-extra become first level projects in gradle (while before
they would have been subprojects of `:x-plugins`). Additionally, this
allows major version checkouts to be associated with extra plugins. For
example, you could have elasticsearch checked out as
`elasticsearch-5.x`, and a sibling `elasticsearch-5.x-extra` directory.
2016-12-14 15:02:07 -08:00
Simon Willnauer ec86771f6e Add a dedicated integ test project for multi-cluster-search
This commit adds `qa/multi-cluster-search` which currently does a
simple search across 2 clusters. This commit also adds support for IPv6
addresses and fixes an issue where all shards of the local cluster were searched
when only a remote index was given.
2016-11-24 14:17:53 +01:00
Ryan Ernst 6940b2b8c7 Remove groovy scripting language (#21607)
* Scripting: Remove groovy scripting language

Groovy was deprecated in 5.0. This change removes it, along with the
legacy default language infrastructure in scripting.
2016-11-22 19:24:12 -08:00
Nik Everett c79371fd5b Remove lang-python and lang-javascript (#20734)
They were deprecated in 5.0. We are concentrating on making
Painless awesome rather than supporting every language possible.

Closes #20698
2016-11-21 22:13:25 -05:00
Simon Willnauer de04aad994 Remove `modules/transport_netty_3` in favor of `netty_4` (#21590)
We kept `netty_3` as a fallback in the 5.x series but now that master
is 6.0 we don't need this; in other words, all issues coming up with
netty 4 will be blockers for 6.0.
2016-11-17 12:44:42 +01:00
David Roberts 116593e5f5 Adjust bootstrap sequence (#21543)
Added the ability for plugins to spawn a controller process at startup
2016-11-17 09:58:09 +00:00
Jason Tedor 491a945ac8 Add socket permissions for tribe nodes
Today when a node starts, we create dynamic socket permissions based on
the configured HTTP ports and transport ports. If no ports are
configured, we use the default port ranges. When a tribe node starts, a
tribe node creates an internal node client for connecting to each remote
cluster. If neither an explicit HTTP port nor transport ports were
specified, the default port ranges are large enough for the tribe node
and its internal node clients. If an explicit HTTP port or transport
port was specified for the tribe node, then socket permissions for those
ports will be created, but not for the internal node clients. Whether
the internal node clients have explicit ports specified, or attempt to
bind within the default range, socket permissions for these will not
have been created and the internal node clients will hit a permissions
issue when attempting to bind. This commit addresses this issue by also
accounting for tribe nodes when creating the dynamic socket
permissions. Additionally, we add our first real integration test for
tribe nodes.
2016-11-14 11:58:44 -05:00
Christoph Büscher a9b0b97703 Expose Lucene's Ukrainian analyzer
Since Lucene 6.2, the UkrainianMorfologikAnalyzer is available through the
lucene-analyzers-morfologik jar. This change exposes it so it can be used as an
Elasticsearch plugin.
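A hedged sketch of how the analyzer could then be used in a mapping (the index name, type name, and field are hypothetical, and this assumes the plugin registers the analyzer under the name `ukrainian`):

````
PUT /my-index
{
  "mappings": {
    "doc": {
      "properties": {
        "body": {
          "type": "text",
          "analyzer": "ukrainian"
        }
      }
    }
  }
}
````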
2016-10-31 18:20:39 +01:00
Ali Beyad 50584c4103 Merge pull request #20532 from rjernst/rolling_upgrades
This PR introduces backward compatibility index tests to test the rolling upgrade process amongst Elasticsearch instances within the same major version. The test executes in three phases. In the first phase, we form a cluster of 2 ES instances on an old version. In the second phase, we keep one of the nodes from the old cluster, kill the other node, but preserve its data directory and start an instance of the current version of ES using the same data directory as the killed instance. In the third phase, we kill the other old version ES instance from the first phase and launch a new instance, using the same data directory as the killed instance. Therefore, during phase 3, we have fully migrated and have all current versions of ES running. In each phase, we run REST tests that index documents and search them, ensuring at each stage that the documents from the previous phase are still there.

Note that because we haven't released a GA of 5.0 yet, the tests currently don't start an old version cluster in the first phase. Once GA is released, this will be changed to make the backward compatibility version 5.0, while the current version in the cluster will be 5.x.
2016-09-19 16:14:38 -04:00
David Pilato dfd1eebdd0 Remove mapper attachments plugin
We now have the `ingest-attachment` plugin in 5.0.0.
We can remove the `mapper-attachments` plugin for 6.0.

Closes #18837.
2016-09-19 09:01:16 +02:00
Ali Beyad ce86ed1fdd Merge remote-tracking branch 'upstream/master' into rolling_upgrades 2016-09-16 10:43:38 -04:00
Ali Beyad 4431720c3d File-based discovery plugin (#20394)
This commit introduces a new plugin for file-based unicast hosts
discovery. This allows specifying the unicast hosts participating
in discovery through a `unicast_hosts.txt` file located in the
`config/discovery-file` directory. The plugin will use the hosts 
specified in this file as the set of hosts to ping during discovery.

The format of the `unicast_hosts.txt` file is to have one host/port
entry per line. The hosts file is read and parsed every time
discovery makes ping requests, thus a new version of the file that
is published to the config directory will automatically be picked
up.
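
A hedged example of what `config/discovery-file/unicast_hosts.txt` might contain (the hosts and ports below are hypothetical; per the format above, one host or host:port entry per line):

````
10.0.0.1
10.0.0.2:9300
seeds.example.com:9301
````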

Closes #20323
2016-09-13 20:52:39 -04:00
Ryan Ernst ecaf6ef001 Add rolling upgrade test project 2016-08-31 16:45:03 -07:00
Daniel Mitterdorfer 7b81c4ca59 Add client-benchmark-noop-api-plugin to stress clients even more in benchmarks (#20103) 2016-08-26 09:05:47 +02:00
Daniel Mitterdorfer c33f85bc37 Add client benchmark
With this commit we add a benchmark for the new REST client and the
existing transport client.

Closes #19281
2016-07-26 11:01:22 +02:00
Jason Tedor 2d1b0587dd Introduce Netty 4
This commit adds transport-netty4, a transport and HTTP implementation
based on Netty 4.

Relates #19526
2016-07-22 22:26:35 -04:00
Simon Willnauer 8394544548 Add a dedicated client/transport project for transport-client (#19435)
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.

Preinstalled modules are:
 * transport-netty
 * lang-mustache
 * reindex
 * percolator

The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.

Closes #19412
2016-07-18 15:42:24 +02:00
Jason Tedor 31c648eee8 Rename transport-netty to transport-netty3
This commit renames the Netty 3 transport module from transport-netty to
transport-netty3. This is to make room for a Netty 4 transport module,
transport-netty4.

Relates #19439
2016-07-14 22:03:14 -04:00
Simon Willnauer 048e4416e7 Move netty transport and http into a module
This moves all netty code and its dependencies into a module.
2016-07-11 22:21:29 +02:00
Simon Willnauer 47bd2f9ca5 More cleanups around tests that require HTTP to be enabled. (#19363)
This commit moves most of the HTTP-related integ tests out into their own
`qa/smoke-test-http` project where most of the tests can run against the external cluster.
2016-07-11 20:44:57 +02:00
Martijn van Groningen b4defafcb2 ingest: Renamed from `ingest-useragent` to `ingest-user-agent` and processor from `useragent` to `user_agent`
and did similar renaming in some other places. This is consistent with ES naming. Also made sure that the docs are navigable from the reference guide.
2016-07-07 09:43:43 +02:00