Commit Graph

1260 Commits

Author SHA1 Message Date
Tim Brooks f33f9612a7
Remove potential nio selector leak (#27825)
When an ESSelector is created an underlying nio selector is opened. This
selector is closed by the event loop after close has been signalled by
another thread.

However, it is possible that an ESSelector is created and some
exception in the startup process prevents it from ever being started,
even though close will still be called. This allows the selector to leak.

This commit addresses this issue by having the signalling thread close
the selector if the event loop is not running when close is signalled.
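
A minimal sketch of that pattern, using a plain `java.nio` selector and illustrative names (this is not the actual ESSelector code):

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative event loop: whoever signals close also closes the selector
// if the loop never started, so the underlying selector cannot leak.
class EventLoop {
    private final Selector selector;
    private final AtomicBoolean running = new AtomicBoolean(false);
    private final AtomicBoolean closed = new AtomicBoolean(false);

    EventLoop() throws IOException {
        this.selector = Selector.open();
    }

    void run() {
        running.set(true);
        try {
            while (closed.get() == false) {
                // select and dispatch events ...
            }
        } finally {
            closeSelector(); // normal path: the event loop closes the selector
        }
    }

    void close() { // may be called from another thread
        closed.set(true);
        if (running.get() == false) {
            // The event loop never started, so the finally block above will
            // never run; close the selector here to avoid leaking it.
            closeSelector();
        } else {
            selector.wakeup(); // let a running loop observe the closed flag
        }
    }

    private void closeSelector() {
        try {
            selector.close(); // idempotent, so a double close is harmless
        } catch (IOException e) {
            // best effort
        }
    }
}
```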
2017-12-14 14:37:41 -07:00
Adrien Grand 1b660821a2
Allow `_doc` as a type. (#27816)
Allowing `_doc` as a type will make the transition to 7.0 smoother for
users since the index APIs will be `PUT index/_doc/id` and `POST index/_doc`.
This also moves most of the documentation to `_doc` as a type name.

Closes #27750
Closes #27751
2017-12-14 17:47:53 +01:00
Daniel Mitterdorfer d26b33dea2 Mute VersionUtilsTest#testGradleVersionsMatchVersionUtils
Relates #27815
2017-12-14 12:33:41 +01:00
Nhat Nguyen 57fc705d5e
Keep commits and translog up to the global checkpoint (#27606)
We need to keep index commits and translog operations up to the current
global checkpoint to allow us to throw away unsafe operations and
increase the chance of operation-based recovery. This is achieved by a new
index deletion policy.
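
Roughly, the idea can be sketched with Lucene's `IndexDeletionPolicy`; the user-data key and class name below are illustrative, not the actual implementation:

```java
import java.io.IOException;
import java.util.List;
import java.util.function.LongSupplier;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

// Sketch only: keep the newest commit whose max sequence number is at or below
// the global checkpoint (and everything newer); delete older commits.
class KeepUpToGlobalCheckpointPolicy extends IndexDeletionPolicy {
    private static final String MAX_SEQ_NO = "max_seq_no"; // assumed commit user-data key
    private final LongSupplier globalCheckpoint;

    KeepUpToGlobalCheckpointPolicy(LongSupplier globalCheckpoint) {
        this.globalCheckpoint = globalCheckpoint;
    }

    @Override
    public void onInit(List<? extends IndexCommit> commits) throws IOException {
        onCommit(commits);
    }

    @Override
    public void onCommit(List<? extends IndexCommit> commits) throws IOException {
        int safeIndex = 0; // commits are ordered oldest first
        for (int i = commits.size() - 1; i >= 0; i--) {
            String maxSeqNo = commits.get(i).getUserData().get(MAX_SEQ_NO);
            if (maxSeqNo != null && Long.parseLong(maxSeqNo) <= globalCheckpoint.getAsLong()) {
                safeIndex = i;
                break;
            }
        }
        for (int i = 0; i < safeIndex; i++) {
            commits.get(i).delete(); // commits older than the safe commit are no longer needed
        }
    }
}
```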

Relates #10708
2017-12-12 19:20:08 -05:00
Tim Brooks d1acb7697b
Remove internal channel tracking in transports (#27711)
This commit attempts to continue unifying the logic between different
transport implementations. As transports call a `TcpTransport` callback
when a new channel is accepted, there is no need for each implementation
to track accepted channels internally. Instead, `TcpTransport` keeps a set
of accepted channels that is used for metrics and for shutting down channels.
2017-12-08 16:56:53 -07:00
Tim Brooks d82c40d35c
Implement byte array reusage in `NioTransport` (#27696)
This is related to #27563. This commit modifies the
InboundChannelBuffer to support releasable byte pages. These byte
pages are provided by the PageCacheRecycler. The PageCacheRecycler
must be passed to the Transport with this change.
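
As a sketch of what a releasable page might look like (illustrative, not the actual PageCacheRecycler API), a page couples a `ByteBuffer` with a callback that returns it to the recycler once the buffer is done with it:

```java
import java.nio.ByteBuffer;

// Illustrative only: a byte page that hands itself back to a recycler on close.
final class Page implements AutoCloseable {
    private final ByteBuffer buffer;
    private final Runnable onRelease; // e.g. returns the page to the recycler

    Page(ByteBuffer buffer, Runnable onRelease) {
        this.buffer = buffer;
        this.onRelease = onRelease;
    }

    ByteBuffer byteBuffer() {
        return buffer;
    }

    @Override
    public void close() {
        onRelease.run(); // the inbound buffer calls this once the bytes are consumed
    }
}
```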
2017-12-08 10:39:30 -07:00
Tim Brooks da5f52a2fc
Add test for writer operation buffer accounting (#27707)
This is a follow up to #27695. This commit adds a test checking that
across multiple writes using multiple buffers, a write operation
properly keeps track of which buffers still need to be written.
2017-12-07 12:48:49 -07:00
Christoph Büscher b83e14858a Correcting some minor typos in comments 2017-12-07 16:39:23 +01:00
Tim Brooks 5b3230cbae
Fix issue where the incorrect buffers are written (#27695)
This is a followup to #27551. That commit introduced a bug where the
incorrect byte buffers would be returned when we attempted a write. This
commit fixes the logic.
2017-12-06 20:57:46 -07:00
Tim Brooks 2aa62daed4
Introduce resizable inbound byte buffer (#27551)
This is related to #27563. In order to interface with java nio, we must
have buffers that are compatible with ByteBuffer. This commit introduces
a basic ByteBufferReference to make it easy to transfer bytes off the
wire into the application.

Additionally it introduces an InboundChannelBuffer. This is a buffer
that can internally expand as more space is needed. It is designed to
be integrated with a page recycler so that it can internally reuse pages.
The final piece is moving all of the index bookkeeping for writing bytes
to a channel into the WriteOperation.
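
A very rough sketch of the expandable-buffer idea (illustrative, not the real InboundChannelBuffer): capacity grows a page at a time from a supplier, and fully consumed pages are released back.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Simplified sketch: grows by whole pages and releases pages once consumed.
class ExpandableBuffer {
    private static final int PAGE_SIZE = 1 << 14; // assumed page size
    private final ArrayDeque<ByteBuffer> pages = new ArrayDeque<>();
    private final Supplier<ByteBuffer> pageSupplier; // e.g. backed by a page recycler
    private long consumedInFirstPage = 0;

    ExpandableBuffer(Supplier<ByteBuffer> pageSupplier) {
        this.pageSupplier = pageSupplier;
    }

    // Make sure at least `bytes` of capacity exist, adding pages as needed.
    void ensureCapacity(long bytes) {
        while ((long) pages.size() * PAGE_SIZE < bytes) {
            pages.addLast(pageSupplier.get());
        }
    }

    // Record that `bytes` have been consumed and drop pages that are fully read.
    void release(long bytes) {
        consumedInFirstPage += bytes;
        while (consumedInFirstPage >= PAGE_SIZE && pages.isEmpty() == false) {
            pages.removeFirst(); // with a recycler, the page would be returned here
            consumedInFirstPage -= PAGE_SIZE;
        }
    }
}
```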
2017-12-06 11:02:25 -07:00
Jim Ferenczi caea6b70fa
Add a new cluster setting to limit the total number of buckets returned by a request (#27581)
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi-bucket aggregator can consume buckets during the final build of the aggregation at the shard level or during the reduce phase (final or not) on the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented, and if this number exceeds the limit a TooManyBuckets exception is thrown.
This change adds the ability for multi-bucket aggregators to "consume" buckets against the global limit, which defaults to 10,000. It is an opt-in consumer, so each multi-bucket aggregator must explicitly call the consumer when a bucket is added to the response.
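
A minimal sketch of that accounting (names and the exception type are illustrative, not the actual implementation):

```java
// Shared per-request bucket accounting: aggregators report the buckets they
// create and the request fails once the configured limit is exceeded.
class BucketConsumer {
    private final int limit; // e.g. the value of search.max_buckets
    private int count;

    BucketConsumer(int limit) {
        this.limit = limit;
    }

    // Called by a multi-bucket aggregator whenever it adds buckets to the response.
    void accept(int newBuckets) {
        count += newBuckets;
        if (count > limit) {
            throw new IllegalStateException(
                "too many buckets: " + count + " is over the limit of " + limit);
        }
    }
}
```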

Closes #27452 #26012
2017-12-06 09:15:28 +01:00
Luca Cavanna f4fb4d3bf5
Add support for filtering mappings fields (#27603)
Add support for filtering fields returned as part of mappings in the get index, get mappings, get field mappings and field capabilities APIs.

Plugins can plug in their own function, which receives the index as an argument and returns a predicate that controls whether each field is included in the returned output.
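
For illustration, such a filter can be expressed as a `Function<String, Predicate<String>>` from index name to field predicate (the index and field names below are hypothetical):

```java
import java.util.function.Function;
import java.util.function.Predicate;

class FieldFilterExample {
    // Hypothetical plugin-provided filter: hide fields starting with "secret."
    // in the "logs" index, return everything for all other indices.
    static final Function<String, Predicate<String>> FIELD_FILTER =
        index -> "logs".equals(index)
            ? field -> field.startsWith("secret.") == false
            : field -> true;

    public static void main(String[] args) {
        Predicate<String> filter = FIELD_FILTER.apply("logs");
        System.out.println(filter.test("message"));      // true  -> field is returned
        System.out.println(filter.test("secret.token")); // false -> field is filtered out
    }
}
```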
2017-12-05 20:31:29 +01:00
Jason Tedor 42a4ad35da
Add node name to thread pool executor name
This commit adds the node name to the names of thread pool executors so
that the node name is visible in rejected execution exception messages.

Relates #27663
2017-12-05 07:45:40 -05:00
Lee Hinman 1ff5ef9055 [TEST] Check accounting breaker is equal to segment stats rather than 0
If there are existing indices, it may not be 0
2017-12-04 14:15:23 -07:00
Simon Willnauer 84ec472428
Include internal refreshes in refresh stats (#27615)
Today we exclude internal refreshes from the refresh stats. Yet, it is quite
confusing not to take these into account. This change includes internal refreshes
in the stats until we have dedicated stats for them.
2017-12-04 16:33:47 +01:00
Boaz Leskes f58a3d0b96 testRelocationWithConcurrentIndexing: wait for green (on relevant index) and shard initialization to settle down before starting relocation 2017-12-04 13:18:42 +01:00
Boaz Leskes 1a976ea7a4 Cherry pick tests and seqNo recovery hardening from #27580 2017-12-04 13:15:40 +01:00
Lee Hinman 623d3700f0
Add accounting circuit breaker and track segment memory usage (#27116)
* Add accounting circuit breaker and track segment memory usage

This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.

The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.

The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.
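
A simplified sketch of the two accounting modes mentioned above (illustrative only, not the actual breaker implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a breaker that either trips on over-limit requests or, for the
// accounting use case, records usage without ever failing the caller.
class SimpleBreaker {
    private final long limitBytes;
    private final AtomicLong usedBytes = new AtomicLong();

    SimpleBreaker(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    // Request-style usage: fail if the new estimate would exceed the limit.
    void addEstimateAndMaybeBreak(long bytes) {
        long newUsed = usedBytes.addAndGet(bytes);
        if (newUsed > limitBytes) {
            usedBytes.addAndGet(-bytes);
            throw new IllegalStateException("breaker tripped: " + newUsed + " > " + limitBytes);
        }
    }

    // Accounting-style usage (segment memory): always record, never break.
    void addWithoutBreaking(long bytes) {
        usedBytes.addAndGet(bytes); // negative values release memory
    }

    long used() {
        return usedBytes.get();
    }
}
```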

Resolves #27044
2017-12-01 07:59:45 -07:00
Luca Cavanna 3e8ca38fca
Deprecate the transport client in favour of the high-level REST client (#27085) 2017-12-01 12:24:16 +01:00
Tim Brooks b8557651aa
Add exception handling for write listeners (#27590)
This potential issue was exposed when I saw this PR #27542. Essentially
we currently execute the write listeners all over the place without
consistently catching and handling exceptions. Some of these exceptions
will be logged in different ways (including as low as `debug`).

This commit adds a single location where these listeners are executed.
If the listener throws an exception, the exception is caught and logged
at the `warn` level.
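
For instance, the single location could look roughly like this (a sketch using the JDK logger, not the actual code):

```java
import java.util.function.Consumer;
import java.util.logging.Level;
import java.util.logging.Logger;

// One choke point for running write listeners: any exception thrown by the
// listener is caught and logged at warn instead of propagating.
final class ListenerRunner {
    private static final Logger logger = Logger.getLogger(ListenerRunner.class.getName());

    static <T> void complete(Consumer<T> listener, T response) {
        try {
            listener.accept(response);
        } catch (Exception e) {
            logger.log(Level.WARNING, "write listener callback failed", e);
        }
    }
}
```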
2017-11-29 15:47:12 -07:00
David Turner 00867e618d
Transpose expected and actual, and remove duplicate info from message. (#27515)
Previously:
```
   > Throwable #1: java.lang.AssertionError: Expected all shards successful but got successful [8] total [9]
   > Expected: <8>
   >      but: was <9>
```

Now:
```
   > Throwable #1: java.lang.AssertionError: Expected all shards successful
   > Expected: <9>
   >      but: was <8>
```
2017-11-24 17:45:34 +00:00
Tanguy Leroux 5dc5580eac
Delete shard store files before restoring a snapshot (#27476)
Pull request #20220 added a change where the store files
that have the same name but are different from the ones in the
snapshot are deleted before the snapshot is restored.
This logic was based on the `Store.RecoveryDiff.different`
set of files, which works by computing a diff between an
existing store and a snapshot.

This works well when the files on the filesystem form a valid
shard store, i.e. there's a `segments` file and the store files
are not corrupted. Otherwise, the existing store's snapshot
metadata cannot be read (using Store#snapshotStoreMetadata())
and an exception is thrown
(CorruptIndexException, IndexFormatTooOldException etc.) which
is later caught at the beginning of the restore process
(see RestoreContext#restore()) and is translated into
an empty store metadata (Store.MetadataSnapshot.EMPTY).

This will make the deletion of different files introduced
in #20220 useless as the set of files will always be empty
even when store files exist on the filesystem. And if some
files are present within the store directory, then restoring
a snapshot with files of the same names will fail with a
FileAlreadyExistsException.

This is part of the #26865 issue.

There are various cases where some files could exist in the
store directory before a snapshot is restored. One that
Igor identified is a restore attempt that failed on a node
and only the first files were restored, then the shard is allocated
again to the same node and the restore starts again (but fails
because of existing files). Another one is when some files
of a closed index are corrupted / deleted and the index is
restored.

This commit adds a test that uses the infrastructure provided
by IndexShardTestCase in order to test that restoring a shard
succeeds even when files with the same names exist on the filesystem.

Related to #26865
2017-11-24 13:15:34 +01:00
Martijn van Groningen f1ebf366bf
unmuted test, this has been fixed by #27397
Closes #27497
2017-11-24 08:53:00 +01:00
David Turner 89ba8996c6 Consolidate version numbering semantics (#27397)
Fixes to the build system, particularly around BWC testing, and to make future
version bumps less painful.
2017-11-23 20:21:53 +00:00
Martijn van Groningen ca9c476d88
muted test 2017-11-22 19:18:35 +01:00
Tim Brooks ef34555b29
Decouple nio constructs from the tcp transport (#27484)
This is related to #27260. Currently, basic nio constructs (nio
channels, the channel factories, selector event handlers, etc) implement
logic that is specific to the tcp transport. For example, NioChannel
implements the TcpChannel interface. These nio constructs at some point
will also need to support other protocols (ex: http).

This commit separates the TcpTransport logic from the nio building
blocks.
2017-11-22 11:39:31 -06:00
Jim Ferenczi 6319424e4a
Move composite aggregation to core (#27474)
This change removes the module named aggs-composite and adds the `composite`
aggregation to core. This allows other plugins to use this new aggregation
and simplifies the integration in the HL rest client.
2017-11-21 13:31:01 +01:00
Tim Brooks f37eb1b403
Remove tcp profile from low level nio channel (#27441)
This is related to #27260. Currently every nio channel has a profile
field. Profile is a concept that only relates to the tcp transport. Http
channels will not have profiles. This commit moves the profile from the
nio channel to the read context. The context is the level at which
protocol-specific features and logic should live.
2017-11-20 12:20:42 -07:00
Tim Brooks 0a8f48d592
Transition transport apis to use void listeners (#27440)
Currently we use ActionListener<TcpChannel> for connect, close, and send
message listeners in TcpTransport. However, all of the listeners have to
capture a reference to a channel for the case where the exception API is
called. This commit changes these listeners to be of type <Void> as passing
the channel to onResponse is not necessary. Additionally, this change
makes it easier to integrate with low level transports (which use
different implementations of TcpChannel).
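
In other words, the channel moves from the listener's response type into the enclosing scope; a hypothetical sketch with a generic listener interface:

```java
// Illustrative only: the listener is parameterized with Void because the call
// site already holds the channel and can simply capture it.
interface Listener<T> {
    void onResponse(T response);
    void onFailure(Exception e);
}

class VoidListenerExample {
    static void awaitConnect(Object channel, Listener<Void> listener) {
        // ... listener.onResponse(null) on success, listener.onFailure(e) on failure
    }

    static void example(Object channel) {
        awaitConnect(channel, new Listener<Void>() {
            @Override
            public void onResponse(Void ignored) {
                System.out.println("connected " + channel); // channel captured from scope
            }

            @Override
            public void onFailure(Exception e) {
                System.out.println("failed to connect " + channel + ": " + e);
            }
        });
    }
}
```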
2017-11-20 10:47:47 -07:00
Michael Basnight 2949c53174
Remove config prompting for secrets and text (#27216)
This commit removes the ability to use ${prompt.secret} and
${prompt.text} as valid config settings. Secure settings have obsoleted
the need for this, and it cleans up some of the code in Bootstrap.
2017-11-19 22:33:17 -06:00
Michael Basnight cb3e8f4763
Move the CLI into its own subproject (#27114)
Projects that depend on the CLI currently depend on core. This should not
always be the case. The EnvironmentAwareCommand will remain in :core,
but the rest of the CLI components have been moved into their own
subproject of :core, :core:cli.
2017-11-18 21:42:57 -06:00
Tim Brooks ce45e29be7
Remove manual tracking of registered channels (#27445)
This is related to #27260. Currently, every ESSelector keeps track of
all channels that are registered with it. ESSelector is just an
abstraction over a raw java nio selector. The java nio selector already
tracks its own selection keys. This commit removes our tracking and
relies on the java nio selector tracking.
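
A small sketch of what relying on the selector's own bookkeeping looks like with plain `java.nio` (illustrative only):

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Instead of keeping a parallel collection of registered channels, iterate the
// selector's own selection keys when the channels need to be shut down.
final class SelectorShutdown {
    static void closeRegisteredChannels(Selector selector) {
        for (SelectionKey key : selector.keys()) {
            try {
                key.channel().close(); // closing the channel also cancels its key
            } catch (IOException e) {
                // best effort during shutdown
            }
        }
    }
}
```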
2017-11-17 16:20:09 -07:00
David Turner 08a257327f
Remove newline from log message (#27425)
It leads to harder-to-parse logs that look like this:

```
  1> [2017-11-16T20:46:21,804][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,812][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,820][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
  1> [2017-11-16T20:46:21,966][INFO ][o.e.t.r.y.ClientYamlTestClient] Adding header Content-Type
  1>  with value application/json
```
2017-11-17 14:12:06 +00:00
Tim Brooks f761a0e0e4
Remove unneeded Throwable handling in nio (#27412)
This is related to #27260. In the nio transport work we should not catch or
handle `Throwable`. However, there are a few places where we have exception
handlers that accept `Throwable`. This commit removes those cases.
2017-11-16 18:24:06 -07:00
David Turner 9766b858d0
Prepare for bump to 6.0.1 on the master branch (#27391)
An assortment of fixes, particularly to version number calculations, in preparation for the bump to 6.0.1.
2017-11-16 18:38:54 +00:00
Tim Brooks 80ef9bbdb1
Remove parameterization from TcpTransport (#27407)
This commit is a follow up to the work completed in #27132. Essentially
it transitions two more methods (sendMessage and getLocalAddress) from
Transport to TcpChannel. With this change, there is no longer a need for
TcpTransport to be aware of the specific type of channel a transport
returns. So that class is no longer parameterized by channel type.
2017-11-16 11:19:36 -07:00
Tim Brooks 35a5922927
Delete unneeded nio client (#27408)
This is a follow up to #27132. As that PR greatly simplified the
connection logic inside a low level transport implementation, much of
the functionality provided by the NioClient class is no longer
necessary. This commit removes that class.
2017-11-16 09:22:40 -07:00
Jim Ferenczi 623367d793
Add composite aggregator (#26800)
* This change adds a module called `aggs-composite` that defines a new aggregation named `composite`.
The `composite` aggregation is a multi-buckets aggregation that creates composite buckets made of multiple sources.
The sources for each bucket can be defined as:
  * A `terms` source, values are extracted from a field or a script.
  * A `date_histogram` source, values are extracted from a date field and rounded to the provided interval.
This aggregation can be used to retrieve all buckets of a deeply nested aggregation by flattening the nested aggregation into composite buckets.
A composite bucket is composed of one value per source and is built for each document from the combination of values in the provided sources.
For instance the following aggregation:

````
"test_agg": {
  "terms": {
    "field": "field1"
  },
  "aggs": {
    "nested_test_agg":
      "terms": {
        "field": "field2"
      }
  }
}
````
... which retrieves the top N terms for `field1` and for each top term in `field1` the top N terms for `field2`, can be replaced by a `composite` aggregation in order to retrieve **all** the combinations of `field1`, `field2` in the matching documents:

````
"composite_agg": {
  "composite": {
    "sources": [
      {
	"field1": {
          "terms": {
              "field": "field1"
            }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
        }
      },
    }
  }
````

The response of the aggregation looks like this:

````
"aggregations": {
  "composite_agg": {
    "buckets": [
      {
        "key": {
          "field1": "alabama",
          "field2": "almanach"
        },
        "doc_count": 100
      },
      {
        "key": {
          "field1": "alabama",
          "field2": "calendar"
        },
        "doc_count": 1
      },
      {
        "key": {
          "field1": "arizona",
          "field2": "calendar"
        },
        "doc_count": 1
      }
    ]
  }
}
````

By default this aggregation returns 10 buckets sorted in ascending order of the composite key.
Pagination can be achieved by providing `after` values, the values of the composite key to aggregate after.
For instance the following aggregation will aggregate all composite keys that sort after `alabama, calendar`:

````
"composite_agg": {
  "composite": {
    "after": {"field1": "alabama", "field2": "calendar"},
    "size": 100,
    "sources": [
      {
	"field1": {
          "terms": {
            "field": "field1"
          }
        }
      },
      {
	"field2": {
          "terms": {
            "field": "field2"
          }
	}
      }
    }
  }
````

This aggregation is optimized for indices that set an index sorting that matches the composite source definition.
For instance the aggregation above could run faster on indices that define an index sorting like this:

````
"settings": {
  "index.sort.field": ["field1", "field2"]
}
````

In this case the `composite` aggregation can early terminate on each segment.
This aggregation also accepts multi-valued fields but disables early termination for these fields even if index sorting matches the sources definition.
This is mandatory because index sorting picks only one value per document to perform the sort.
2017-11-16 15:13:36 +01:00
Tim Brooks ca11085bb6
Add TcpChannel to unify Transport implementations (#27132)
Right now our different transport implementations must duplicate
functionality in order to stay compliant with the requirements of
TcpTransport. They must all implement common logic to open channels,
close channels, keep track of channels for eventual shutdown, etc.

Additionally, there is a weird and complicated relationship between
Transport and TransportService. We eventually want to start merging
some of the functionality between these classes.

This commit starts moving towards a world where TransportService retains
all the application logic and channel state. Transport implementations
in this world will only be tasked with returning a channel when one is
requested, calling transport service when a channel is accepted from
a server, and starting / stopping itself.

Specifically this commit changes how channels are opened and closed. All
Transport implementations now return a channel type that must comply with
the new TcpChannel interface. This interface has the methods necessary
for TcpTransport to completely manage the lifecycle of a channel. This
includes setting the channel up, waiting for connection, adding close
listeners, and eventually closing.
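
Purely as an illustration of the shape such an abstraction might take (these method names are not the actual TcpChannel API):

```java
import java.net.InetSocketAddress;
import java.util.function.Consumer;

// Hypothetical channel surface: enough for a transport-independent caller to
// set a channel up, observe its connection, watch for close, and send bytes.
interface Channel {
    void close();
    void addCloseListener(Runnable onClose);
    void addConnectListener(Consumer<Exception> onConnect); // null means success
    InetSocketAddress getLocalAddress();
    void sendMessage(byte[] message, Consumer<Exception> onCompletion);
}
```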
2017-11-15 12:38:39 -07:00
Luca Cavanna 382da0f227
REST spec: Validate that api name matches file name that contains it (#27366)
This commit validates that each spec json file contains an API that has the same name as the file
2017-11-14 14:53:00 +01:00
Simon Willnauer 2299c70371
Allow affix settings to specify dependencies (#27161)
We use affix settings to group settings / values under a certain namespace.
In some cases, like login information, a setting is only valid if
one or more other settings are present. For instance `x.test.user` is only valid
if there is an `x.test.passwd` present and vice versa. This change makes it possible
to specify such a dependency to prevent settings updates that leave settings in an
inconsistent state.
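
Conceptually the dependency check amounts to something like the following sketch over plain setting keys (the actual affix-settings machinery is more involved):

```java
import java.util.Map;
import java.util.Set;

final class SettingDependencyCheck {
    // For every "*.user" key require a sibling "*.passwd" key in the same namespace.
    static void validate(Map<String, String> settings) {
        Set<String> keys = settings.keySet();
        for (String key : keys) {
            if (key.endsWith(".user")) {
                String required = key.substring(0, key.length() - ".user".length()) + ".passwd";
                if (keys.contains(required) == false) {
                    throw new IllegalArgumentException(
                        "missing required setting [" + required + "] for setting [" + key + "]");
                }
            }
        }
    }

    public static void main(String[] args) {
        validate(Map.of("x.test.user", "alice", "x.test.passwd", "secret")); // ok
        validate(Map.of("x.test.user", "alice")); // throws IllegalArgumentException
    }
}
```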
2017-11-13 12:06:36 +01:00
Simon Willnauer a34c2f0b8d
Ensure external refreshes will also refresh internal searcher to minimize segment creation (#27253)
We cut over to internal and external IndexReader/IndexSearcher in #26972, which uses
two independent searcher managers. This has the downside that refreshes of the external
reader will never clear the internal version map, which in turn will trigger additional
and potentially unnecessary segment flushes since memory must be freed. Under heavy
indexing load with low refresh intervals this can cause excessive segment creation which
causes high GC activity and significantly increases the required segment merges.

This change adds a dedicated external reference manager that delegates refreshes to the
internal reference manager and then `steals` the refreshed reader from it for external
usage. This ensures that external and internal readers are consistent on an external
refresh. As a side effect this also releases old segments referenced by the internal
reference manager, which can otherwise hold on to already merged-away segments until it
is refreshed due to a flush or indexing activity.
2017-11-09 08:40:22 +00:00
Tim Brooks dc86b4c2ed
Decouple `ChannelFactory` from Tcp classes (#27286)
* Decouple `ChannelFactory` from Tcp classes

This is related to #27260. Currently `ChannelFactory` is tightly coupled
to classes related to the elasticsearch Tcp binary protocol. This commit
modifies the factory to be able to construct http or other protocol
channels.
2017-11-08 14:30:00 -07:00
Jason Tedor d5451b2037
Die with dignity while merging
If an out of memory error is thrown while merging, today we quietly
rewrap it into a merge exception and the out of memory error is
lost. Instead, we need to rethrow out of memory errors, and in fact any
fatal error here, and let those go uncaught so that the node is torn
down. This commit causes this to be the case.
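
The pattern boils down to something like this sketch (illustrative, not the actual merge code):

```java
// Never swallow fatal errors by wrapping them: rethrow them so they go
// uncaught and the node is torn down; wrap everything else as before.
final class MergeErrorHandling {
    static void handleMergeFailure(Throwable t) {
        if (t instanceof Error) {
            throw (Error) t; // out of memory and other fatal errors escape
        }
        throw new RuntimeException("merge failed", t); // stand-in for a merge exception
    }
}
```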

Relates #27265
2017-11-06 17:55:11 -05:00
Jason Tedor 766d29e7cf
Correctly encode warning headers
The warnings headers have a fairly limited set of valid characters
(cf. quoted-text in RFC 7230). While we have assertions that we adhere
to this set of valid characters, ensuring that our warning messages do
not violate the specification, we were neglecting the possibility that
arbitrary user input would trickle into these warning headers. Thus,
missing here were tests for these situations and encoding of characters
that appear outside the set of valid characters. This commit addresses
this by encoding any characters in a deprecation message that are not
from the set of valid characters.
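
As an illustration, one way to encode the offending characters is to percent-encode the UTF-8 bytes of anything outside the quoted-text range; the exact scheme used for the warning headers may differ:

```java
import java.nio.charset.StandardCharsets;

final class WarningEncoderSketch {
    static String encode(String message) {
        StringBuilder sb = new StringBuilder();
        for (char c : message.toCharArray()) {
            if (isQuotedText(c)) {
                sb.append(c);
            } else {
                // percent-encode the UTF-8 bytes of characters outside the valid set
                for (byte b : String.valueOf(c).getBytes(StandardCharsets.UTF_8)) {
                    sb.append('%').append(String.format("%02X", b));
                }
            }
        }
        return sb.toString();
    }

    // qdtext per RFC 7230: HTAB / SP / %x21 / %x23-5B / %x5D-7E (obs-text omitted)
    private static boolean isQuotedText(char c) {
        return c == '\t' || c == ' ' || c == 0x21
            || (c >= 0x23 && c <= 0x5B) || (c >= 0x5D && c <= 0x7E);
    }
}
```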

Relates #27269
2017-11-06 13:20:30 -05:00
Simon Willnauer bd7efa908a Add ability to split shards (#26931)
This change adds a new `_split` API that allows splitting indices into a new
index with a power of two more shards than the source index. This API works
alongside the `_shrink` API but doesn't require any shard relocation before
indices can be split.

The split operation is conceptually an inverse `_shrink` operation since we
initialize the index with a _synthetic_ number of routing shards that are used
for the consistent hashing at index time. Compared to indices created with
earlier versions this might produce slightly different shard distributions but
has no impact on the per-index backwards compatibility.  For now, the user is
required to prepare an index to be splittable by setting the
`index.number_of_routing_shards` at index creation time.  The setting allows the
user to prepare the index to be splittable in factors of
`index.number_of_routing_shards`, i.e. if the index is created with
`index.number_of_routing_shards: 16` and `index.number_of_shards: 2` it can be
split into `4, 8, 16` shards. This is an intermediate step until we can make
this the default. This also allows us to safely backport this change to 6.x.

The `_split` operation is implemented internally as a DeleteByQuery at the
Lucene level that is executed while the primary shards execute their initial
recovery. Subsequent merges that are triggered due to this operation will not be
executed immediately. All merges will be deferred until the shards are started
and will then be throttled accordingly.

This change is intended for the 6.1 feature release but will not support splitting
pre-6.1 indices unless these indices have been shrunk before. In that case
these indices can be split backwards into their original number of shards.
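
To make the factor arithmetic concrete, a small sketch (not the actual validation code), under the assumption stated above that valid targets are power-of-two multiples of the current shard count that divide `index.number_of_routing_shards`:

```java
import java.util.ArrayList;
import java.util.List;

final class SplitTargets {
    // With index.number_of_routing_shards: 16 and index.number_of_shards: 2,
    // the valid split targets are 4, 8 and 16.
    static List<Integer> validTargets(int routingShards, int currentShards) {
        List<Integer> targets = new ArrayList<>();
        for (int target = currentShards * 2; target <= routingShards; target *= 2) {
            if (routingShards % target == 0) {
                targets.add(target);
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        System.out.println(validTargets(16, 2)); // [4, 8, 16]
    }
}
```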
2017-11-06 11:37:55 +01:00
Tanguy Leroux 43e7a4a349
Upgrade to Jackson 2.8.10 (#27230)
While it's not possible to upgrade the Jackson dependencies 
to their latest versions yet (see #27032 (comment) for more) 
it's still possible to upgrade to the latest 2.8.x version.
2017-11-06 10:20:05 +01:00
Jim Ferenczi 429275a773
Remove ElasticsearchQueryCachingPolicy (#27190)
We have a hidden setting called `index.queries.cache.term_queries` that disables caching of term queries in the query cache.
However, term queries have not been cached by Lucene's UsageTrackingQueryCachingPolicy since version 6.5.
This makes the es policy useless but also makes it impossible to re-enable caching for term queries.
Since that change appeared in Lucene 6.5, this setting has been a no-op since version 5.4 of Elasticsearch.
The change in this PR removes the setting and the custom policy.
2017-11-06 08:26:24 +01:00
David Roberts 749c3ec716
Remove the single argument Environment constructor (#27235)
Only tests should use the single argument Environment constructor.  To
enforce this the single arg Environment constructor has been replaced with
a test framework factory method.

Production code (beyond initial Bootstrap) should always use the same
Environment object that Node.getEnvironment() returns.  This Environment
is also available via dependency injection.
2017-11-04 13:25:09 +00:00
kel 0f21262b36 Do not create directories if repository is readonly (#26909)
For FsBlobStore and HdfsBlobStore, if the repository is read only, the blob store should be aware of the readonly setting and not create directories if they don't exist.

Closes #21495
2017-11-03 13:10:50 +01:00