This adds the `_primary_term` field internally to the mappings. This field is
populated with the current shard's primary term.
It is intended to be used for collision resolution when two document copies have
the same sequence id; therefore, doc_values for the field are stored but the
field itself is not indexed.
This also fixes the `_seq_no` field so that doc_values are retrievable (they
were previously stored but irretrievable) and changes the `stats` implementation
to more efficiently use the points API to retrieve the min/max instead of
iterating over each doc_values value. Additionally, even though we intend to be
able to search on the field, it was previously not searchable. This commit makes
it searchable.
There is no user-visible `_primary_term` field. Instead, the fields are
updated by calling:
```java
index.parsedDoc().updateSeqID(seqNum, primaryTerm);
```
This includes example methods in `Versions` and `Engine` for retrieving the
sequence id values from the index (see `Engine.getSequenceID`) that are only
used in unit tests. These will be extended/replaced by actual implementations
once we make use of sequence numbers as a conflict resolution measure.
Relates to #10708
Supersedes #21480
P.S. As a side effect of this commit, `SlowCompositeReaderWrapper` cannot be
used for documents that contain `_seq_no` because it is a Point value and SCRW
cannot wrap documents with points, so the tests have been updated to loop
through the `LeafReaderContext`s now instead.
This adds a fromXContent method and unit test to the HighlightField class so we
can parse it as part of a search response. This is part of the preparation for
parsing search responses on the client side.
`_update_by_query` supports specifying the `pipeline` to process the
documents as a url parameter but `_reindex` doesn't. It doesn't because
everything about the `_reindex` request that has to do with writing
the documents is grouped under the `dest` object in the request body.
This changes the error message from
`request [_reindex] contains unrecognized parameter: [pipeline]` to
`_reindex doesn't support [pipeline] as a query parameter. Specify it in the [dest] object instead.`
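For reference, a request with the pipeline under `dest` might look like this (the index and pipeline names are hypothetical):
```js
POST _reindex
{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest",
    "pipeline": "some_ingest_pipeline"
  }
}
```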
Action filters currently have the ability to filter both the request and
response. But the response side was not actually used. This change
removes support for filtering responses with action filters.
Changes the default socket and connection timeouts for the rest
client from 10 seconds to the more generous 30 seconds.
Defaults reindex-from-remote to those timeouts and make the
timeouts configurable like so:
```
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://otherhost:9200",
      "socket_timeout": "1m",
      "connect_timeout": "10s"
    },
    "index": "source",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
```
Closes #21707
I added an assertion to Netty4/Netty3Transport in 5.x that is not in
master yet. This commit ports the assertion to ensure we consumed all connections
in `connectToChannels`.
We don't use the test infra nor do we run the tests. They might all be
entirely out of date. We also have a different BWC test infra in place.
This change removes all of the legacy infra.
Timeouts are global today across all connections. This commit allows specifying
a connection timeout per node so that, depending on the context, connections can
be established with different timeouts.
Relates to #19719
We currently treat every node equally when we establish connections to a node.
Yet, if a node is not master-eligible or can't hold any data, there is no point in creating
a dedicated connection for sending the cluster state or running remote recoveries, respectively.
The usage of STATE and RECOVERY connections on non-master and/or non-data nodes will result in an IllegalStateException.
For the record, I also had to remove the geo-hash cell and geo-distance range
queries to make the code compile. These queries already throw an exception in
all cases with 5.x indices, so that does not hurt any more.
I also had to rename all 2.x bwc indices from `index-${version}` to
`unsupported-${version}` to make `OldIndexBackwardCompatibilityIT`
happy.
SearchTemplateRequest to implement CompositeIndicesRequest
Given that SearchTemplateRequest effectively delegates to search when a search is being executed, it should implement the CompositeIndicesRequest interface. The subrequests method should return a single search request. When a search is not going to be executed, because we are in simulate mode, there are no inner requests, and there are no corresponding indices to that request either.
Closes #21747
Set lucene version to 6.4.0-snapshot-ec38570 and update all the sha1s/licenses
Fix invalid combo after upgrade in query_string query: split_on_whitespace=false is disallowed if auto_generate_phrase_queries=true
Adapt the expectations of some tests to the new format of the Lucene explain output
Lucene 6.2 added index and query support for numeric ranges. This commit adds a new RangeFieldMapper for indexing numeric (int, long, float, double) and date ranges and creating appropriate range and term queries. The design is similar to NumericFieldMapper in that it uses a RangeType enumerator for implementing the logic specific to each type. The following range types are supported by this field mapper: int_range, float_range, long_range, double_range, date_range.
Lucene does not provide a DocValue field specific to RangeField types so the RangeFieldMapper implements a CustomRangeDocValuesField for handling doc value support.
When executing a Range query over a Range field, the RangeQueryBuilder has been enhanced to accept a new relation parameter for defining the type of query as one of: WITHIN, CONTAINS, INTERSECTS. This provides support for finding all ranges that are related to a specific range in a desired way. As with other spatial queries, DISJOINT can be achieved as a MUST_NOT of an INTERSECTS query.
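As a sketch, using the type names listed above (index, type, and field names are hypothetical), a range field mapping and a range query with a `relation` might look like:
```js
PUT range_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "expected_attendees": {
          "type": "int_range"
        }
      }
    }
  }
}

GET range_index/_search
{
  "query": {
    "range": {
      "expected_attendees": {
        "gte": 100,
        "lte": 200,
        "relation": "within"
      }
    }
  }
}
```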
The Transport#connectToNodeLight concept is confusing, not very flexible, and
not really testable at the unit-test level. This commit cleans up the code used
to connect to nodes and simplifies transport implementations to share more code.
This also allows connecting to nodes with custom profiles if needed; for instance,
future improvements can connect to/from nodes that are non-data nodes without
dedicated bulk and recovery connections.
When Netty listens on a socket, it specifies the established connection
backlog for the socket. On Linux, Netty tries to read the system-wide
configuration for this from /proc/sys/net/core/somaxconn and falls back
to a default value when it can not read this value. This commit grants
Netty permission to read this file so that it can honor the system-wide
configuration for the connection backlog for sockets that it is
listening on. This also removes an obnoxious stack trace that appears
when Netty logging is set to debug logging.
Relates #21840
In the past we ran yaml tests against an internal cluster, which would get restarted after each test failure, hence the client objects needed to eventually be refreshed before each test. That is why we had the initClient method to re-initialize the YamlTestClient in the execution context. We ended up though re-initializing the client unconditionally, which is not needed.
Also, ESRestTestCase recreates the RestClient against the external cluster before each test, which is not needed given that nothing changes in the external cluster.
This commit removes the initClient method from the yaml tests execution context. The YamlTestClient can be eagerly created before the first yaml test runs and then re-used in subsequent tests. Also api calls to check for nodes versions etc. are moved out of YamlTestClient to ESClientYamlSuiteTestCase. Also the RestClient is now initialized in ESRestTestCase before the first test runs, and kept around afterwards as a static member.
Basically each subclass of ESRestTestCase will have its own RestClient instance, but the client will be shared across the different tests within the same class. The yaml test suite is just a special suite, composed of 600+ tests that are loaded from files, which will share the same client instance.
This change should speed tests up as well, as we don't recreate the RestClient before each single test, and we don't call _cat/nodes either before each single test.
This commit simplifies the handling of fatal errors on the network
layer. The simplification here is to remove the use of a
StringWriter/PrintWriter pair to format the stack trace, removing the
need for the method to declare that it throws a checked IOException.
If a bug occurs in painless compilation (not from a user, but from the
painless infrastructure), a VerifyError may be thrown when compiling the
broken generated class. This commit wraps VerifyErrors in
ScriptException so that useful information is returned to the user,
which can be passed on to the ES team for analysis.
This bug would cause a VerifyError when scripts using the === operator
were comparing a def type against a primitive type since the primitive
type wasn't being appropriately boxed.
Today when handling unreleased versions for backwards compatibility
support, we scatter version constants across the code base and add some
asserts to support removing these constants when the version in question
is actually released. This commit improves this situation, enabling us
to just add a single unreleased version constant that can be renamed
when the version is actually released. This should make maintenance of
these versions simpler.
Relates #21760
NOTE: The result of `?.` and `?:` can't be assigned to primitives. So
`int[] someArray = null; int l = someArray?.length` and
`int s = params.size ?: 100` don't work. Do
`def someArray = null; def l = someArray?.length` and
`def s = params.size ?: 100` instead.
Relates to #21748
* Scripting: Remove groovy scripting language
Groovy was deprecated in 5.0. This change removes it, along with the
legacy default language infrastructure in scripting.
You can use `Debug.explain(someObject)` in painless to throw an
`Error` that can't be caught by painless code and contains an
object's class. This is useful because painless's sandbox doesn't
allow you to call `someObject.getClass()`.
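For example, a hypothetical update request (index, type, and field names are placeholders) can surface the class of a value it inspects:
```js
POST /test/doc/1/_update
{
  "script": {
    "lang": "painless",
    "inline": "Debug.explain(ctx._source.some_field)"
  }
}
```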
Closes #20263
When a fatal error is thrown on the network layer, such an error never
makes its way to the uncaught exception handler. This prevents the node
from being torn down if an out of memory error or other fatal error is
thrown while handling HTTP or transport traffic. This commit adds logic
to ensure that such errors bubble their way up to the uncaught exception
handler, even though Netty tries really hard to swallow everything.
Relates #21720
This should make debugging painless' analysis and code generation a
little easier.
The `toString` implementations mirror the AST somewhat, and look like
`(SSource (SReturn (ENumeric 1)))`.
Today we read a vint from the stream to allocate the size of an array up-front
before we start reading the values. This can be dangerous if, for instance, we read
from a corrupted stream or if some manipulated bytes are sent, for instance from
an attacker or a fuzzer. In most cases we can apply some best effort and
validate the array size to be _sane_ by ensuring we can read at least N bytes
where N is the expected size of the array.
When Groovy detects a bug in its runtime because an internal assertion
was violated, it throws a GroovyBugError. This descends from
AssertionError and if it goes uncaught will land in the uncaught
exception handler and will not deliver any useful information to the
user. This commit wraps GroovyBugErrors in ScriptExceptions so that
useful information is returned to the user.
Implements a null coalescing operator in painless that looks like `?:`. This form was chosen to emulate Groovy's `?:` operator. It is different in that it only coalesces null values, instead of Groovy's `?:` operator which coalesces all falsy values. I believe that makes it the same as Kotlin's `?:` operator. In other languages this operator looks like `??` (C#) and `COALESCE` (SQL) and `:-` (bash).
This operator is lazy, meaning the right hand side is only evaluated at all if the left hand side is null.
This commit exposes the executor service interface from thread
pool. This will enable some high-level concurrency primitives that will
make some code cleaner and simpler.
Relates #21608
We kept `netty_3` as a fallback in the 5.x series but now that master
is 6.0 we don't need it; in other words, all issues coming up with
netty 4 will be blockers for 6.0.
* master: (22 commits)
Add proper toString() method to UpdateTask (#21582)
Fix `InternalEngine#isThrottled` to not always return `false`. (#21592)
add `ignore_missing` option to SplitProcessor (#20982)
fix trace_match behavior for when there is only one grok pattern (#21413)
Remove dead code from GetResponse.java
Fixes date range query using epoch with timezone (#21542)
Do not cache term queries. (#21566)
Updated dynamic mapper section
Docs: Clarify date_histogram bucket sizes for DST time zones
Handle release of 5.0.1
Fix skip reason for stats API parameters test
Reduce skip version for stats API parameter tests
Strict level parsing for indices stats
Remove cluster update task when task times out (#21578)
[DOCS] Mention "all-fields" mode doesn't search across nested documents
InternalTestCluster: when restarting a node we should validate the cluster is formed via the node we just restarted
Fixed bad asciidoc in boolean mapping docs
Fixed bad asciidoc ID in node stats
Be strict when parsing values searching for booleans (#21555)
Fix time zone rounding edge case for DST overlaps
...
There is an issue in the Grok Processor, where trace_match: true does not inject the _ingest._grok_match_index into the ingest-document when there is just one pattern provided. This is due to an optimization in the regex construction. This commit adds a check for when this is the case, and injects a static index value of "0", since there is only one pattern matched (at the first index into the patterns).
To make this clearer, more documentation was added to the grok-processor docs.
Fixes #21371.
* master:
Hack around cluster service and logging race
Do not prematurely shutdown Log4j
Support decimal constants with trailing [dD] in painless (#21412)
In painless suggest a long constant if int won't do (#21415)
Account for different paths for sysctl utilities
[TEST] testRebalancePossible() may not have an assigned node id
Tests: Disable merge in SearchCancellationTests
Tests: clean search scroll at the end of SearchCancellationIT
This adds support to painless for decimal constants with trailing `d` or
`D` to make it compatible with Java. It already supported integer
constants with a trailing `d` or `D` but this adds tests for it.
Closes #21116
In painless we prefer explicit types over implicit ones whereas
groovy is the other way around. Take this groovy code:
```
> 86400000.class
java.lang.Integer
> 864000000000.class
java.lang.Long
```
Painless accepts `86400000` just fine because that is a valid `int`
in the JVM. It rejects `864000000000` as an invalid `int` constant
because, in painless as in Java, `long` constants always end in `L`
or `l`.
To ease the transition from groovy to painless, this changes the
compilation error returned from these invalid constants from:
```
Invalid int constant [864000000000].
```
to
```
Invalid int constant [864000000000]. If you want a long constant then change it to [864000000000L].
```
Inspired by #21313
* master: (516 commits)
Avoid angering Log4j in TransportNodesActionTests
Add trace logging when aquiring and releasing operation locks for replication requests
Fix handler name on message not fully read
Remove accidental import.
Improve log message in TransportNodesAction
Clean up of Script.
Update Joda Time to version 2.9.5 (#21468)
Remove unused ClusterService dependency from SearchPhaseController (#21421)
Remove max_local_storage_nodes from elasticsearch.yml (#21467)
Wait for all reindex subtasks before rethrottling
Correcting a typo-Maan to Man-in README.textile (#21466)
Fix InternalSearchHit#hasSource to return the proper boolean value (#21441)
Replace all index date-math examples with the URI encoded form
Fix typos (#21456)
Adapt ES_JVM_OPTIONS packaging test to ubuntu-1204
Add null check in InternalSearchHit#sourceRef to prevent NPE (#21431)
Add VirtualBox version check (#21370)
Export ES_JVM_OPTIONS for SysV init
Skip reindex rethrottle tests with workers
Make forbidden APIs be quieter about classpath warnings (#21443)
...
In the test for reindex and friend's rethrottling feature we were waiting
only for a single reindex sub task to start before rethrottling. This
mostly worked because starting tasks is fast. But it didn't *always* work
and CI found that for us. This fixes the test to wait for all subtasks
to start before rethrottling.
I reproduced this locally semi-consistently with some fairly creative
`Thread.sleep` calls and this test fix fixes the issue even with the
sleeps so I'm fairly sure this will work consistently.
Closes #21446
The method used to be called `isSourceEmpty`, and was renamed to `hasSource`, but the return value never changed. Updated tests and users accordingly.
Closes #21419
Null safe dereferences make handling null or missing values shorter.
Compare without:
```
if (ctx._source.missing != null && ctx._source.missing.foo != null) {
  ctx._source.foo_length = ctx._source.missing.foo.length()
}
```
To with:
```
Integer length = ctx._source.missing?.foo?.length();
if (length != null) {
  ctx._source.foo_length = length
}
```
Combining this with the as of yet unimplemented elvis operator allows
for very concise defaults for nulls:
```
ctx._source.foo_length = ctx._source.missing?.foo?.length() ?: 0;
```
Since you have to start somewhere, we started with null safe dereferences.
Anyway, this is a feature borrowed from groovy. Groovy allows writing to
null values like:
```
def v = null
v?.field = 'cat'
```
And the writes are simply ignored. Painless doesn't support this at this
point because it'd be complex to implement and maybe not all that useful.
There is no runtime cost for this feature if it is not used. When it is
used we implement it fairly efficiently, adding a jump rather than a
temporary variable.
This should also work fairly well with doc values.
If you try to reindex with multiple slices against a node that
doesn't support it we throw an `IllegalArgumentException` so that
`assertVersionSerializable` is ok with it and so that, if this happens
in REST, it comes back as a 400 error.
* Rest client: don't reuse the same HttpAsyncResponseConsumer across multiple retries
Turns out that AbstractAsyncResponseConsumer from apache async http client is stateful and cannot be reused across multiple requests. The failover mechanism was mistakenly reusing that same instance, which can be provided by users, across retries in case nodes are down or return 5xx errors. The downside is that we have to change the signature of two public methods, as HttpAsyncResponseConsumer cannot be provided directly anymore; rather, its factory needs to be provided, which is then used to create one instance of the consumer per request attempt.
Up until now we tested our RestClient against multiple nodes only in a mock environment, where we don't really send http requests. In that scenario we can verify that retries etc. work properly but the interaction with the http client library in a real scenario is different and can catch other problems. With this commit we also add an integration test that sends requests to multiple hosts, and some of them may also get stopped meanwhile. The specific test for pathPrefix was also removed as pathPrefix is now randomly applied by default, hence implicitly tested. Moved also a small test method that checked the validity of the path argument to the unit test RestClientSingleHostTests.
Also increase default buffer limit to 100MB and make it required in default consumer
The default buffer limit used to be 10MB but that proved not to be high enough for scroll requests (see reindex from remote). With this commit we increase the limit to 100MB and make it a bit more visible in the consumer factory.
At one point in the past when moving out the rest tests from core to
their own subproject, we had multiple test classes which evenly split up
the tests to run. However, we simplified this and went back to a single
test runner to have better reproducibility in tests. This change
removes the remnants of that multiplexing support.
Adds support for `?slices=N` to reindex which automatically
parallelizes the process using parallel scrolls on `_uid`. Performance
testing sees a 3x performance improvement for simple docs
on decent hardware, maybe 30% performance improvement
for more complex docs. Still compelling, especially because
clusters should be able to get closer to the 3x than the 30%
number.
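A minimal sketch of a sliced reindex request (the index names and slice count are hypothetical):
```js
POST _reindex?slices=5
{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest"
  }
}
```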
Closes #20624
This adds checks for expected warning headers to the query builder test
infrastructure. Tests that are adding deprecation warnings to the response
headers need to check those, otherwise the abstract base class for the test
class will complain at teardown.
The `IndexService#newQueryShardContext()` method creates a QueryShardContext on
shard `0`, with a `null` reader and that uses `System.currentTimeMillis()` to
resolve `now`. This may hide bugs, since the shard id is sometimes used for
query parsing (it is used to salt random score generation in `function_score`),
passing a `null` reader disables query rewriting and for some use-cases, it is
simply not ok to rely on the current timestamp (eg. percolation). So this pull
request removes this method and instead requires that all call sites provide
these parameters explicitly.
It was 10mb and that was causing trouble when folks reindex-from-remoted
with large documents.
We also improve the error reporting so it tells folks to use a smaller
batch size if they hit a buffer size exception. Finally, adds some docs
to reindex-from-remote mentioning the buffer and giving an example of
lowering the size.
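As a hedged illustration, lowering the scroll batch size via `size` in `source` keeps each response under the buffer limit (host and index names are hypothetical):
```js
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://otherhost:9200"
    },
    "index": "source",
    "size": 100
  },
  "dest": {
    "index": "dest"
  }
}
```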
Closes #21185
Previously Elasticsearch would only use the package name for logging
levels, truncating the package prefix and the class name. This meant
that logger names for Netty were just prefixed by netty3 and netty. We
changed this for Elasticsearch so that it's the fully-qualified class
name now, but never corrected this for Netty. This commit fixes the
logger names for the Netty modules so that their levels are controlled
by the fully-qualified class name.
Relates #21223
Refactored ScriptType to clean up some of the variable and method names. Added more documentation. Deprecated the 'in' ParseField in favor of 'stored' to match the indexed scripts being replaced by stored scripts.
Java 9's exception message when lists have an out of bounds index
is much better than java 8 but the painless code asserted on the
java 8 message. Now it'll accept either.
I'm tempted to weaken the assertion but I like asserting that the
message is readable.
Adds support for indexing into lists and arrays with negative
indexes meaning "counting from the back". So if
`x = ["cat", "dog", "chicken"]` then `x[-1] == "chicken"`.
This adds an extra branch to every array and list access but
some performance testing makes it look like the branch predictor
successfully predicts the branch every time so there isn't a penalty
in execution time for this feature when the index is positive.
When the index is negative performance testing showed the runtime
is the same as writing `x[x.length - 1]`, again, presumably thanks
to the branch predictor.
Those performance metrics were calculated for lists and arrays but
`def`s get roughly the same treatment, though instead of inlining
the test they need to make an invokedynamic call so we don't screw up
maps.
Closes #20870
Lucene 6.3 is expected to be released in the next weeks so it'd be good to give
it some integration testing. I had to upgrade randomized-testing too so that
both Lucene and Elasticsearch are on the same version.
This commit fixes responses to HEAD requests so that the value of the
Content-Length is correct per the HTTP spec. Namely, the value of this
header should be equal to the Content-Length if the request were not a
HEAD request.
This commit also fixes a memory leak on HEAD requests to the main action
that arose from the bytes on a builder not being released due to them
being dropped on the floor to ensure that the response to the main
action did not have a body.
Relates #21123
Versions before 2.0 needed to be told to return interesting fields
like `_parent`, `_routing`, `_ttl`, and `_timestamp`. And they come
back inside a `fields` block which we need to parse.
Closes #21044
This commit upgrades the transport-netty4 module dependency from Netty
version 4.1.5 to version 4.1.6. This is a bug fix release of Netty.
Relates #21051
This allows you to whitelist `localhost:*` or `127.0.10.*:9200`.
It explicitly checks for patterns like `*` in the whitelist and
refuses to start if the whitelist would match everything. Beyond
that the user is on their own designing a secure whitelist.
When running `gradle run`, a developer usually intends to get a running
instance as if they had run elasticsearch from the command line. This is
different than the isolated environment we use for integration testing
plugins. This change switches the run task to use the zip distribution,
so that all modules included in the normal distribution are included.
* Scripting: Add support for booleans in scripts
Since 2.0, booleans have been represented as numeric fields (longs).
However, in scripts, this is odd, since you expect doing a comparison
against a boolean to work. While languages like groovy will auto convert
between booleans and longs, painless does not.
This changes the doc values accessor for boolean fields in scripts to
return Boolean objects instead of Long objects.
closes #20949
* Make Booleans final and remove wrapping of `this` for getValues()
This commit fixes an issue with the handling of the value "keep-alive"
on the Connection header in the Netty 4 HTTP implementation while
handling an HTTP 1.0 request. The issue was using the wrong equals
method to compare an AsciiString instance and a String instance (they
could never be equal). This commit fixes this to use the correct equals
method to compare for content equality.
This commit fixes an issue with the handling of the value "close" on the
Connection header in the Netty 4 HTTP implementation. The issue was
using the wrong equals method to compare an AsciiString instance and a
String instance (they could never be equal). This commit fixes this to
use the correct equals method to compare for content equality.
Relates #20956
Today Elasticsearch limits the number of processors used in computing
thread counts to 32. This was from a time when Elasticsearch created
more threads than it does now and users would run into out of memory
errors. It appears the real cause of these out of memory errors was not
well understood (it's often due to ulimit settings) and so users were
left hitting these out of memory errors on boxes with high core
counts. Today Elasticsearch creates fewer threads (but still a lot) and
we have a bootstrap check in place to ensure that the relevant ulimit is
not too low.
There are some caveats still to having too many concurrent indexing
threads as it can lead to too many little segments, and it's not a
magical go faster knob if indexing is already bottlenecked by disk, but
this limitation is artificial and surprising to users and so it should
be removed.
This commit also increases the lower bound of the max processes ulimit,
to prepare for a world where Elasticsearch instances might be running
with more than the previous cap of 32 processors. With the current settings,
Elasticsearch wants to create roughly 576 + 25 * p / 2 threads, where p
is the number of processors. Add in roughly 7 * p / 8 threads for the GC
threads and a fudge factor, and 4096 should cover us pretty well up to
256 cores.
Relates #20874
Both netty3 and netty4 http implementation printed the default
toString representation of PortRange if ports couldn't be bound.
This commit adds a better default toString method to PortRange and
uses the string representation for the error message in the http
implementations.
Some objects like maps, iterables or arrays of objects can self-reference themselves. This is mostly due to a bug in code, but the XContentBuilder should be able to detect such situations and throw an IllegalArgumentException instead of building objects over and over until a stackoverflow occurs.
closes #20540, closes #19475
Update scripts might want to update the document's `_timestamp` but need a notion of `now()`.
Painless doesn't support any notion of now() since it would make scripts non-pure functions. Yet,
in the update case this is a valid value and we can pass it along with the context to allow the
script to record the timestamp at which the document was updated.
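A sketch of how an update script might use this, assuming the timestamp is exposed to update scripts as `ctx._now` (index, type, and field names are hypothetical):
```js
POST /test/doc/1/_update
{
  "script": {
    "lang": "painless",
    "inline": "ctx._source.last_updated = ctx._now"
  }
}
```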
Relates to #17895
* Replace org.elasticsearch.common.lucene.search.MatchNoDocsQuery with its Lucene version (org.apache.lucene.search.MatchNoDocsQuery)
This change removes the ES version of the match no docs query and replaces it with the Lucene version.
relates #18030
* Add missing change
Today we throw an assertion error if we release an AbstractArray more than once.
Yet, it's recommended to implement close methods such that they can be invoked
more than once. Guaranteed single release calls are hard to implement and some
situations might not be tested, causing for instance `CircuitBreaker` to operate on
corrupted memory stats.
UpdateHelper, MetaDataIndexUpgradeService, and some recovery
stuff.
Move ClusterSettings to nullable ctor parameter of TransportService
so it isn't forgotten.
This change proposes the removal of all non-tcp transport implementations. The
mock transport can be used by default to run tests instead of the local transport; it has
roughly the same performance compared to TCP, or at least is not noticeably slower.
This is a master only change, deprecation notice in 5.x will be committed as a
separate change.
Today SearchContext exposes the current context as a thread local, which makes any kind of sane interface design very, very hard. This PR removes the thread local entirely and instead passes the relevant context anywhere it is needed. This simplifies state management dramatically and will allow for a much leaner SearchContext interface down the road.
Today when parsing a request, Elasticsearch silently ignores incorrect
(including parameters with typos) or unused parameters. This is bad as
it leads to requests having unintended behavior (e.g., if a user hits
the _analyze API and misspells the "tokenizer" parameter then Elasticsearch will
just use the standard analyzer, completely against intentions).
This commit removes lenient URL parameter parsing. The strategy is
simple: when a request is handled and a parameter is touched, we mark it
as such. Before the request is actually executed, we check to ensure
that all parameters have been consumed. If there are remaining
parameters yet to be consumed, we fail the request with a list of the
unconsumed parameters. An exception has to be made for parameters that
format the response (as opposed to controlling the request); for this
case, handlers are able to provide a list of parameters that should be
excluded from tripping the unconsumed parameters check because those
parameters will be used in formatting the response.
Additionally, some inconsistencies between the parameters in the code
and in the docs are corrected.
Relates #20722
The invalid ingest configuration field name used to show itself,
even when it was null, in error messages. Sometimes this does not make
sense.
e.g.
```[null] Only one of [file], [id], or [inline] may be configure```
vs.
```Only one of [file], [id], or [inline] may be configure```
The above deals with three fields, therefore no single property is
responsible.
* master: (1199 commits)
[DOCS] Remove non-valid link to mapping migration document
Revert "Default `include_in_all` for numeric-like types to false"
test: add a test with ipv6 address
docs: clearify that both ip4 and ip6 addresses are supported
Include complex settings in settings requests
Add production warning for pre-release builds
Clean up confusing error message on unhandled endpoint
[TEST] Increase logging level in testDelayShards()
change health from string to enum (#20661)
Provide error message when plugin id is missing
Document that sliced scroll works for reindex
Make reindex-from-remote ignore unknown fields
Remove NoopGatewayAllocator in favor of a more realistic mock (#20637)
Remove Marvel character reference from guide
Fix documentation for setting Java I/O temp dir
Update client benchmarks to log4j2
Changes the API of GatewayAllocator#applyStartedShards and (#20642)
Removes FailedRerouteAllocation and StartedRerouteAllocation
IndexRoutingTable.initializeEmpty shouldn't override supplied primary RecoverySource (#20638)
Smoke tester: Adjust to latest changes (#20611)
...
reindex-from-remote should ignore unknown fields so it is mostly
future compatible. This makes it ignore unknown fields by adding an
option to `ObjectParser` and `ConstructingObjectParser` that, if
enabled, causes them to ignore unknown fields.
Closes #20504
Today we hold on to all possible tokenizers, tokenfilters etc. when we create
an index service on a node. This was mainly done to allow the `_analyze` API to
directly access all these primitives. We fixed this in #19827 and can now get rid of
the AnalysisService entirely and replace it with a simple map like class. This
ensures we don't create a gazillion long living objects that are entirely useless since
they are never used in most of the indices. Also those objects might consume a considerable
amount of memory since they might load stopwords or synonyms etc.
Closes #19828
This commit removes `ByteSizeValue`'s methods that are duplicated (ex: `mbFrac()` and `getMbFrac()`) in order to only keep the `getN` form.
It also renames `mb()` -> `getMb()`, `kb()` -> `getKB()` in order to be more coherent with the `ByteSizeUnit` method names.
* Build: Remove old maven deploy support
This change removes the old maven deploy that we have in parallel to
maven-publish, and makes maven-publish fully work with publishing to
maven local. Using `gradle publishToMavenLocal` should be used to
publish to .m2.
Note that there is an unfortunate hack that means for
zip artifacts we must first create/publish a dummy pom file, and then
follow that with the real pom file. It would be nice to have the pom
file contains packaging=zip, but maven central then requires sources and
javadocs. But our zips are really just attached artifacts, so we already
set the packaging type to pom for our zip files. This change just works
around a limitation of the underlying maven publishing library which
silently skips attached artifacts when the packaging type is set to pom.
relates #20164, closes #20375
* Remove unnecessary extra spacing
This change removes all guice interaction from Transport, HttpServerTransport,
HttpServer and TransportService. All these classes as well as their subclasses
or extended versions configured via plugins are now created by using plain old
bloody java constructors. YAY!
We can now run templates using the `explain` and/or `profile` parameters,
which is interesting when you have defined a complicated template but want to debug it in an easier way than running the full query again.
You can use `explain` parameter when running a template:
```js
GET /_search/template
{
"file": "my_template",
"params": {
"status": [ "pending", "published" ]
},
"explain": true
}
```
You can use `profile` parameter when running a template:
```js
GET /_search/template
{
"file": "my_template",
"params": {
"status": [ "pending", "published" ]
},
"profile": true
}
```
TransportService is such a central part of the core server that replacing
its implementation is risky and can cause serious issues. This change removes the ability to
plug in TransportService but allows registering a TransportInterceptor that enables
plugins to intercept requests on both the sender and the receiver ends. This is a commonly used
and overwritten functionality but encapsulates the custom code in a contained manner.
This change replaces the fields parameter with stored_fields when it makes sense.
This is dictated by the renaming we made in #18943 for the search API.
The following list of endpoints has been changed to use `stored_fields` instead of `fields`:
* get
* mget
* explain
The documentation and the rest API spec has been updated to cope with the changes for the following APIs:
* delete_by_query
* get
* mget
* explain
The `fields` parameter has been deprecated for the following APIs (it is replaced by _source filtering):
* update: the fields are extracted from the _source directly.
* bulk: the fields parameter is used but fields are extracted from the source directly so it is allowed to have non-stored fields.
Some APIs still have the `fields` parameter for various reasons:
* cat.fielddata: the fields parameter relates to the fielddata fields that should be printed.
* indices.clear_cache: used to indicate which fielddata fields should be cleared.
* indices.get_field_mapping: used to filter fields in the mapping.
* indices.stats: get stats on fields (stored or not stored).
* termvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.
* mtermvectors:
* nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.
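For example, a get request that previously used `fields` would now look like this (index, type, and field names are hypothetical):
```js
GET /my_index/my_type/1?stored_fields=field1,field2
```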
Fixes #20155
In 5.x we allowed this with a deprecation warning. This removes the code
added for that deprecation, requiring the cluster name to not be in the
data path.
Resolves #20391
** The default script language is now maintained in `Script` class.
* Added `script.legacy.default_lang` setting that controls the default language for scripts that are stored inside documents (for example percolator queries). This defaults to groovy.
** Added `QueryParseContext#getDefaultScriptLanguage()` that manages the default scripting language. Returns always `painless`, unless loading query/search request in legacy mode then the returns what is configured in `script.legacy.default_lang` setting.
** In the aggregation parsing code added `ParserContext` that also holds the default scripting language like `QueryParseContext`. Most parsers don't have access to `QueryParseContext`. This is for scripts in aggregations.
* The `lang` script field is always serialized (toXContent).
Closes #20122
* master:
Avoid NPE in LoggingListener
Randomly use Netty 3 plugin in some tests
Skip smoke test client on JDK 9
Revert "Don't allow XContentBuilder#writeValue(TimeValue)"
[docs] Remove coming in 2.0.0
Don't allow XContentBuilder#writeValue(TimeValue)
[doc] Remove leftover from CONSOLE conversion
Parameter improvements to Cluster Health API wait for shards (#20223)
Add 2.4.0 to packaging tests list
Docs: clarify scale is applied at origin+offest (#20242)
When Netty 4 was introduced, it was not the default network
implementation. Some tests were constructed to randomly use Netty 4
instead of the default network implementation. When Netty 4 was made the
default implementation, these tests were not updated. Thus, these tests
are randomly choosing between the default network implementation (Netty
4) and Netty 4. This commit updates these tests to reverse the role of
Netty 3 and Netty 4 so that the randomization is choosing between Netty
3 and the default (again, now Netty 4).
Relates #20265
* master:
Increase visibility of deprecation logger
Skip transport client plugin installed on JDK 9
Explicitly disable Netty key set replacement
percolator: Fail indexing percolator queries containing either a has_child or has_parent query.
Make it possible for Ingest Processors to access AnalysisRegistry
Allow RestClient to send array-based headers
Silence rest util tests until the bogusness can be simplified
Remove unknown HttpContext-based test as it fails unpredictably on different JVMs
Tests: Improve rest suite names and generated test names for docs tests
Add support for a RestClient base path
This commit modifies the call sites that allocate a parameterized
message to use a supplier so that allocations are avoided unless the log
level is fine enough to emit the corresponding log message.
This commit upgrades the Netty dependencies from version 4.1.4 to
version 4.1.5. This upgrade brings several bug fixes including the
removal of an obnoxious and scary-looking log message when unsafe is
explicitly disabled.
Relates #20222
Instead of get, set and remove we do get, remove and then set to avoid type conflicts in IngestDocument.
If the set still fails we try to restore the original field in the ingest document.
Closes #19892
Netty3/4 TcpTransport implementations were creating thread factories with an "http_server" thread prefix whereas they should start with "transport_server" and leave the "http_server" prefix to the HttpServerTransport implementations.
The network types in use on a cluster can be useful information to have,
so this commit adds aggregate metrics for the network types in use in a
cluster to the cluster stats.
Relates #20144
The Netty 4 HTTP server pipeline tests contains two different test
cases. The general idea behind these tests is to submit some requests to
a Netty 4 HTTP server, one test with pipelining enabled and another test
with pipelining disabled. These requests are submitted to two endpoints,
one with a path like /{id} and another with a path like /slow with a
query string parameter sleep. This parameter tells the request handler
how long to sleep for before replying. The idea is that in the case of
the pipelining enabled tests, the requests should come back exactly in
the order submitted, even with some of the requests hitting the slow
endpoint with random sleep durations; this is the guarantee that
pipelining provides. And in the case of the pipelining disabled tests,
requests were randomly submitted to /{id} and /slow with sleep
parameters starting at 600ms and increasing by 100ms for each slow
request constructed. We would expect the requests to come back with the
all the responses to the /{id} requests first because these requests
will execute instantaneously, and then the responses to the /slow
requests. Further, it was expected that the slow requests would come
back ordered by the length of the sleep, the thinking being that 100ms
should be enough of a difference between each request that we would
avoid any race conditions. Sadly, this is not the case, the threads do
sometimes hit race conditions.
This commit modifies the HTTP server pipelining tests to address this
race condition. The modification is that the query string parameter on
the /slow endpoint is removed in favor of just submitting requests to
the path /slow/{id}, where id is just used as a marker to distinguish each
request. The server chooses a random sleep of at least 500ms for each
request on the slow path. The assertion here then is that the /{id}
responses arrive first, then the /slow responses. We cannot make an
assertion on the order of the responses, but we can assert that we did
see every expected response.
Relates #19845
This change adds a special field named _none_ that allows to disable the retrieval of the stored fields in a search request or in a TopHitsAggregation.
To completely disable stored fields retrieval (including disabling metadata fields retrieval such as _id or _type) use _none_ like this:
```
POST _search
{
"stored_fields": "_none_"
}
```
Currently, bulk item requests can be any ActionRequest, this commit
restricts bulk item requests to DocumentRequest. This simplifies
handling failures during bulk requests. Additionally, a new enum
is added to DocumentRequest to represent the intended operation
to be performed by a document request. Now, index operation type
also uses the new enum to specify whether the request should
create or index a document.
Reindex intentionally tries to fail the search operation to make sure
that the exception flows back. The exception message changed so we
should catch the appropriate exception.
This test was failing in the presence of transport clients. This turns
off transport clients while I fix the test so it doesn't fail for
everyone in the meantime.
The big change here is cleaning up the `TaskListResponse` so it doesn't
have a breaky `toString` implementation. That was causing the reindex
tests to break.
Also removed `NetworkModule#registerTaskStatus` which is part of the
Plugin API. Use `Plugin#getNamedWriteables` instead.
Parsing a search request is currently split up among a number of
classes, using multiple public static methods, which take multiple
registries of elements that may appear in the search request like query
parsers and aggregations. This change begins consolidating all this code
by collapsing the registries normally used for parsing search requests
into a single SearchRequestParsers class. It is also made available to
plugin services to enable templating of search requests. Eventually all
of the actual parsing logic should move to the class, and the registries
should be hidden, but for now they are at least co-located to reduce the
number of objects that must be passed around.
Squashes all the subpackages of `org.elasticsearch.rest.action` down to
the following:
* `o.e.rest.action.admin` - Administrative actions
* `o.e.rest.action.cat` - Actions that make tables for `grep`ing
* `o.e.rest.action.document` - Actions that act on documents
* `o.e.rest.action.ingest` - Actions that act on ingest pipelines
* `o.e.rest.action.search` - Actions that search
I'm tempted to merge `search` into `document` but the `document`
package feels fairly complete as is and `Suggest` isn't actually always
about documents either....
I'm also tempted to merge `ingest` into `admin.cluster` because the
latter contains the actions for dealing with stored scripts.
I've moved the `o.e.rest.action.support` into `o.e.rest.action`.
I've also added `package-info.java`s to all packages in `o.e.rest`. I
figure if the package is too small to deserve a `package-info.java` file
then it is too small to deserve to be a package....
Also fixes checkstyle in all moved classes.
The term persisted task was used to indicate that a task should store its results upon its completion. We would like to use this term to indicate that a task can survive restart of nodes instead. This commit removes usages of the term "persist" when it means store results.
As the most complicated `FetchSubPhase`, highlighting gets its own package
(`o.e.search.fetch.subphase.highlight`). No other `FetchSubPhase`s get their
own package. Instead they all reside together in `o.e.search.fetch.subphase`.
Add package descriptions to `o.e.search.fetch` and subpackages.
Previously, we only caught subclasses of Exception, however, there are
some cases when Errors are thrown instead of Exceptions. These two cases
are `assert` and when a class cannot be found.
Without this change, the exception would bubble up to the
`uncaughtExceptionHandler`, which would in turn, exit the JVM
(related: #19923).
A note of difference between regular Java asserts and Groovy asserts,
from http://docs.groovy-lang.org/docs/latest/html/documentation/core-testing-guide.html
"Another important difference from Java is that in Groovy assertions are
enabled by default. It has been a language design decision to remove the
possibility to deactivate assertions."
In the event that a user uses an assert such as:
```groovy
def bar=false; assert bar, "message";
```
The GroovyScriptEngineService throws a NoClassDefFoundError being unable
to find the `java.lang.StringBuffer` class. It is *highly* recommended
that any Groovy scripting user switch to using regular exceptions rather
than unconfigurable Groovy assertions.
Resolves #19806
Update by query is a shortcut to search + index. UpdateByQueryRequest gets serialized on the transport layer only when the transport client is used. Given that the request supports wildcards and allows to set its indices, it should implement IndicesRequest.Replaceable. Implementing CompositeIndicesRequest makes little sense as the indices that the request works against depend entirely on the inner search request.
Delete by query is a shortcut to search + delete. DeleteByQueryRequest gets serialized on the transport layer only when the transport client is used. Given that the request supports wildcards and allows to set its indices, it should implement IndicesRequest.Replaceable
Fixes two issues:
1. lang-javascript doesn't support `executable` with a `null` `vars`
parameter. The parameter is quite nullable.
2. reindex didn't support script engines whose `unwrap` method wasn't
a noop. This didn't come up for lang-groovy or lang-painless because
both of those `unwrap`s were noops. lang-javascript copies all maps that
it `unwrap`s.
This adds fairly low level unit tests for these fixes but doesn't add
an integration test that makes sure that reindex and lang-javascript
play well together. That'd make backporting this difficult and would
add a fairly significant amount of time to the build for a fairly rare
interaction. Hopefully the unit tests will be enough.
I also reduced the visibility of a couple classes and renamed/consolidated some
test classes for consistency, eg. removing the `Simple` prefix or using the
`<Type>FieldMapperTests` convention for testing field mappers.
Due to a misordering of the HTTP handlers, the Netty 4 HTTP server
mishandles Expect: 100-continue headers from clients. This commit fixes
this issue by ordering the handlers correctly.
Relates #19904
testUnknownObjectException used to generate malformed json objects in some cases, due to the existence of arrays, as it was not closing the injected object correctly. That is why the test was catching JsonParseException among the exceptions that are expected to be thrown. That is fixed by tracking where the new object is placed and placing its end object marker at the right level rather than always at the end.
Also introduced a mechanism to explicitly declare objects that won't cause any exception when they get additional objects injected, so that there is no need to override the method anymore as that caused copy pasting of the whole test method. This also makes sure that changes are reflected in tests, as those inner objects are not skipped but we actually check that what is declared is true (no exceptions get thrown when an additional object is added within them).
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This enforces a limit to the number of scripts that
can be compiled within a minute. A new dynamic setting is added -
`script.max_compilations_per_minute`, which defaults to 15.
If more dynamic scripts are sent, a user will get the following
exception:
```json
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "i",
"node" : "a5V1eXcZRYiIk8lecjZ4Jw",
"reason" : {
"type" : "general_script_exception",
"reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
}
}
],
"caused_by" : {
"type" : "general_script_exception",
"reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
"bytes_wanted" : 0,
"bytes_limit" : 0
}
}
},
"status" : 500
}
```
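Since `script.max_compilations_per_minute` is a dynamic setting, it should be adjustable at runtime; a minimal sketch using the cluster settings API:
```js
PUT _cluster/settings
{
  "transient": {
    "script.max_compilations_per_minute": 30
  }
}
```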
This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.
Relates to #19396
GeoDistance is implemented using a crazy enum that causes issues with the scripting modules. This commit moves all distance calculations to arcDistance and planeDistance static methods in GeoUtils. It also removes unnecessary distance helper methods from ScriptDocValues.GeoPoints.
Some random operations were conditionally performed in the before test, which made tests not repeatable. For instance, take the seed chain to repeat a specific iteration and try to reproduce it: this conditional code would get executed in both cases when trying to isolate the failure, but not among the different iterations (as only the first method/iteration executes it), hence the failure would not reproduce.
Moved the random operations to beforeClass and left the non random part in the before method, which is needed as it depends on some method that can be overridden by subclasses.
Today when we load the Netty plugins, we indirectly cause several Netty
classes to initialize. This is because we attempt to load some classes
by name, and loading these classes is done in a way that triggers a long
chain of class initializers within Netty. We should not do this; it
can lead to log messages before the logger is loaded, and it leads to
initialization in cases when the classes would never be needed (for
example, Netty 3 class initialization is never needed if Netty 4 is
used, and vice versa). This commit avoids this early initialization of
these classes by removing the need for the early loading.
Relates #19819
This commit updates Jackson to the 2.8.1 version, which is more strict when it comes to building objects. It also adds the snakeyaml dependency that was previously shaded in jackson libs.
It also closes #18076
Adds `warnings` syntax to the yaml test that allows you to expect
a `Warning` header that looks like:
```
- do:
warnings:
- '[index] is deprecated'
- quotes are not required because yaml
- but this argument is always a list, never a single string
- no matter how many warnings you expect
get:
index: test
type: test
id: 1
```
These are accessible from the docs with:
```
// TEST[warning:some warning]
```
This should help to force you to update the docs if you deprecate
something. You *must* add the warnings marker to the docs or the build
will fail. While you are there you *should* update the docs to add
deprecation warnings visible in the rendered results.
This commit increases the Netty 3 REST test suite timeout to thirty
minutes. This is to address these tests running slowly after increasing
the number of nodes in the tests to two. This has surfaced that the
tests are heavily impacted by excessive fsyncs from most tests using the
default number of shards of five.
* master:
Fix REST test documentation
[Test] move methods from bwc test to test package for use in plugins (#19738)
package-info.java should be in src/main only.
Split regular histograms from date histograms. #19551
Tighten up concurrent store metadata listing and engine writes (#19684)
Plugins: Make NamedWriteableRegistry immutable and add extenion point for named writeables
Add documentation for the 'elasticsearch-translog' tool
[TEST] Increase time waiting for all shards to move off/on to a node
Fixes the active shard count check in the case of (#19760)
Fixes cat tasks operation in detailed mode
ignore some docker craziness in scccomp environment checks
Currently both aggregations really share the same implementation. This commit
splits the implementations so that regular histograms can support decimal
intervals/offsets and compute correct buckets for negative decimal values.
However the response API is still the same. So for instance both regular
histograms and date histograms will produce an
`org.elasticsearch.search.aggregations.bucket.histogram.Histogram`
aggregation.
The optimization to compute an identifier of the rounded value and the
rounded value itself has been removed since it was only used by regular
histograms, which now do the rounding themselves instead of relying on the
Rounding abstraction.
Closes#8082
Closes#4847
Currently any code that wants to add NamedWriteables to the
NamedWriteableRegistry can do so via guice injection of the registry,
and registering at construction time. However, this makes the registry
complex: it has both get and register methods synchronized, and there is
likely contention on the read side from multiple threads. The
registration has mostly already been contained to guice modules at node
construction time.
This change makes the registry immutable, taking all of the
NamedWriteable readers at construction time. It also allows plugins to
add arbitrary named writeables that they may use in their own transport
actions.
This fixes a bug in the RunningStats class for the matrix stats aggregation module. Doc counts were not being serialized, which means they were only computed the first time the aggregation was computed. This was causing incorrect results when the aggregation was pulled from cache.
After #13834 many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
The work started with #19280, #19302 and #19336; this PR moves the remaining messy tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
It also moves the IndexLookupIT test back (even though it has a good chance of being removed soon) and fixes its tests.
It also changes AbstractQueryTestCase to use custom script plugins in tests.
closes#13837
* Rename operation to result and reworking responses
* Rename DocWriteResponse.Operation enum to DocWriteResponse.Result
These are just easier to interpret names.
Closes#19664
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.cluster` into
`org.elasticsearch.rest.action.admin.cluster`.
Also fixes line length in these packages.
`_reindex` only needs the `_version` if the `dest` has
`"version_type": "external"`. So it shouldn't ask for it unless it does.
`_update_by_query` and `_delete_by_query` always need the `_version`.
Closes#19135
This is cleanup work from #19566, where @nik9000 suggested trying to nuke the isCreated and isFound methods. I've combined nuking the two methods with removing UpdateHelper.Operation in favor of DocWriteResponse.Operation here.
Closes#19631.
Reindex from remote uses the Elasticsearch client which uses Apache
httpasyncclient, which spins up 5 threads by default: 1 as a dispatcher
and 4 more to handle IO. This changes reindex's usage so it only spins
up two threads: 1 dispatcher and 1 to handle IO. It also renames the
threads to "es-client-$taskid-$thread_number". That way if we see any
thread sticking around we can trace it back to the task.
The tests for authentication extend ESIntegTestCase and use a mock
authentication plugin. This way the clients don't have to worry about
running it. Sadly, that means we don't really have good coverage on the
REST portion of the authentication.
This also adds ElasticsearchStatusException, an exception on which you
can set an explicit status. The nice thing about it is that you can
set the RestStatus that it returns to whatever arbitrary status you like
based on the status that comes back from the remote system.
reindex-from-remote then uses it to wrap all remote failures, preserving
the status from the remote Elasticsearch or whatever proxy is between us
and the remote Elasticsearch.
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are rest tests, but they aren't
**the** rest tests.
This adds an extra REST handler for "_ingest/pipeline" so that users do not need to supply "_ingest/pipeline/*" to get all of them.
- Also adds a teardown section to related REST-tests for ingest.
Performing the bulk request shown in #19267 now results in the following:
```
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"create","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":201}
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"noop","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":200}
```
Recently, we have experienced timeouts on our Windows build slaves for
Netty4RestIT. Until we have figured out what's going on, we
increase this test suite's timeout temporarily to ensure this
timeout does not mask other problems.
We disable transitive dependencies in our build plugin
for all dependencies except for the group `org.elasticsearch`.
However, in the reindex plugin we depend on the REST client
and declare its dependencies again which is not necessary
(and led to problems with conflicting versions in #19281).
With this PR we remove the duplicate declaration.
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
Closes#19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
Messy tests with mustache were either moved to core, moved to a rest test or remained untouched if they actually tested mustache.
Also removed tests that were redundant.
After #13834 many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
This commit moves more tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
Fixes CORS handling so that it uses the defaults for http.cors.allow-methods
and http.cors.allow-headers if none are specified in the config.
Closes#19520
We prefer to read the header, but who knows what can happen; maybe headers are filtered out for some reason. We don't want to run into an NPE, so we fall back to auto-detection.
* rethrow script compilation exceptions into ingest configuration exceptions
* update readProcessor to rethrow any exception as an ElasticsearchException
Remove `ParseField` constants used for names where there are no deprecated
names and just use the `String` version of the registration method instead.
This is step 2 in cleaning up the plugin interface for extending
search time actions. Aggregations are next.
This is breaking for plugins because those that register a new query should
now implement `SearchPlugin` rather than `onModule(SearchModule)`.
* Remove outdated aggregation registration method
* Remove AggregationStreams
* Adds StreamInput#readNamedWriteableList and
StreamOutput#writeNamedWriteableList convenience methods (see the
sketch after this list). We strive to make the reading and writing from
the streams terse so they are easier to scan visually.
* Remove PipelineAggregatorStreams
* Remove stream info from InternalAggreation.Type
* Remove InternalAggregation#type
* Remove Streamable from PipelineAggregator
* Remove Streamable from MultiBucketsAggregation.Bucket
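To illustrate the convenience methods, here is a minimal sketch of how they pair up in a `Writeable` implementation; the holder class and field names are hypothetical:
```java
import java.io.IOException;
import java.util.List;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.index.query.QueryBuilder;

// hypothetical holder for a list of NamedWriteables (QueryBuilder is one)
public class ExampleHolder implements Writeable {
    private final List<QueryBuilder> queries;

    public ExampleHolder(StreamInput in) throws IOException {
        // reads the list size, then each element through the NamedWriteableRegistry
        this.queries = in.readNamedWriteableList(QueryBuilder.class);
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        // writes the list size, then each element's name and contents
        out.writeNamedWriteableList(queries);
    }
}
```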
The new method accepts the usual parameters (method, endpoint, params, entity and headers) plus a response listener and an async response consumer. Shortcut methods are also added that don't require params or entity, and make the async response consumer optional.
There are a few relevant api changes as a consequence of the move to the async client that affect sync methods:
- Response doesn't implement Closeable anymore; responses don't need to be closed
- performRequest throws Exception rather than just IOException, as that is the exception that we get from the FutureCallback#failed method in the async http client
- ssl configuration is a bit simpler: one only needs to call setSSLStrategy from a custom HttpClientConfigCallback, which doesn't end up overriding any other default around connection pooling (it used to happen with the sync client and made ssl configuration more complex)
Relates to #19055
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.
Preinstalled modules are:
* transport-netty
* lang-mustache
* reindex
* percolator
The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.
Closes#19412
* Removed `Template` class and unified script & template parsing logic. Templates are scripts, so they should be defined as a script. Unless there will be separate template infrastructure, templates should share as much code as possible with scripts.
* Removed ScriptParseException in favour of ElasticsearchParseException
* Moved TemplateQueryBuilder to lang-mustache module because this query is hard coded to work with mustache only
creation in the REST tests, as we no longer need it due
to index creation now waiting for active shard copies
before returning (by default, it waits for the primary of
each shard, which is the same as ensuring yellow health).
Relates #19450
integration tests, as they are no longer needed with
index creation now waiting for shards to be started before
returning from the index creation call (by default, it waits
for the primary of each shard to be started before returning,
which is what ensureYellow() was ensuring anyway).
Closes#19452
Relates #19450
Also introduced a `Processor.Parameters` class that is a holder for several services processors rely on;
the IngestPlugin#getProcessors(...) method has been changed to accept `Processor.Parameters` instead
of each service separately.
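A minimal sketch of the resulting registration, assuming a hypothetical `MyProcessor`; the factory lambda receives the registry of factories, the tag, and the config:
```java
import java.util.Collections;
import java.util.Map;

import org.elasticsearch.ingest.Processor;
import org.elasticsearch.plugins.IngestPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyIngestPlugin extends Plugin implements IngestPlugin {
    @Override
    public Map<String, Processor.Factory> getProcessors(Processor.Parameters parameters) {
        // parameters is the holder for the services processors rely on
        // (environment, script service, etc.); MyProcessor is hypothetical
        return Collections.singletonMap("my_processor",
                (factories, tag, config) -> new MyProcessor(tag));
    }
}
```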
This commit renames the Netty 3 transport module from transport-netty to
transport-netty3. This is to make room for a Netty 4 transport module,
transport-netty4.
Relates #19439
Today `node.mode` and `node.local` serve almost the same purpose, they
are a shortcut for `discovery.type` and `transport.type`. If `node.local: true`
or `node.mode: local` is set elasticsearch will start in _local_ mode which means
only nodes within the same JVM are discovered and a non-network based transport
is used. The _local_ mode is only really used in tests or if nodes are embedded.
For both embedding and tests, explicit configuration via `discovery.type` and `transport.type`
should be preferred.
This change removes all usage of these settings and by default doesn't
configure a default transport implementation since netty is now a module. Yet, to make
the user experience seamless, plugins or modules can set a `http.type.default` and
`transport.type.default`. Plugins set these via `PluginService#additionalSettings()`,
which enforces _set-once_ semantics and prevents node startup if set multiple times. This means
that our distributions will just start up with the netty transport since it's packaged as a
module, unless `transport.type` or `http.transport.type` is explicitly set.
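As a rough sketch, a transport module can declare itself as the default like this (assuming a `netty4` type name):
```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;

public class Netty4TransportPlugin extends Plugin {
    @Override
    public Settings additionalSettings() {
        // set-once defaults: node startup fails if two plugins try to set these
        return Settings.builder()
                .put("http.type.default", "netty4")
                .put("transport.type.default", "netty4")
                .build();
    }
}
```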
This change also found a bunch of bugs, since several NamedWriteables were not registered if a
transport client was used. Now that we don't rely on the `node.mode` leniency, which was inherited
instead of set explicitly, `TransportClient` uses `AssertingLocalTransport`, which detects these problems since it serializes all messages.
Closes#16234
This change adds a flag which can be set in the esplugin closure in
build.gradle for plugins and modules which contain pieces that must be
published to maven, for use in the transport client. The jar/pom and
source/javadoc jars are moved to a new name that has the suffix
"-client".
I enabled this for the two modules that I know definitely need this;
there may be more. One open question is which groupId to use for the
generated pom.
closes#19411
Some tests still start http implicitly or miss configuring the transport clients correctly.
This commit fixes all remaining tests and adds a dependency on `transport-netty` from
`qa/smoke-test-http` and `modules/reindex` since they need an http server running on the nodes.
This also moves all required permissions for netty into its module and out of core.
Switches most search behavior extensions from push (`onModule(SearchModule)`)
to pull (`implements SearchPlugin`). This effort in general gives plugin
authors a much cleaner view of how to extend Elasticsearch and starts to
set up portions of Elasticsearch as "the plugin API". This commit in
particular does that for search-time behavior like customized suggesters,
highlighters, score functions, and significance heuristics.
It also switches most such customization to being done at search module
construction time which is much, much easier to reason about from a testing
perspective. It also helps significantly in the process of de-guice-ing
Elasticsearch's startup.
There are at least two major search time extensions that aren't covered in
this commit that will simply have to wait for the next commit on the topic
because this one has already grown large: custom aggregations and custom
queries. These will likely live in the same SearchPlugin interface as well.
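A sketch of the pull model for one of these extension points; the highlighter name and implementation are hypothetical:
```java
import java.util.Collections;
import java.util.Map;

import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.SearchPlugin;
import org.elasticsearch.search.fetch.subphase.highlight.Highlighter;

public class MySearchPlugin extends Plugin implements SearchPlugin {
    @Override
    public Map<String, Highlighter> getHighlighters() {
        // pulled once at SearchModule construction time; MyHighlighter is hypothetical
        return Collections.singletonMap("my-highlighter", new MyHighlighter());
    }
}
```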
If there are percolator queries containing `range` queries with ranges based on the current time then this can lead to incorrect results if the `percolate` query gets cached. These ranges are changing each time the `percolate` query gets executed and if this query gets cached then the results will be based on how the range was at the time when the `percolate` query got cached.
The ExtractQueryTermsService has been renamed `QueryAnalyzer` and now only deals with analyzing the query (extracting terms and deciding if the entire query is a verified match). The `PercolatorFieldMapper` is responsible for adding the right fields based on the analysis the `QueryAnalyzer` has performed, because this is highly dependent on the field mappings. The `PercolatorFieldMapper` is also responsible for creating the percolate query.
This exposes a method to start an action and return a task from
`NodeClient`. This allows reindex to use the injected `Client` rather
than requiring `TransportAction`s to be injected.
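The shape this enables, sketched with hypothetical action/request/response types (the exact `NodeClient` signature is an assumption based on the description above):
```java
// MyAction, MyRequest and MyResponse are hypothetical; the point is that the
// caller gets the Task back instead of injecting the TransportAction itself
Task task = nodeClient.executeLocally(MyAction.INSTANCE, new MyRequest(),
        new ActionListener<MyResponse>() {
            @Override
            public void onResponse(MyResponse response) {
                // handle completion
            }

            @Override
            public void onFailure(Exception e) {
                // handle failure
            }
        });
```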
After #13834 many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
This commit moves more tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
This commit moves back some messy tests that had been placed in the lang-groovy module in https://github.com/elastic/elasticsearch/pull/13834. It removes the dependency on the Groovy plugin and changes the tests back to integration tests (IT suffix).
It also changes the current MockScriptEngine and MockScriptPlugin to make them easier to use.
* master: (192 commits)
[TEST] Fix rare OBOE in AbstractBytesReferenceTestCase
Reindex from remote
Rename writeThrowable to writeException
Start transport client round-robin randomly
Reword Refresh API reference (#19270)
Update fielddata.asciidoc
Fix stored_fields message
Add missing footer notes in mapper size docs
Remove BucketStreams
Add doc values support to the _size field in the mapper-size plugin
Bump version to 5.0.0-alpha5.
Update refresh.asciidoc
Update shrink-index.asciidoc
Change Debian repository for Vagrant debian-8 box
[TEST] fix test to account for internal empty reference optimization
Upgrade to netty 3.10.6.Final (#19235)
[TEST] fix histogram test when extended bounds overlaps data
Remove redundant modifier
Simplify TcpTransport interface by reducing send code to a single send method (#19223)
Fix style violation in InstallPluginCommand.java
...
This adds a remote option to reindex that looks like
```
curl -XPOST 'localhost:9200/_reindex?pretty' -d'{
  "source": {
    "remote": {
      "host": "http://otherhost:9200"
    },
    "index": "target",
    "query": {
      "match": {
        "foo": "bar"
      }
    }
  },
  "dest": {
    "index": "target"
  }
}'
```
This reindex has all of the features of local reindex:
* Using queries to filter what is copied
* Retry on rejection
* Throttle/rethrottle
The big advantage of this version is that it goes over the HTTP API
which can be made backwards compatible.
Some things are different:
The query field is sent directly to the other node rather than parsed
on the coordinating node. This should allow it to support constructs
that are invalid on the coordinating node but are valid on the target
node. Mostly, that means old syntax.
This commit renames writeThrowable to writeException. The situation here
stems from the fact that the StreamOutput method for serializing
Exceptions needs to accept Throwables too as Throwables can be the cause
of serialized Exceptions. Yet, we do not serialize Throwables in the
Error sub-hierarchy in a way that they can be deserialized into their
initial type. This leads to an asymmetry in the StreamOutput method for
serializing Exceptions and the StreamInput method for reading
Exceptions. Namely, the former will accept Throwables but the latter
will only return Exceptions. A goal with the stream methods has always
been symmetry in the method names so that serialization/deserialization
routines appear symmetrical in code. It is this asymmetry on the
input/output types for Exceptions on StreamOutput/StreamInput that
clashes with the desired symmetry of naming. Despite this, we should
favor symmetry in the naming of the methods. This commit renames
StreamOutput#writeThrowable to StreamOutput#writeException which leaves
us with Exception StreamInput#readException and void
StreamOutput#writeException(Throwable).
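The resulting symmetry, in a small sketch (the helper class is illustrative):
```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

public final class ExceptionStreams {
    // the write side accepts any Throwable, e.g. an Error in a cause chain
    static void write(StreamOutput out, Throwable t) throws IOException {
        out.writeException(t);
    }

    // the read side only ever hands back an Exception
    static Exception read(StreamInput in) throws IOException {
        return in.readException();
    }
}
```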
Today throughout the codebase, catch throwable is used with reckless
abandon. This is dangerous because the throwable could be a fatal
virtual machine error resulting from an internal error in the JVM, or an
out of memory error or a stack overflow error that leaves the virtual
machine in an unstable and unpredictable state. This commit removes
catch throwable from the codebase and removes the temptation to use it
by modifying listener APIs to receive instances of Exception instead of
the top-level Throwable.
Relates #19231
Rename `fields` to `stored_fields` and add `docvalue_fields`
`stored_fields` parameter will no longer try to retrieve fields from the _source but will only return stored fields.
`fields` will throw an exception if the user uses it.
Add `docvalue_fields` as an adjunct to `fielddata_fields` which is deprecated. `docvalue_fields` will try to load the value from the docvalue and fallback to fielddata cache if docvalues are not enabled on that field.
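On the Java side this maps onto the search source roughly as follows (a sketch; the builder method names are assumed to mirror the new request parameters):
```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.matchAllQuery())
        .storedField("title")      // only stored fields, never the _source fallback
        .docValueField("price");   // doc values, falling back to fielddata if needed
```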
Closes#18943
BytesReference should be a really simple interface, yet it has a gazillion
ways to achieve the same thing. Methods like `#hasArray`, `#toBytesArray`, `#copyBytesArray`,
`#toBytesRef` and `#bytes` are all really duplicates. This change simplifies the interface
dramatically and makes implementations of it much simpler. All array access has been removed
and is streamlined through a single `#toBytesRef` method. Utility methods to materialize a
compact byte array have been added too for convenience.
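A sketch of the streamlined access pattern:
```java
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;

BytesReference ref = new BytesArray(new byte[] {1, 2, 3});
// all array access now funnels through this single method
BytesRef bytesRef = ref.toBytesRef();
```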
We'll migrate to NamedWriteable so we can share code with the rest
of the system. So we can work on this in multiple pull requests without
breaking Elasticsearch in between the commits this change supports
*both* old style `InternalAggregations.stream` serialization and
`NamedWriteable` style serialization. As such it creates about a
half dozen `// NORELEASE` comments that will have to be removed
once the migration is complete.
This also introduces a boolean `transportClient` flag to `SearchModule`
which is used to skip inappropriate registrations for the
transport client while still registering the things it needs. In
this case that means that the `InternalAggregation` subclasses are
registered with the `NamedWriteableRegistry` but the `AggregationBuilder`
subclasses are not.
Finally, this moves aggregation registration from guice configuration
time to `SearchModule` construction time. This will make it simpler to
work with in the future as we further clean up Elasticsearch's
extension points.
We have long worked to capture different partitioning scenarios in our testing infra. This PR adds a new variant, inspired by the Jepsen blogs, which was so far forgotten: namely a partition where one node can still see and be seen by all other nodes. It also updates the resiliency page to better reflect all the work that was done in this area.
Update-By-Query and Delete-By-Query use internal versioning to update/delete documents. But documents can have a version number equal to zero using the external versioning... making the UBQ/DBQ request fail because zero is not a valid version number and they only support internal versioning for now. Sequence numbers might help to solve this issue in the future.
The factory for ingest processors is generic, but that is only for the
return type of the create method. However, the actual consumer of the
factories only cares about Processor, so generics are not needed.
This change removes the generic type from the factory. It also removes
AbstractProcessorFactory, which only existed in order to pull the optional
tag from config. This functionality is moved to the caller of the
factories in ConfigurationUtil, and the create method now takes the tag.
This allows the covariant return of the implementation to work with
tests not needing casts.
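The de-generified factory then has roughly this shape (a sketch; the exact parameter list is an assumption beyond the tag mentioned above):
```java
import java.util.Map;

import org.elasticsearch.ingest.Processor;

public interface Factory {
    // returns plain Processor instead of a generic subtype; the caller now
    // extracts the optional tag from the config and passes it in explicitly
    Processor create(Map<String, Factory> registry, String tag,
                     Map<String, Object> config) throws Exception;
}
```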
Previously all rest handlers would take Client in their injected ctor.
However, it was only to hold the client around for runtime. Instead,
this can be done just once in the HttpService which handles rest
requests, and passed along through the handleRequest method. It also
should always be a NodeClient, and other types of Clients (eg a
TransportClient) would not work anyways (and some handlers can be
simplified in follow ups like reindex by taking NodeClient).
`RestHandler`s are highly tied to actions so registering them in the
same place makes sense.
Removes the need to for plugins to check if they are in transport client
mode before registering a RestHandler - `getRestHandlers` isn't called
at all in transport client mode.
This caused guice to throw a massive fit about the circular dependency
between NodeClient and the allocation deciders. I broke the circular
dependency by registering the actions map with the node client after
instantiation.
This pull request adds two util functions to the Mustache templating engine:
- {{#toJson}}my_map{{/toJson}} to render a Map parameter as a JSON string
- {{#join}}my_iterable{{/join}} to render any iterable (including arrays) as a comma separated list of values like `1, 2, 3`. It's also possible to change the default delimiter (comma) to something else.
closes#18970
These are useful methods in groovy that give you control over
the replacements used:
```
'the quick brown fox'.replaceAll(/[aeiou]/,
m -> m.group().toUpperCase(Locale.ROOT))
```
Stored scripts are pulled from the cluster state, and the current api
requires passing the ClusterState on each call to compile. However, this
means every user of the ScriptService needs to depend on the
ClusterService. Instead, this change makes the ScriptService a
ClusterStateListener. It also simplifies tests a lot, as they no longer
need to create fake cluster states (except when testing stored scripts).
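In sketch form, the service now keeps its own view of the cluster state instead of having callers thread it through every compile call:
```java
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateListener;

public class ScriptServiceSketch implements ClusterStateListener {
    private volatile ClusterState clusterState;

    @Override
    public void clusterChanged(ClusterChangedEvent event) {
        // stored scripts are later looked up from this state
        this.clusterState = event.state();
    }
}
```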
Instead of implementing onModule(ActionModule) to register actions,
this has plugins implement ActionPlugin to declare actions. This is
yet another step in cleaning up the plugin infrastructure.
While I was in there I switched AutoCreateIndex and DestructiveOperations
to be eagerly constructed which makes them easier to use when
de-guice-ing the code base.
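A sketch of the declarative style; `MyAction` and `TransportMyAction` are hypothetical:
```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.plugins.ActionPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyActionPlugin extends Plugin implements ActionPlugin {
    @Override
    public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
        // pairs an action with the transport class that executes it
        return Collections.singletonList(
                new ActionHandler<>(MyAction.INSTANCE, TransportMyAction.class));
    }
}
```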
Previous to this change the unresolved extended bounds was passed into the histogram aggregator which meant extendedbounds.min and extendedbounds.max was passed through as null. This had two effects on the histogram aggregator:
1. If the histogram aggregator was unmapped across all shards, the reduce phase would not add buckets for the extended bounds and the response would contain zero buckets
2. If the histogram aggregator was not unmapped in some shards, the reduce phase might sometimes choose to reduce based on the unmapped shard response and therefore the extended bounds would be ignored.
This change resolves the extended bounds in the unmapped case and solves the above two issues.
Closes#19009
This commit fixes several NPEs caused by implicitly performing a get request for a document that exists with its _source disabled and then trying to access the source. Instead of causing an NPE the following queries will throw an exception with a "source disabled" message (similar behavior as if the document does not exist):
- GeoShape query for pre-indexed shape (throws IllegalArgumentException)
- Percolate query for an existing document (throws IllegalArgumentException)
A Terms query with a lookup will ignore the document if the source does not exist (same as if the document does not exist).
GET and HEAD requests for the document _source will return a 404 if the source is disabled (even if the document exists).
Instead of plugins calling `registerTokenizer` to extend the analyzer
they now instead have to implement `AnalysisPlugin` and override
`getTokenizer`. This lines up extending plugins in with extending
scripts. This allows `AnalysisModule` to construct the `AnalysisRegistry`
immediately as part of its constructor which makes testing analysis
much simpler.
This also moves the default analysis configuration into `AnalysisModule`
which is how search is set up.
Like `ScriptModule`, `AnalysisModule` no longer extends `AbstractModule`.
Instead it is only responsible for building `AnalysisRegistry`. We still
bind `AnalysisRegistry` but we only do so in `Node`. This means it
is available at module construction time so we slowly remove the need to
bind it in guice.
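A sketch of the new extension style; the tokenizer name and factory are hypothetical:
```java
import java.util.Collections;
import java.util.Map;

import org.elasticsearch.index.analysis.TokenizerFactory;
import org.elasticsearch.indices.analysis.AnalysisModule.AnalysisProvider;
import org.elasticsearch.plugins.AnalysisPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyAnalysisPlugin extends Plugin implements AnalysisPlugin {
    @Override
    public Map<String, AnalysisProvider<TokenizerFactory>> getTokenizers() {
        // pulled when AnalysisModule builds the AnalysisRegistry;
        // MyTokenizerFactory is a hypothetical factory with a matching constructor
        return Collections.singletonMap("my_tokenizer", MyTokenizerFactory::new);
    }
}
```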
If we don't care about scoring then for certain candidate matches we can be certain that, if they are a candidate match,
then they will always match. So verifying these queries with the MemoryIndex can be skipped.
* master: (416 commits)
docs: removed obsolete information, percolator queries are no longer loaded into jvm heap memory.
Upgrade JNA to 4.2.2 and remove optionality
[TEST] Increase timeouts for Rest test client (#19042)
Update migrate_5_0.asciidoc
Add ThreadLeakLingering option to Rest client tests
Add a MultiTermAwareComponent marker interface to analysis factories. #19028
Attempt at fixing IndexStatsIT.testFilterCacheStats.
Fix docs build.
Move templates out of the Search API, into lang-mustache module
revert - Inline reroute with process of node join/master election (#18938)
Build valid slices in SearchSourceBuilderTests
Docs: Convert aggs/misc to CONSOLE
Docs: migration notes for _timestamp and _ttl
Group client projects under :client
[TEST] Add client-test module and make client tests use randomized runner directly
Move upgrade test to upgrade from version 2.3.3
Tasks: Add completed to the mapping
Fail to start if plugin tries broken onModule
Remove duplicated read byte array methods
Rename `fields` to `stored_fields` and add `docvalue_fields`
...
This commit moves template support out of the Search API to its own dedicated Search Template API in the lang-mustache module. It provides a new SearchTemplateAction that can be used to render templates before delegating to the usual Search API. The current REST endpoints are identical, but the Render Search Template endpoint now uses the same Search Template API with a new "simulate" option. When this option is enabled, the Search Template API only renders the template and returns immediately, without executing the search.
Closes#17906
This change adds a MapperPlugin interface which allows pull-style
retrieval of mappers and metadata mappers added by plugins. For now, I
have kept the MapperRegistry, but this should be removed in the future
as it is just a silly container for 2 maps which could themselves be
passed around.
Makes ScriptModule just a plain class that manages building the
ScriptSettings and ScriptService from plugins. When we *need*
to bind ScriptService with guice we bind it in a lambda.
- fix it so that processors with the `ignore_failure` option do not
record their exception in the response
- add more tests; an empty `on_failure` now throws an
exception
This makes this sequence:
```
curl -XDELETE localhost:9200/source,dest?pretty
for i in $( seq 1 100 ); do
  curl -XPOST localhost:9200/source/test -d'{"test": "test"}'; echo
done
curl localhost:9200/_refresh?pretty
curl -XPOST 'localhost:9200/_reindex?pretty&wait_for_completion=false' -d'{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest"
  }
}'
curl 'localhost:9200/_tasks/Jsyd6d9wSRW-O-NiiKbPcQ:237?wait_for_completion&pretty'
```
Return task *AND* the response to the user.
This also renames "result" to "response" in the persisted task info
to line it up with how we name the objects in Elasticsearch.
Previously we used token level lookbehind in the parser. That worked,
but only if the parser didn't have any ambiguity *at all*. Since the
parser has ambiguity it didn't work everywhere. In particular it failed
when parsing blocks in lambdas like `a -> {int b = a + 2; b * b}`.
This moves the hack from the parser into the lexer. There we can use
token lookbehind (same trick) to *insert* semicolons into the token
stream. This works much better for antlr because antlr's prediction
code can work with real tokens.
Also, the lexer is simpler than the parser, so if there is a place
to introduce a hack, that is a better place.
This change removes some unnecessary dependencies from ClusterService
and cleans up ClusterName creation. ClusterService is now not created
by guice anymore.
Painless: Add support for //m
Painless: Add support for //s
Painless: Add support for //i
Painless: Add support for //u
Painless: Add support for //U
Painless: Add support for //l
This means "literal" and is exposed for completeness sake with
the java api.
Painless: Add support for //c
c enables Java's CANON_EQ (canonical equivalence) flag which makes
unicode characters that are canonically equal match. Java's javadoc
gives "a\u030A" being equal to "\u00E5". That is that the "a" code
point followed by the "combining ring above" code point is equal to
the "a with combining ring above" code point.
Update docs and add multi-flag test
Whitelist most of the Pattern class.
Today we have a push model for registering basically anything. All our extension points
are defined on modules which we pass in to plugins. This is harder to maintain and adds
unnecessary dependencies on the modules itself. This change moves towards a pull model
where the plugin offers a getter kind of method to get the extensions. This will also
help in the future if we need to pass dependencies to the extension points which can
easily be defined on the method as arguments if a pull model is used.
Adds support for the find operator (=~) and the match operator (==~)
to painless's regexes. Also whitelists most of the Matcher class and
documents regex support in painless.
The find operator (=~) returns a boolean that is the result of building
a matcher on the lhs with the Pattern on the RHS and calling `find` on
it. Use it like this:
```
if (ctx._source.last =~ /b/)
```
The match operator (==~) returns boolean like find but instead of calling
`find` on the Matcher it calls `matches`.
```
if (ctx._source.last ==~ /[^aeiou].*[aeiou]/)
```
Finally, if you want the actual matcher you do:
```
Matcher m = /[aeiou]/.matcher(ctx._source.last)
```
Registering a script engine or native scripts still uses Guice today
and is much more complicated than needed. This change moves to a pull
based model where script plugins have to implement a dedicated interface,
`ScriptPlugin`, that defines simple getters returning instances rather than
classes.
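Sketched with a hypothetical engine (the getter shape is an assumption based on the description):
```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.ScriptPlugin;
import org.elasticsearch.script.ScriptEngineService;

public class MyScriptPlugin extends Plugin implements ScriptPlugin {
    @Override
    public ScriptEngineService getScriptEngineService(Settings settings) {
        // returns an instance rather than a class for guice to construct;
        // MyScriptEngine is hypothetical
        return new MyScriptEngine();
    }
}
```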
In 2.0 we added plugin descriptors which require defining a name and
description for the plugin. However, we still have name() and
description() which must be overridden from the Plugin class. This still
exists for classpath plugins. But classpath plugins are mainly for
tests, and even then, referring to classpath plugins with their class is
a better idea. This change removes name() and description(), replacing
the name for classpath plugins with the full class name.
This interface used to have dedicated methods to prevent calling execute
methods. These methods are unnecessary as the checks can simply be
done inside the execute methods itself. This simplifies the interface
as well as its usage.
This adds a get task API that supports GET /_tasks/${taskId} and
removes that responsibility from the list tasks API. The get task
API supports wait_for_completion just as the list tasks API does
but doesn't support any of the list task API's filters. In exchange,
it supports falling back to the .results index when the task isn't
running any more. Like any good GET API it 404s when it doesn't
find the task.
Then we change reindex, update-by-query, and delete-by-query to
persist the task result when wait_for_completion=false. This leads
to the neat behavior that, once you start a reindex with
wait_for_completion=false, you can fetch the result of the task by
using the get task API and see the result when it has finished.
Also rename the .results index to .tasks.
Adds `/regex/` as a regex constructor. A couple of fun points:
1. This makes generic the idea of arbitrary stuff adding a constant.
Both SFunction and LRegex create a statically initialized constant.
Both go through Locals to do this because LRegex isn't directly
iterable from SScript.
2. Differentiating `/` as-in-division from `/` as-in-start-of-regex
is hard. See:
http://www-archive.mozilla.org/js/language/js20-2002-04/rationale/syntax.html#regular-expressions
The javascript folks have a way, way tougher time of it than we do
because they have semicolon insertion. We have the much simpler
delimiter rules. Even with our simpler life we still have to add
a hack to get lexing `/regex/` to work properly. I chose to add
token-level lookbehind because it seems to be a pretty contained hack.
I considered and rejected lexer modes, a lexer member variable,
having the parser set variables on the lexer (this is a fairly common
solution for js, I believe), and moving regex parsing to the parser
level.
3. I've only added a very small subset of java.util.regex to the
whitelist because it is the subset I needed to test LRegex sanely.
More deserves to be added, and maybe more regex syntax like `=~` and
`==~`. Those can probably be added without too much pain.
Add `}` as a statement delimiter, but only in places where it is
otherwise a valid part of the syntax, specifically the end of a block.
We do this by matching but not consuming it. Antlr 4 doesn't have
syntax for this so we have to kind of hack it together by actually
matching the `}` and then seeking backwards in the token stream to
"unmatch" it. This looks reasonably efficient. Not perfect, but way
better than the alternatives.
I tried and rejected a few options:
1. Actually consuming the `}` and piping a boolean all through the
grammar from the last statement in a block to the delimiter. This
ended up being a rather large change and made the grammar way more
complicated.
2. Adding a semantic predicate to delimiter that just does the
lookahead. This doesn't work out well because it doesn't work (I
never figured out why) and because it generates an *amazing*
`adaptivePredict` which makes a super huge DFA. It looks super
inefficient.
Closes#18821
Writeable is better for immutable objects like TimeValue.
Switch to writeZLong which takes up less space than the original
writeLong in the majority of cases. Since we expect negative
TimeValues we shouldn't use
writeVLong.
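In sketch form, for a field that can be negative but is usually small in magnitude:
```java
// zig-zag encoding keeps small negative and positive values compact,
// unlike writeVLong (unsuitable for negatives) or writeLong (always 8 bytes)
out.writeZLong(durationMillis);
long duration = in.readZLong();
```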
Today we use a random source of UUIDs for assigning allocation IDs,
cluster IDs, etc. Yet, the source of randomness for this is not
reproducible in tests. Since allocation IDs end up as keys in hash maps,
this means allocation decisions are not reproducible in tests, and this
leads to non-reproducible test failures. This commit modifies the
behavior of random UUIDs so that they are reproducible under tests. The
behavior for production code is not changed, we still use a true source
of secure randomness but under tests we just use a reproducible source
of non-secure randomness.
It is important to note that there is a test,
UUIDTests#testThreadedRandomUUID that relies on the UUIDs being truly
random. Thus, we have to modify the setup for this test to use a true
source of randomness. Thus, this is one test that will never be
reproducible but it is intentionally so.
Relates #18808
The painless whitelist has a lot of self-checking, in this case, it checks
for missing covariant overrides. It fails on java 9, because LocalDate.getEra()
now returns IsoEra instead of Era: https://bugs.openjdk.java.net/browse/JDK-8072746
Our checker thinks we were lazy with whitelisting :)
This means painless works on java 9 again
* master: (51 commits)
Switch QueryBuilders to new MatchPhraseQueryBuilder
Added method to allow creation of new methods on-the-fly.
more cleanups
Remove cluster name from data path
Remove explicit parallel new GC flag
rehash the docvalues in DocValuesSliceQuery using BitMixer.mix instead of the naive Long.hashCode.
switch FunctionRef over to methodhandles
ingest: Move processors from core to ingest-common module.
Fix some typos (#18746)
Fix ut
convert FunctionRef/Def usage to methodhandles.
Add the ability to partition a scroll in multiple slices. API:
use painless types in FunctionRef
Update ingest-node.asciidoc
compute functional interface stuff in Definition
Use method name in bootstrap check might fork test
Make checkstyle happy (add Lookup import, line length)
Don't hide LambdaConversionException and behave like real javac compiled code when a conversion fails. This works anyways, because fallback is allowed to throw any Throwable
Pass through the lookup given by invokedynamic to the LambdaMetaFactory. Without it real lambdas won't work, as their implementations are private to script class
checkstyle have your upper L
...
Folded the grok processor into the ingest-common module.
The rest tests have been moved to the ingest-common module as well, because these tests don't run in the rest-api-spec module but in the distribution:integ-test-zip module,
and adding a test plugin there felt just wrong to me. I think this is ok. I left a tiny ingest rest test behind there that tests with an empty pipeline.
Removed messy tests, these tests were already covered in the rest tests
Added an ingest test plugin to the test infra so that each module testing integration with ingest doesn't need to write its own plugin
Moved reindex ingest tests to qa module
Closes#18490
This commit refactors the handling of thread pool settings so that the
individual settings can be registered rather than registering the top
level group. With this refactoring, individual plugins must now register
their own settings for custom thread pools that they need, but a
dedicated API is provided for this in the thread pool module (see the
sketch after the list below). This
commit also renames the prefix on the thread pool settings from
"threadpool" to "thread_pool". This enables a hard break on the settings
so that:
- some of the settings can be given more sensible names (e.g., the max
number of threads in a scaling thread pool is now named "max" instead
of "size")
- change the soft limit on the number of threads in the bulk and
indexing thread pools to a hard limit
- the settings names for custom plugins for thread pools can be
prefixed (e.g., "xpack.watcher.thread_pool.size")
- remove dynamic thread pool settings
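A sketch of the dedicated registration API mentioned above; the pool name, sizes, and the builder's constructor arguments are assumptions:
```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.threadpool.ExecutorBuilder;
import org.elasticsearch.threadpool.FixedExecutorBuilder;

public class MyPlugin extends Plugin {
    @Override
    public List<ExecutorBuilder<?>> getExecutorBuilders(Settings settings) {
        // registers settings like "xpack.watcher.thread_pool.size" for a fixed pool
        return Collections.singletonList(new FixedExecutorBuilder(
                settings, "watcher", 4, 1000, "xpack.watcher.thread_pool"));
    }
}
```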
Relates #18674
This adds support for setting the refresh request parameter to
`wait_for` in the `index`, `delete`, `update`, and `bulk` APIs. When
`refresh=wait_for` is set those APIs will not return until their
results have been made visible to search by a refresh.
Also it adds a `forced_refresh` field to the response of `index`,
`delete`, `update`, and to each item in a bulk response. This will
be true for requests with `?refresh` or `?refresh=true` and will be
true for some requests (see below) with `refresh=wait_for` but ought
to otherwise always be false.
`refresh=wait_for` is implemented as a list of
`Tuple<Translog.Location, Consumer<Boolean>>`s in the new `RefreshListeners`
class that is managed by `IndexShard`. The dynamic, index scoped
`index.max_refresh_listeners` setting controls a maximum number of
listeners allowed in any shard. If more than that many listeners
accumulate in the engine then a refresh will be forced, the thread that
adds the listener will be blocked until the refresh completes, and then the
listener will be called with a `forcedRefresh` flag so it knows that it was
the "straw that broke the camel's back". These listeners are only used by
`refresh=wait_for` and that flag manifests itself as `forced_refresh` being
`true` in the response.
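With the Java API this looks roughly like the following sketch (using the `setRefresh(RefreshPolicy)` method mentioned in the commit notes below; the `forcedRefresh` getter name is an assumption):
```java
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.WriteRequest.RefreshPolicy;

// blocks until the indexed document is visible to search
IndexResponse response = client.prepareIndex("test", "test", "1")
        .setSource("field", "value")
        .setRefresh(RefreshPolicy.WAIT_UNTIL)
        .get();
boolean forced = response.forcedRefresh(); // assumed getter for forced_refresh
```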
About half of this change comes from piping async-ness down to the appropriate
layer in a way that is compatible with the ongoing work on sequence ids.
Closes#1063
You can look up the winding story of all the commits here:
https://github.com/elastic/elasticsearch/pull/17986
Here are the commit messages in case they are interesting to you:
commit 59a753b89109828d2b8f0de05cb104fc663cf95e
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 10:18:23 2016 -0400
Replace a method reference with implementing an interface
Saves a single allocation and forces more commonality
between the WriteResults.
commit 31f7861a85b457fb7378a6f27fa0a0c171538f68
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 10:07:55 2016 -0400
Revert "Replace static method that takes consumer with delegate class that takes an interface"
This reverts commit 777e23a6592c75db0081a53458cc760f4db69507.
commit 777e23a6592c75db0081a53458cc760f4db69507
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 09:29:35 2016 -0400
Replace static method that takes consumer with delegate class that takes an interface
Same number of allocations, much less code duplication.
commit 9b49a480ca9587a0a16ebe941662849f38289644
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Jun 6 08:25:38 2016 -0400
Patch from boaz
commit c2bc36524fda119fd0514415127e8901d94409c8
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:46:27 2016 -0400
Fix docs
After updating to master we are actually testing them.
commit 03975ac056e44954eb0a371149d410dcf303e212
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:20:11 2016 -0400
Cleanup after merge from master
commit 9c9a1deb002c5bebb2a997c89fa12b3d7978e02e
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:09:14 2016 -0400
Breaking changes notes
commit 1c3e64ae06c07a85f7af80534fab88279adb30b4
Merge: 9e63ad6 f67e580
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 14:00:05 2016 -0400
Merge branch 'master' into block_until_refresh2
commit 9e63ad6de52d0b28f0b6d7203721baf1ebf6f56b
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 13:21:27 2016 -0400
Test for TransportWriteAction
commit 522ecb59d39b3c9e8df0d3b8df34b9e7aeaf0ce9
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:30:18 2016 -0400
Document deprecation
commit 0cd67b947f58867e704a1f0e66928a6fb5a11f11
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:26:23 2016 -0400
Deprecate setRefresh(boolean)
Users should use `setRefresh(RefreshPolicy)` instead.
commit aeb1be3f2c501990b33fb1f8230d496035f498ef
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:12:27 2016 -0400
Remove checkstyle suppression
It is fixed
commit 00d09a9caa638b6f90f4896b5502dd98d8fad56e
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:08:28 2016 -0400
Improve comment
commit 788164b898a6ee2878a273961230122b7386c3c9
Author: Nik Everett <nik9000@gmail.com>
Date: Thu Jun 2 10:01:01 2016 -0400
S/ReplicatedWriteResponse/WriteResponse/
Now it lines up with WriteRequest.
commit b74cf3fe778352b140355afcaa08d3d4412d749d
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 18:27:52 2016 -0400
Preserve `?refresh` behavior
`?refresh` means the same things as `?refresh=true`.
commit 30f972bdaeaaa0de6fe67746cdb8628aa86f5a8c
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 17:39:05 2016 -0400
Handle hanging documents
If a document is added to the index during a refresh we weren't properly
firing its refresh listener. This happened because the way we detect
whether a refresh makes something visible or not is imperfect. It is
ok because it always errs on the side of thinking that something isn't
yet visible.
So when a document arrives during a refresh the refresh listeners
won't think it made it into a refresh when, often, it does. The way
we work around this is by telling Elasticsearch that it ought to
trigger a refresh if there are any pending refresh listeners even
if there aren't pending documents to update. Lucene short circuits
the refresh so it doesn't take that much effort, but the refresh
listeners still get the signal that a refresh has come in and they
still pick up the change and notify the listener.
This means that the time that a listener can wait is actually slightly
longer than the refresh interval.
commit d523b5702b60c7ba309fb0dcf3cd3a4798f11960
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:34:01 2016 -0400
Explain Integer.MAX_VALUE
commit 4ffb7c0e954343cc1c04b3d7be2ebad66d3a016b
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:27:39 2016 -0400
Fire all refresh listeners in a single thread
Rather than queueing a runnable each.
commit 19606ec3bbe612095df45eba734c5b7eb2709c01
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 14:09:52 2016 -0400
Assert translog ordering
commit 6bb4e5c75e850f4a42518f06fbc955f7ec76d245
Author: Nik Everett <nik9000@gmail.com>
Date: Wed Jun 1 13:17:44 2016 -0400
Support null RefreshListeners in InternalEngine
Just skip using it.
commit 74be1480d6e44af2b354ff9ea47c234d4870b6c2
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 18:02:03 2016 -0400
Move funny ShardInfo hack for bulk into bulk
This should make it easier to understand because it is closer to where it
matters....
commit 2b771f8dabd488e056cfdc9989608d18264ddfb0
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:39:46 2016 -0400
Pull listener out into an inner class with javadoc and stuff
commit 058481ad72019c0492b03a7a4ac32a48673697d3
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:33:42 2016 -0400
Fix javadoc links
commit d2123b1cabf29bce8ff561d4a4c1c1d5b42bccad
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:28:09 2016 -0400
Make more stuff final
commit 8453fc4f7850f6a02fb5971c17a942a3e3fd9f7b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 17:26:48 2016 -0400
Javadoc
commit fb16d2fc7016c1e8e1621d481e8781c7ef43326c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 16:14:48 2016 -0400
Rewrite refresh docs
commit 5797d1b1c4d233c0db918c0d08c21731ddccd05e
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 15:02:34 2016 -0400
Fix forced_refresh flag
It wasn't being set.
commit 43ce50a1de250a9e073a2ca6cbf55c1b4c74b11b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 14:02:56 2016 -0400
Delay translog sync and flush until after refresh
The sync might have occurred for us during the refresh so we
have less work to do. Maybe.
commit bb2739202e084703baf02cfa58f09517598cf14e
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 13:08:08 2016 -0400
Remove duplication in WritePrimaryResult and WriteReplicaResult
commit 2f579f89b4867a880396f2e7fcffc508449ff2de
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 12:19:05 2016 -0400
Clean up registration of RefreshListeners
commit 87ab6e60ca5ba945bf0fba84784b2bbe53506abf
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 31 11:28:30 2016 -0400
Shorten lock time in RefreshListeners
Also use null to represent no listeners rather than an empty list.
This saves allocating a new ArrayList every refresh cycle on every
index.
commit 0d49d9c5720dadfb67da3fa760397bf6d874601c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 10:46:18 2016 -0400
Flip relationship between RefreshListeners and Engine
Now RefreshListeners comes to Engine from EngineConfig.
commit b2704b8a39382953f8f91a9743e894ee289f7514
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:37:58 2016 -0400
Remove unused imports
Maybe I added them?
commit 04343a22647f19304d9dc716b3fac9b183227f63
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:37:52 2016 -0400
Javadoc
commit da1e765678890a02d61d8a29aa433274beb5e00c
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 09:26:35 2016 -0400
Reply with non-null
Also move the fsync and flush to before the refresh listener stuff.
commit 5d8eecd0d904b497844b4c81c46477bd6178ed3a
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 08:58:47 2016 -0400
Remove funky synchronization in AsyncReplicaAction
commit 1ec71eea0f4e1228ae1497d982307be818ef4b65
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 08:01:14 2016 -0400
s/LinkedTransferQueue/ArrayList/
commit 7da36a4ceed2ccf7955138c3b005237fa41efcb4
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 07:46:38 2016 -0400
More cleanup for RefreshListeners
commit 957e9b77007c32ee75dde152c6622bab065d5993
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 24 07:34:13 2016 -0400
/Consumer<Runnable>/Executor/
commit 4d8bf5d4a70dcc56150c8d8d14165cd23d308b3c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 22:20:42 2016 -0400
explain
commit 15d948a348089bb2937eec5ac4e96f3ec67dbe32
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 22:17:59 2016 -0400
Better....
commit dc28951d02973fc03b4d51913b5f96de14b75607
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 21:09:20 2016 -0400
Javadocs and compromises
commit 8eebaa89c0a1ee74982fbe0d56d1485ca2ae09db
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 20:52:49 2016 -0400
Take boaz's changes to their logical conclusion and unbreak important stuff like bulk
commit 7056b96ea412f275005b93e3570bcff895859ed5
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 15:49:32 2016 -0400
Patch from boaz
commit 87be7eaed09a274cc6a99d1a3da81d2d7bf9dd64
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 23 15:49:13 2016 -0400
Revert "Move async parts of replica operation outside of the lock"
This reverts commit 13807ad10b6f5ecd39f98c9f20874f9f352c5bc2.
commit 13807ad10b6f5ecd39f98c9f20874f9f352c5bc2
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 22:53:15 2016 -0400
Move async parts of replica operation outside of the lock
commit b8cadcef565908b276484f7f5f988fd58b38d8b6
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 16:17:20 2016 -0400
Docs
commit 91149e0580233bf79c2273b419fe9374ca746648
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 15:17:40 2016 -0400
Finally!
commit 1ff50c2faf56665d221f00a18d9ac88745904bf5
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 15:01:53 2016 -0400
Remove Translog#lastWriteLocation
I wasn't being careful enough with locks so it wasn't right anyway.
Instead this builds a synthetic Translog.Location when you call
getWriteLocation with much more relaxed equality guarantees. Rather
than being equal to the last Translog.Location returned it is
simply guaranteed to be greater than the last translog returned
and less than the next.
commit 55596ea68b5484490c3637fbad0d95564236478b
Author: Nik Everett <nik9000@gmail.com>
Date: Fri May 20 14:40:06 2016 -0400
Remove listener from shardOperationOnPrimary
Create instead asyncShardOperationOnPrimary which is called after
all of the replica operations are started to handle any async
operations.
commit 3322e26211bf681b37132274ee158ae330afc28b
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 17:20:02 2016 -0400
Increase default maximum number of listeners to 1000
commit 88171a8322a424e624d48960fb4c98dd43e4d671
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 16:40:57 2016 -0400
Rename test
commit 179c27c4f829f2c6ded65967652cf85adaf2ae52
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 16:35:27 2016 -0400
Move refresh listeners into their own class
They still live at the IndexShard level but they live on their
own in RefreshListeners which interacts with IndexShard using a
couple of callbacks and a registration method. This lets us test
the listeners without standing up an entire IndexShard. We still
test the listeners against an InternalEngine, because the interplay
between InternalEngine, Translog, and RefreshListeners is complex
and important to get right.
commit d8926d5fc1d24b4da8ccff7e0f0907b98c583c41
Author: Nik Everett <nik9000@gmail.com>
Date: Tue May 17 11:02:38 2016 -0400
Move refresh listeners into IndexShard
commit df91cde398eb720143a85a8c6fa19bdc3a74e07d
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 16:01:03 2016 -0400
unused import
commit 066da45b08148b266e4173166662fc1b3f66ed53
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 15:54:11 2016 -0400
Remove RefreshListener interface
Just pass a Translog.Location and a Consumer<Boolean> when registering.
commit b971d6d3301c7522b2e7eb90d5d8dd96a77fa625
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 14:41:06 2016 -0400
Docs for setForcedRefresh
commit 6c43be821eaf61141d3ec520f988aad3a96a3941
Author: Nik Everett <nik9000@gmail.com>
Date: Mon May 16 14:34:39 2016 -0400
Rename refresh setter and getter
commit e61b7391f91263a4c4d6107bfbc2a828bbcc805c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 22:48:09 2016 -0400
Trigger listeners even when there is no refresh
Each refresh gives us an opportunity to pick up any listeners we may
have left behind.
commit 0c9b0477085c021f503db775640d25668e02f635
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 20:30:06 2016 -0400
REST
commit 8250343240de7e63118c663a230a7a314807a754
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 19:34:22 2016 -0400
Switch to estimated count
We don't need a linear time count of the number of listeners - a volatile
variable is good enough to guess. It probably undercounts more than it
overcounts but it isn't a huge problem.
commit bd531167fe54f1bde6f6d4ddb0a8de5a7bcc18a2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 18:21:02 2016 -0400
Don't try and set forced refresh on bulk items without a response
NullPointerExceptions are bad. If the entire request fails then the user
has worse problems than "did these force a refresh".
commit bcfded11515af5e0b3c3e36f3c2f73f5cd26512e
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 18:14:20 2016 -0400
Replace LinkedList and synchronized with LinkedTransferQueue
commit 8a80cc70a76375a7593745884cb987535b37ca80
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 17:38:24 2016 -0400
Support for update
commit 1f36966742f851b7328015151ef6fc8f95299af2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 15:46:06 2016 -0400
Cleanup translog tests
commit 8d121bf35eb265b8a0aee9710afeb1b054a113d4
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 15:40:53 2016 -0400
Cleanup listener implementation
Much more testing too!
commit 2058f4a808762c4588309f21b13b677245832f2c
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:45:55 2016 -0400
Pass back information about whether we refreshed
commit e445cb0cb91ebdbcfdbf566696edb2bf1c84a882
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:03:31 2016 -0400
Javadoc
commit 611cbeeaeb458f4b428bfc43a1ee6652adf4baff
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:01:40 2016 -0400
Move ReplicationResponse
Now it is in the same package as its request.
commit 9919758b644fd73895fb88cd6a4909a8387eb2e2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 11:00:14 2016 -0400
Oh boy that wasn't working
commit 247cb483c4459dea8e95e0e3bd2e4bf8d452c598
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 10:29:37 2016 -0400
Basic block_until_refresh exposed to java client
and basic "is it plugged in" style tests.
commit 46c855c9971cb2b748206d2afa6a2d88724be3ba
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 10:11:10 2016 -0400
Move test to own class
commit a5ffd892d0a352ae7e9757f2640fc2a1fa656bf2
Author: Nik Everett <nik9000@gmail.com>
Date: Mon Apr 25 07:44:25 2016 -0400
WIP
commit 213bebb6ece11b85d17e44af9a54fc2e5e332d39
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 21:35:52 2016 -0400
Add refresh listeners
commit a2bc7f30e6d4857a1224ef5a89909b36c8f33731
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 21:11:55 2016 -0400
Return last written location from refresh
commit 85033a87551da89f36a23d4dfd5016db218e08ee
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 20:28:21 2016 -0400
Never reply to replica actions while you have the operation lock
This last thing was causing periodic test failures because we were
replying while we held the operation lock. We could probably get
away with that in most cases, but the tests don't like it, and it
isn't a good idea to do network I/O while holding a lock anyway. So
this prevents it.
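The shape of the fix, as a sketch with hypothetical names and a plain
ReentrantLock standing in for the shard's operation lock: do the work
under the lock, but send the reply strictly after releasing it.
```java
import java.util.concurrent.locks.ReentrantLock;

class ReplyAfterUnlockSketch {
    private final ReentrantLock operationLock = new ReentrantLock();

    void handleReplicaOperation(Runnable operation, Runnable sendReply) {
        operationLock.lock();
        try {
            operation.run(); // mutate the shard while holding the lock
        } finally {
            operationLock.unlock();
        }
        sendReply.run(); // network I/O happens only after the unlock
    }
}
```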
commit 1f25cf35e796835b3827b8a4110e09e5de61784c
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 19:56:18 2016 -0400
Cleanup
commit 52c5f7c3f04710901f503334239a611c0e21c85a
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 19:33:00 2016 -0400
Add a listener to shard operations
commit 5b142dc331214c8eef90587144f4b3f959f9eced
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 18:03:52 2016 -0400
Cleanup
commit 3d22b2d7ceb473db339259452a7c4f117ce86069
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:59:55 2016 -0400
Push the listener into shardOperationOnPrimary
commit 34b378943b8185451acf6350f661c0ad33b5836d
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:48:47 2016 -0400
Doc
commit b42b8da968d42cc7414020c7b199606a5dcce50a
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:45:40 2016 -0400
Don't finish early if the primary finishes early
We use a "fake" pending shard that we resolve when the replicas have
all started.
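A sketch of the "fake" pending-shard trick, under hypothetical names:
start the pending count at 1 so the response cannot fire before every
replica operation has been dispatched, then resolve the placeholder once
dispatch is done.
```java
import java.util.concurrent.atomic.AtomicInteger;

class PendingShardsSketch {
    private final AtomicInteger pending = new AtomicInteger(1); // fake shard
    private final Runnable respond;

    PendingShardsSketch(Runnable respond) { this.respond = respond; }

    void onReplicaStarted()  { pending.incrementAndGet(); }
    void onReplicaFinished() { finishOne(); }

    /** Call once all replica operations have been started. */
    void allReplicasStarted() { finishOne(); } // resolves the placeholder

    private void finishOne() {
        if (pending.decrementAndGet() == 0) {
            respond.run(); // safe: nothing can finish "early" anymore
        }
    }
}
```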
commit 0fc045b56e1e02a48c30383ac50a281d5af7e0b6
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 17:30:06 2016 -0400
Make performOnPrimary async
Instead of returning Tuple<Response, ReplicaRequest> it returns the
ReplicaRequest and takes an ActionListener<Response> as an argument.
We call the listener immediately to preserve backwards compatibility
for now.
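The signature change, in sketch form with simplified stand-in types (the
real ones live in the transport action classes; everything here is
illustrative):
```java
interface ActionListenerSketch<T> { void onResponse(T response); }

class PerformOnPrimarySketch<Request, Response, ReplicaRequest> {
    // Before: both results came back synchronously.
    // Tuple<Response, ReplicaRequest> performOnPrimary(Request request);

    // After: the replica request is still returned synchronously, but the
    // response is delivered through a listener, which may fire later.
    ReplicaRequest performOnPrimary(Request request,
                                    ActionListenerSketch<Response> listener) {
        Response response = compute(request);
        listener.onResponse(response); // called immediately, for bwc, for now
        return deriveReplicaRequest(request);
    }

    Response compute(Request request) { return null; }                 // stub
    ReplicaRequest deriveReplicaRequest(Request request) { return null; } // stub
}
```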
commit 80119b9a26ede96a865af45904c3ac69d5b19b59
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:51:53 2016 -0400
Factor out common code in shardOperationOnPrimary
commit 0642083676702618f900fa842c08802a04c1a53e
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:32:29 2016 -0400
Factor out common code from shardOperationOnReplica
commit 8bdc415fedaaa9f2d0c555590a13ec4699a7c3f7
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:23:28 2016 -0400
Create ReplicatedMutationRequest
Superclass for index, delete, and bulkShard requests.
commit 0f8fa846a2822c4293df32fed18c9b99660b39ff
Author: Nik Everett <nik9000@gmail.com>
Date: Fri Apr 22 16:10:30 2016 -0400
Create TransportReplicatedMutationAction
It is the superclass of replication actions that mutate data: index, delete,
and shardBulk. shardFlush and shardRefresh are replication actions but they
do not extend TransportReplicatedMutationAction because they don't change
the data, only shuffle it around.
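A sketch of the hierarchy the two commits above describe, using empty
stand-in classes with hypothetical names; only the extends relationships
matter here:
```java
class ReplicationRequestSketch {}
class ReplicatedMutationRequestSketch extends ReplicationRequestSketch {}

// Requests that mutate data extend the mutation superclass.
class IndexRequestSketch     extends ReplicatedMutationRequestSketch {}
class DeleteRequestSketch    extends ReplicatedMutationRequestSketch {}
class BulkShardRequestSketch extends ReplicatedMutationRequestSketch {}

// shardFlush and shardRefresh stay on the plain replication side:
// they replicate but do not change the data, only shuffle it around.
class ShardFlushRequestSketch   extends ReplicationRequestSketch {}
class ShardRefreshRequestSketch extends ReplicationRequestSketch {}
```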
* master: (184 commits)
Add back pending deletes (#18698)
refactor matrix agg documentation from modules to main agg section
Implement ctx.op = "delete" on _update_by_query and _reindex
Close SearchContext if query rewrite failed
Wrap lines at 140 characters (:qa projects)
Remove log file
painless: Add support for the new Java 9 MethodHandles#arrayLength() factory (see https://bugs.openjdk.java.net/browse/JDK-8156915)
More complete exception message in settings tests
Use java from path if JAVA_HOME is not set
Fix uncaught checked exception in AzureTestUtils
[TEST] wait for yellow after setup doc tests (#18726)
Fix recovery throttling to properly handle relocating non-primary shards (#18701)
Fix merge stats rendering in RestIndicesAction (#18720)
[TEST] mute RandomAllocationDeciderTests.testRandomDecisions
Reworked docs for index-shrink API (#18705)
Improve painless compile-time exceptions
Adds UUIDs to snapshots
Add test rethrottle test case for delete-by-query
Do not start scheduled pings until transport start
Addressing review comments
...