** The default script language is now maintained in the `Script` class.
* Added a `script.legacy.default_lang` setting that controls the default language for scripts stored inside documents (for example percolator queries). It defaults to `groovy`.
** Added `QueryParseContext#getDefaultScriptLanguage()`, which manages the default scripting language. It always returns `painless`, unless a query/search request is being loaded in legacy mode, in which case it returns whatever the `script.legacy.default_lang` setting is configured to (see the sketch below).
** Added a `ParserContext` to the aggregation parsing code that, like `QueryParseContext`, holds the default scripting language. Most parsers don't have access to `QueryParseContext`, and this is needed for scripts in aggregations.
* The `lang` script field is now always serialized (`toXContent`).
Closes #20122
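A minimal sketch of the lookup described above; the method body and field names are illustrative, not the actual `QueryParseContext` internals:
```java
// Illustrative sketch only -- not the actual QueryParseContext code.
public String getDefaultScriptLanguage() {
    // Queries and search requests loaded in legacy mode (e.g. percolator
    // queries stored by an older version) fall back to the configured legacy
    // default, which is "groovy" unless script.legacy.default_lang is set.
    return parsingLegacyContent ? legacyDefaultScriptLanguage : "painless";
}
```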
* master:
Avoid NPE in LoggingListener
Randomly use Netty 3 plugin in some tests
Skip smoke test client on JDK 9
Revert "Don't allow XContentBuilder#writeValue(TimeValue)"
[docs] Remove coming in 2.0.0
Don't allow XContentBuilder#writeValue(TimeValue)
[doc] Remove leftover from CONSOLE conversion
Parameter improvements to Cluster Health API wait for shards (#20223)
Add 2.4.0 to packaging tests list
Docs: clarify scale is applied at origin+offset (#20242)
When Netty 4 was introduced, it was not the default network
implementation. Some tests were constructed to randomly use Netty 4
instead of the default network implementation. When Netty 4 was made the
default implementation, these tests were not updated. Thus, these tests
are randomly choosing between the default network implementation (Netty
4) and Netty 4. This commit updates these tests to reverse the role of
Netty 3 and Netty 4 so that the randomization is choosing between Netty
3 and the default (again, now Netty 4).
Relates #20265
* master:
Increase visibility of deprecation logger
Skip transport client plugin installed on JDK 9
Explicitly disable Netty key set replacement
percolator: Fail indexing percolator queries containing either a has_child or has_parent query.
Make it possible for Ingest Processors to access AnalysisRegistry
Allow RestClient to send array-based headers
Silence rest util tests until the bogusness can be simplified
Remove unknown HttpContext-based test as it fails unpredictably on different JVMs
Tests: Improve rest suite names and generated test names for docs tests
Add support for a RestClient base path
This commit modifies the call sites that allocate a parameterized
message to use a supplier so that allocations are avoided unless the log
level is fine enough to emit the corresponding log message.
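A minimal sketch of the supplier-based pattern using Log4j2's lazy message overloads (the class and message are illustrative):
```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.apache.logging.log4j.util.Supplier;

class RecoveryLogging {
    private static final Logger logger = LogManager.getLogger(RecoveryLogging.class);

    // Hypothetical call site: the lambda (and the ParameterizedMessage it
    // allocates) is only evaluated when the debug level is enabled.
    static void onRecoveryFailure(String index, Exception e) {
        logger.debug((Supplier<?>) () -> new ParameterizedMessage(
                "recovery failed for index [{}]", index), e);
    }
}
```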
This commit upgrades the Netty dependencies from version 4.1.4 to
version 4.1.5. This upgrade brings several bug fixes including the
removal of an obnoxious and scary-looking log message when unsafe is
explicitly disabled.
Relates #20222
Instead of get, set, and remove, we now do get, remove, and then set to avoid type conflicts in `IngestDocument`.
If the set still fails, we try to restore the original field in the ingest document.
Closes #19892
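A rough sketch of that ordering, assuming the `IngestDocument` accessors `getFieldValue`, `removeField`, and `setFieldValue` (the surrounding helper is hypothetical):
```java
import org.elasticsearch.ingest.IngestDocument;

final class MoveFieldHelper {
    // Hypothetical helper illustrating the get -> remove -> set ordering.
    static void moveField(IngestDocument document, String sourceField, String targetField) {
        Object value = document.getFieldValue(sourceField, Object.class);
        document.removeField(sourceField);
        try {
            document.setFieldValue(targetField, value);
        } catch (Exception e) {
            // If the set still fails, restore the original field so the
            // document is left as it was, then rethrow.
            document.setFieldValue(sourceField, value);
            throw e;
        }
    }
}
```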
The Netty3/4 TcpTransport implementations are creating thread factories with an "http_server" thread prefix, whereas the prefix should start with "transport_server", leaving the "http_server" prefix to the HttpServerTransport implementations.
The network types in use on a cluster can be useful information to have,
so this commit adds aggregate metrics for the network types in use in a
cluster to the cluster stats.
Relates #20144
The Netty 4 HTTP server pipeline tests contain two different test
cases. The general idea behind these tests is to submit some requests to
a Netty 4 HTTP server, one test with pipelining enabled and another test
with pipelining disabled. These requests are submitted to two endpoints,
one with a path like /{id} and another with a path like /slow with a
query string parameter sleep. This parameter tells the request handler
how long to sleep for before replying. The idea is that in the case of
the pipelining enabled tests, the requests should come back exactly in
the order submitted, even with some of the requests hitting the slow
endpoint with random sleep durations; this is the guarantee that
pipelining provides. And in the case of the pipelining disabled tests,
requests were randomly submitted to /{id} and /slow with sleep
parameters starting at 600ms and increasing by 100ms for each slow
request constructed. We would expect the requests to come back with all
the responses to the /{id} requests first because these requests
will execute instantaneously, and then the responses to the /slow
requests. Further, it was expected that the slow requests would come
back ordered by the length of the sleep, the thinking being that 100ms
should be enough of a difference between each request that we would
avoid any race conditions. Sadly, this is not the case: the threads do
sometimes hit race conditions.
This commit modifies the HTTP server pipelining tests to address this
race condition. The modification is that the query string parameter on
the /slow endpoint is removed in favor of just submitting requests to
the path /slow/{id}, where id is just used as a marker to distinguish each
request. The server chooses a random sleep of at least 500ms for each
request on the slow path. The assertion here then is that the /{id}
responses arrive first, then the /slow responses. We cannot make an
assertion on the order of the responses, but we can assert that we did
see every expected response.
Relates #19845
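Roughly, the assertion strategy looks like this (plain Java sketch; the helper and names are illustrative, the real test drives an actual Netty 4 pipeline):
```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class PipeliningAssertions {
    // responses holds response paths in arrival order, e.g. "/7" or "/slow/3".
    static void assertResponses(List<String> responses, int fastCount, int slowCount) {
        // Every /{id} response must arrive before any /slow/{id} response ...
        for (int i = 0; i < responses.size(); i++) {
            boolean slow = responses.get(i).startsWith("/slow/");
            assert slow == (i >= fastCount) : "slow response arrived before a fast one";
        }
        // ... but there is no ordering guarantee among the slow responses
        // themselves, so only check that every expected response was seen.
        Set<String> expected = new HashSet<>();
        for (int id = 0; id < fastCount; id++) {
            expected.add("/" + id);
        }
        for (int id = 0; id < slowCount; id++) {
            expected.add("/slow/" + id);
        }
        assert expected.equals(new HashSet<>(responses)) : "missing or unexpected responses";
    }
}
```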
This change adds a special field named `_none_` that makes it possible to disable retrieval of stored fields in a search request or in a `TopHitsAggregation`.
To completely disable stored fields retrieval (including retrieval of metadata fields such as `_id` or `_type`), use `_none_` like this:
```
POST _search
{
  "stored_fields": "_none_"
}
```
Reindex intentionally tries to fail the search operation to make sure
that the exception flows back. The exception message changed so we
should catch the appropriate exception.
This test was failing in the presence of transport clients. This turns
off transport clients while I fix the test so it doesn't fail for
everyone in the meantime.
The big change here is cleaning up the `TaskListResponse` so it doesn't
have a breaky `toString` implementation. That was causing the reindex
tests to break.
Also removed `NetworkModule#registerTaskStatus` which is part of the
Plugin API. Use `Plugin#getNamedWriteables` instead.
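For example, a plugin that previously registered a task status through `NetworkModule#registerTaskStatus` would now do something like this (the `MyTaskStatus` class is hypothetical):
```java
import java.util.Collections;
import java.util.List;

import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.tasks.Task;

public class MyPlugin extends Plugin {
    @Override
    public List<NamedWriteableRegistry.Entry> getNamedWriteables() {
        // Register the custom Task.Status so it can be serialized as part of
        // task list responses. MyTaskStatus is a hypothetical Task.Status
        // implementation with a StreamInput constructor.
        return Collections.singletonList(
                new NamedWriteableRegistry.Entry(Task.Status.class, "my_task_status", MyTaskStatus::new));
    }
}
```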
Parsing a search request is currently split up among a number of
classes, using multiple public static methods, which take multiple
registries of elements that may appear in the search request like query
parsers and aggregations. This change begins consolidating all this code
by collapsing the registries normally used for parsing search requests
into a single SearchRequestParsers class. It is also made available to
plugin services to enable templating of search requests. Eventually all
of the actual parsing logic should move to the class, and the registries
should be hidden, but for now they are at least co-located to reduce the
number of objects that must be passed around.
Squashes all the subpackages of `org.elasticsearch.rest.action` down to
the following:
* `o.e.rest.action.admin` - Administrative actions
* `o.e.rest.action.cat` - Actions that make tables for `grep`ing
* `o.e.rest.action.document` - Actions that act on documents
* `o.e.rest.action.ingest` - Actions that act on ingest pipelines
* `o.e.rest.action.search` - Actions that search
I'm tempted to merge `search` into `document` but the `document`
package feels fairly complete as is and `Suggest` isn't actually always
about documents either....
I'm also tempted to merge `ingest` into `admin.cluster` because the
latter contains the actions for dealing with stored scripts.
I've moved the `o.e.rest.action.support` into `o.e.rest.action`.
I've also added `package-info.java`s to all packages in `o.e.rest`. I
figure if the package is too small to deserve a `package-info.java` file
then it is too small to deserve to be a package....
Also fixes checkstyle in all moved classes.
The term persisted task was used to indicate that a task should store its results upon its completion. We would like to use this term to indicate that a task can survive restart of nodes instead. This commit removes usages of the term "persist" when it means store results.
As the most complicated `FetchSubPhase`, highlighting gets its own package
(`o.e.search.fetch.subphase.highlight`). No other `FetchSubPhase`s get their
own package; instead they all reside together in `o.e.search.fetch.subphase`.
Add package descriptions to `o.e.search.fetch` and subpackages.
Previously, we only caught subclasses of Exception; however, there are
some cases when Errors are thrown instead of Exceptions. These two cases
are `assert` and when a class cannot be found.
Without this change, the exception would bubble up to the
`uncaughtExceptionHandler`, which would, in turn, exit the JVM
(related: #19923).
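A minimal sketch of the widened catch, covering the two Error cases mentioned above (the wrapping exception and helper are illustrative, not the actual GroovyScriptEngineService code):
```java
import java.util.concurrent.Callable;

final class GroovyScriptRunner {
    // Illustrative only -- not the actual GroovyScriptEngineService code.
    static Object run(Callable<Object> compiledScript) {
        try {
            return compiledScript.call();
        } catch (AssertionError | NoClassDefFoundError e) {
            // Groovy asserts and missing classes surface as Errors, not
            // Exceptions; wrap them so they do not bubble up to the
            // uncaughtExceptionHandler and exit the JVM.
            throw new RuntimeException("failed to run groovy script", e);
        } catch (Exception e) {
            throw new RuntimeException("failed to run groovy script", e);
        }
    }
}
```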
A note on the difference between regular Java asserts and Groovy asserts,
from http://docs.groovy-lang.org/docs/latest/html/documentation/core-testing-guide.html
"Another important difference from Java is that in Groovy assertions are
enabled by default. It has been a language design decision to remove the
possibility to deactivate assertions."
In the event that a user uses an assert such as:
```groovy
def bar=false; assert bar, "message";
```
The GroovyScriptEngineService throws a NoClassDefFoundError because it is
unable to find the `java.lang.StringBuffer` class. It is *highly* recommended
that any Groovy scripting user switch to using regular exceptions rather
than unconfigurable Groovy assertions.
Resolves #19806