Currently AbstractHttpServerTransport is in the netty4 module. This is the
incorrect location. This commit moves it out of the netty4 module.
Additionally, it moves the unit tests that test AbstractHttpServerTransport
logic to the server module.
The stored scripts API today accepts malformed requests instead of throwing an exception.
This PR deprecates accepting malformed put stored script requests (requests not using the official script format).
Relates to #27612
Currently the nio and netty modules use the CompletableFuture class for
managing listeners. This is unfortunate, as that class accepts
Throwable. This commit adds a CompletableContext class that wraps
the CompletableFuture but does not accept Throwable. This allows the
netty and nio logic to be modified to no longer handle Throwable.
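A minimal sketch of the idea; the class name comes from the description above, but the method signatures are assumptions, not the actual commit:

```
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

// Sketch of a wrapper that narrows the CompletableFuture API to Exception.
public class CompletableContext<T> {

    private final CompletableFuture<T> delegate = new CompletableFuture<>();

    public void addListener(BiConsumer<T, ? super Exception> listener) {
        delegate.whenComplete((value, throwable) -> {
            if (throwable == null || throwable instanceof Exception) {
                listener.accept(value, (Exception) throwable);
            } else {
                // Errors are not handled here; let them propagate.
                throw (Error) throwable;
            }
        });
    }

    public boolean complete(T value) {
        return delegate.complete(value);
    }

    public boolean completeExceptionally(Exception ex) {
        return delegate.completeExceptionally(ex);
    }
}
```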
This commit reintroduces 31251c9 and 63a5799. These commits introduced a
memory leak and were reverted. This commit brings those commits back
and fixes the memory leak by removing unnecessary retain method calls.
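For background, Netty's buffers are reference counted, so a retain() that is never matched by a release() keeps the buffer alive indefinitely. A small illustration of the failure mode, not the code from the reverted commits:

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class RetainLeakExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);   // refCnt == 1
        buf.retain();                        // refCnt == 2, only correct if a second owner exists
        buf.release();                       // refCnt == 1, the buffer is still alive
        System.out.println(buf.refCnt());    // prints 1: without a matching release() this leaks
    }
}
```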
This reverts commit 31251c9 introduced in #30695.
We suspect this commit is causing the OOMEs reported in #30811 and we will use this PR to test this assertion.
Lucene has a new `FeatureField` which gives the ability to record numeric
features as term frequencies. Its main benefit is that it allows boosting
queries with the values of these features and efficiently skipping non-competitive
documents at the same time using block-max WAND and indexed impacts.
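As a brief example of the Lucene API (the field and feature names below are made up for illustration):

```
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FeatureField;
import org.apache.lucene.search.Query;

public class FeatureFieldExample {

    public static Document docWithPagerank() {
        Document doc = new Document();
        // The feature value 3.5 is encoded into the term frequency of the
        // term "pagerank" in the "features" field.
        doc.add(new FeatureField("features", "pagerank", 3.5f));
        return doc;
    }

    public static Query boostByPagerank() {
        // Scores grow with the feature value but saturate around the pivot;
        // block-max WAND can skip non-competitive documents.
        return FeatureField.newSaturationQuery("features", "pagerank", 1f, 0.5f);
    }
}
```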
The new snapshot includes LUCENE-8324, which fixes a missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve the failing tests in the CorruptedFileIT suite.
Closes #30741
Closes #30577
This is related to #29500 and #28898. This commit removes the ability
to disable http pipelining. After this commit, any Elasticsearch node
will support pipelined requests from a client. Additionally, it extracts
some of the http pipelining work to the server module. This extracted
work is used to implement pipelining for the nio plugin.
=== Char Group Tokenizer

The `char_group` tokenizer breaks text into terms whenever it encounters a
character which is in a defined set. It is mostly useful for cases where a
simple custom tokenization is desired, and the overhead of using the
<<analysis-pattern-tokenizer, `pattern` tokenizer>> is not acceptable.

=== Configuration

The `char_group` tokenizer accepts one parameter:

`tokenize_on_chars`::
A string containing a list of characters to tokenize the string on. Whenever
a character from this list is encountered, a new token is started. Also
supports escaped values like `\\n` and `\\f`, and in addition `\\s` to
represent whitespace, `\\d` to represent digits and `\\w` to represent
letters. Defaults to an empty list.
=== Example output

```
The 2 QUICK Brown-Foxes jumped over the lazy dog's bone for $2
```

When the configuration `\\s-:<>` is used for `tokenize_on_chars`, the
above sentence would produce the following terms:

```
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone, for, $2 ]
```
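Conceptually, the tokenizer behaves like a Lucene `CharTokenizer` whose token characters are everything outside the configured set; a rough sketch, not the actual factory code:

```
import java.util.Set;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.CharTokenizer;

public class CharGroupSketch {

    // Hypothetical helper: start a new token whenever one of the given
    // characters is encountered.
    public static Tokenizer tokenizeOnChars(Set<Integer> breakChars) {
        return new CharTokenizer() {
            @Override
            protected boolean isTokenChar(int c) {
                // Characters outside the configured set belong to the current
                // token; a character inside the set ends it.
                return breakChars.contains(c) == false;
            }
        };
    }
}
```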
The getDate() and getDates() methods existed prior to 5.x on long fields in
scripting. In 5.x, a new Date type for ScriptDocValues was added. The
getDate() and getDates() methods were left on long fields and added to date
fields to ease the transition. This commit removes those methods for
7.0.
The camel case name `nGram` should be removed in favour of `ngram`, and
similarly `edgeNGram` in favour of `edge_ngram`. Before removal, we need to
deprecate the camel case names first. This change logs deprecation
warnings when the camel case names are used on indices with versions 6.4.0
and higher.
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of single data points. It is the scripted version of the MovingAvg pipeline aggregation.
Through custom script contexts, we expose a number of convenience methods:
- MovingFunctions.max()
- MovingFunctions.min()
- MovingFunctions.sum()
- MovingFunctions.unweightedAvg()
- MovingFunctions.linearWeightedAvg()
- MovingFunctions.ewma()
- MovingFunctions.holt()
- MovingFunctions.holtWinters()
- MovingFunctions.stdDev()
The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
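To illustrate the semantics (this is a sketch of the math, not the actual MovingFunctions implementation), the simpler functions reduce the current window of values to a single double:

```
public class MovingWindowExample {

    // Average of the values in the current window; assumes a non-empty window.
    static double unweightedAvg(double[] window) {
        double sum = 0;
        for (double v : window) {
            sum += v;
        }
        return sum / window.length;
    }

    // Exponentially weighted moving average: newer values count more, with
    // the smoothing parameter alpha in [0, 1]; assumes a non-empty window.
    static double ewma(double[] window, double alpha) {
        double avg = window[0];
        for (int i = 1; i < window.length; i++) {
            avg = alpha * window[i] + (1 - alpha) * avg;
        }
        return avg;
    }
}
```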
This commit is related to #28898. It adds an nio driven http server
transport. Currently it only supports basic http features. CORS,
pipelining, and read timeouts will need to be added in future PRs.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard, with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
Currently the ranking evaluation API accepts the full query syntax for
the queries specified in the evaluation set and executes them via multi
search. This potentially runs costly aggregations and suggestions too.
This change adds checks that forbid using aggregations, suggesters,
highlighters, and the explain and profile options in the queries that are
run as part of the ranking evaluation, since they are irrelevant in the
context of this API.
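A sketch of what such checks could look like against a `SearchSourceBuilder`; the method and messages here are assumptions, not the actual validation code:

```
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class RankEvalValidation {

    // Hypothetical sketch of the forbidden-feature checks.
    static void validateEvaluatedQuery(SearchSourceBuilder source) {
        if (source.aggregations() != null) {
            throw new IllegalArgumentException("ranking evaluation does not support aggregations");
        }
        if (source.suggest() != null) {
            throw new IllegalArgumentException("ranking evaluation does not support suggesters");
        }
        if (source.highlighter() != null) {
            throw new IllegalArgumentException("ranking evaluation does not support highlighters");
        }
        if (source.explain() != null && source.explain()) {
            throw new IllegalArgumentException("ranking evaluation does not support explain");
        }
        if (source.profile()) {
            throw new IllegalArgumentException("ranking evaluation does not support profiling");
        }
    }
}
```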
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.
The keyword tokenizer factory was left in the server module because
normalizers directly depend on it. This should be addressed in a
follow-up change.
Relates to #23658
With this commit we determine the maximum number of buffers that Netty keeps
while accumulating one HTTP request based on the maximum content length and the
network MTU (default 1500 bytes, overridable with the system property
`es.net.mtu`). Previously, we kept the default value of 1024, which is too
small for bulk requests and leads to unnecessary copies of byte buffers
internally.
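A sketch of the computation this implies, with illustrative names (roughly one composite-buffer component per network packet of the largest allowed request body):

```
public class MaxCompositeBufferComponents {

    // Illustrative sketch: allow about one buffer component per MTU-sized
    // packet of the maximum content length, never fewer than 2.
    static int maxComponents(long maxContentLengthBytes) {
        long mtu = Long.parseLong(System.getProperty("es.net.mtu", "1500"));
        return (int) Math.max(2, Math.ceil(maxContentLengthBytes / (double) mtu));
    }
}
```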
Relates #29448
This folds the `:qa:smoke-test-reindex-with-all-modules` project into
`:modules:reindex` by declaring that reindex's integration testing
cluster requires the `parent-join` and `lang-painless` plugins and then
moving all of the integration tests that depended on parent-join and
painless into reindex.
It saves us one cluster start up during the build at the cost of a
little of the reindex module's "purity". Since the reindex module *does*
have unit tests that test scripting without painless I'm fairly ok with
that.
Previously `BulkProcessor` retry logic was based on the exception type of the failed response (`EsRejectedExecutionException`). This commit changes it to be based on the returned status code. This allows us to reproduce the same retry behaviour when the `BulkProcessor` is used from the high-level REST client, which was previously not the case as we cannot rebuild the same exception type when parsing back the response. This change has no effect on the transport client.
Closes #28885
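A hedged sketch of a status-based retry decision (rejected executions surface as HTTP 429, `TOO_MANY_REQUESTS`; the helper name is made up):

```
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.rest.RestStatus;

public class RetryDecision {

    // Hypothetical sketch: decide on the status code, which survives the
    // round trip through the REST layer, unlike the concrete exception type.
    static boolean shouldRetry(BulkItemResponse item) {
        return item.isFailed() && item.status() == RestStatus.TOO_MANY_REQUESTS;
    }
}
```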
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequences in ICUTokenizer (with an upgrade to ICU 61.1)
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
Closes #12792
Many tests are added with a version check so that they do not run against a
version that doesn't have the feature yet. Master is 7.0, so all tests that
do not run against 6.0+ can be removed and the version check can be removed
on all tests that always run on 6.0+.
Adds two new methods to `RestClient` that take a `Request` object. These
methods will allow us to add more per-request customizable options
without creating more and more and more overloads of the `performRequest`
and `performRequestAsync` methods. These new methods look like:
```
Response performRequest(Request request)
```
and
```
void performRequestAsync(Request request, ResponseListener responseListener)
```
This change doesn't add any actual features but enables adding things like
per request timeouts and per request node selectors. This change *does*
rework the `HighLevelRestClient` and its tests to use these new `Request`
objects and it does update the docs.
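For example, a caller can build and customize a request like this (the endpoint and parameter are illustrative):

```
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RequestExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/_cluster/health");
            // Per-request customizations hang off the Request object instead
            // of requiring yet another performRequest overload.
            request.addParameter("pretty", "true");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```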
We disable the reindex-from-old tests if we're running on windows or in
a directory that contains a space. This adds a warning to the logs when
we do that so that you can tell that it happened. This will be nice to
have when looking at CI and will be a hint to anyone developing locally.
This *mostly* silences `javadoc`'s warning about defaulting to
generating html4 files by enabling html5 file generation for the
projects for which that works. It didn't work in a half dozen projects,
about half of which I've fixed in this PR, entirely by replacing
`<tt>thing</tt>` with `{@code thing}`.
There are a few remaining projects that contain javadoc with invalid
html5. I'll fix those projects in a followup.
We *think* that #28600 is caused by warnings not being collected during
one of the fan out phases of search but we're not 100% sure how this is
happening. This commit drops the number of shards used for the test to 1
so there *isn't* a fan out phase. If this makes the issue go away we'll
have more information.
This folds the `:qa:reindex-from-old` project into the `:modules:reindex`
project. This should speed up the build marginally by removing a single
cluster start up at the cost of having to wait for old versions of
Elasticsearch to start up when checking reindex's integration tests.
Those don't take that long so this feels worth it.
The ranking evaluation requests so far were not tested against aliases,
but they should run regardless of whether the targeted index is a real
index or an alias. This change adds cases for this to the integration and
REST tests.
The camel case name `htmlStrip` should be removed in favour of `html_strip`, but
we need to deprecate it first. This change adds deprecation warnings for indices
with versions starting with 6.3.0 and logs deprecation warnings in these cases.
Allow the high-level Java REST client to access details of the metric
calculation by making them accessible across packages. Also rename the
inner `Breakdown` classes of the evaluation metrics to `Detail` to
better communicate their use.
This commit renames the bulk thread pool to the write thread pool. This
is to better reflect the fact that the underlying thread pool is used to
execute any document write request (single-document index/delete/update
requests, and bulk requests).
With this change, we add support for fallback settings
thread_pool.bulk.* which will be supported until 7.0.0.
We also add a system property so that the display name of the thread
pool remains as "bulk" if needed to avoid breaking users.
Added an API that allows executing an arbitrary script and returning a result.
```
POST /_scripts/painless/_execute
{
  "script": {
    "source": "params.var1 / params.var2",
    "params": {
      "var1": 1,
      "var2": 1
    }
  }
}
```
Relates to #27875
This allows the grammar to determine when and which delimiters statements will use by
splitting the statements into regular statements and delimited statements: those that do
not require a delimiter versus those that do. This allows consumers of the statements to
determine which delimiters the statements will use, so that in certain cases semicolons
are not necessary, such as when there's a closing right bracket.
This change removes the need for semicolon insertion in the lexer, simplifying the existing
lexer quite a bit. It also ensures that there isn't a need to track semicolons being inserted
into places that aren't necessary such as array initializers.