This PR adds timeouts to the named pipe connections of the
autodetect, normalize and data_frame_analyzer processes.
The new timeout argument requires the changes in elastic/ml-cpp#1514 in
order to work, so that PR will be merged before this one.
(The controller process already had a different mechanism,
tied to the ES JVM lifetime.)
Backport of #62993
Fix query creation inside sequences with `any` queries, where the lack of a
clause to combine led to an invalid request being created.
Fix #62967
(cherry picked from commit ff59d8823919a6e70928816e5c3687308ebde33f)
Classification feature importance supports various types in the class name:
- string
- boolean
- numerical
The xcontent parsing on the server side and the HLRC side should support and test these types.
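A minimal sketch of the type handling involved, in plain Java (the real change lives in the xcontent parsers; this helper is illustrative, not the actual API):

```java
// Illustrative only: a classification class name may arrive as a string,
// boolean, or number, and all three should be accepted.
static String classNameAsString(Object className) {
    if (className instanceof String || className instanceof Boolean || className instanceof Number) {
        return String.valueOf(className);
    }
    throw new IllegalArgumentException(
        "class name must be a string, boolean, or number, got [" + className.getClass().getName() + "]");
}
```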
* Use compilation as validation for painless role template (#62845)
Role template validation now performs only compilation if the script is painless.
It no longer attempts to execute the script with empty input, which is problematic.
The compilation process will catch things like invalid syntax and undefined variables,
which still provides a certain level of protection against ill-defined role templates.
Behaviour for Mustache script is unchanged.
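A hedged sketch of the flow, assuming a `ScriptService`-style interface; the names mirror Elasticsearch's script API but are not guaranteed to match the actual change:

```java
// Sketch: painless templates are validated by compilation only; mustache
// templates keep the old behaviour of rendering with empty input.
void validateRoleTemplate(Script script, ScriptService scriptService) {
    TemplateScript.Factory factory = scriptService.compile(script, TemplateScript.CONTEXT);
    if ("painless".equals(script.getLang()) == false) {
        // Mustache only: execute with empty input as before.
        factory.newInstance(Map.of()).execute();
    }
    // For painless, compilation alone has already caught invalid syntax,
    // undefined variables, and similar problems.
}
```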
* Checkstyle
* [ML] fixing testTwoJobsWithSameRandomizeSeedUseSameTrainingSet tests (#62976)
This fixes the two test failures.
The shard failure seems to be due to the .ml-stats index being in the middle of being created.
The `50_script_values/Script query` test fails sometimes
because the resulting hits can be ordered differently than expected.
This patch ensures consistent ordering of hits.
Closes #62975
This refactors the loading of monitoring templates slightly so that they aren't loaded over and
over again (from disk) on cluster state updates. This isn't an important optimization in production
since it only affects the install stage, but it turned out to cause some slow cluster state applies
in tests.
Relates #62853
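A self-contained sketch of the shape of this optimization, with illustrative names (the real change is in the monitoring template loading code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Load each template from disk at most once; repeated cluster state
// updates then reuse the memoized copy.
class TemplateCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loadFromDisk;

    TemplateCache(Function<String, String> loadFromDisk) {
        this.loadFromDisk = loadFromDisk;
    }

    String template(String name) {
        return cache.computeIfAbsent(name, loadFromDisk);
    }
}
```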
If the transform grouping is a script, exclude the field from the source index
mappings field caps request. A null object caused an NPE in the serialisation of
FieldCapabilitiesIndexRequest.
As we have decided that top level importance for classification is not useful,
it has been removed from the results of the training job. This commit
also removes it from inference.
Backport of #62486
Currently, `finishHim` can either execute on the specified executor
(in the less likely case that the local node request is the last to arrive)
or on a transport thread.
In the case of e.g. `org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction`
this leads to an expensive execution that deserializes all mapping metadata in the cluster
while running on the transport thread, destabilizing the cluster. This transport
action was specifically moved to the `MANAGEMENT` thread to avoid the high cost of processing
the stats requests on the nodes during fan-out, but that did not cover the final execution
on the node that received the initial request. This PR adds the ability to optionally specify the
executor for the final step of the nodes request execution and uses it to work around the issue
for the slow `TransportClusterStatsAction`.
Note: the specific problem that motivated this PR is essentially the same as https://github.com/elastic/elasticsearch/pull/57937 where we moved the execution off the transport and on the management thread as a fix as well.
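A self-contained sketch of the idea, with illustrative names (the actual change threads an executor through the nodes-request infrastructure):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hand the expensive final step to a dedicated executor instead of
// completing it inline on whichever thread delivered the last response.
class NodesFanOut<R> {
    // Stands in for the MANAGEMENT thread pool.
    private final ExecutorService finalExecutor = Executors.newSingleThreadExecutor();

    void onLastNodeResponse(List<R> responses, Consumer<List<R>> finishHim) {
        // Merging per-node results (e.g. deserializing mapping metadata for
        // cluster stats) can be costly, so keep it off the transport thread.
        finalExecutor.execute(() -> finishHim.accept(responses));
    }
}
```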
We support `"""` in `console` snippets to emulate Kibana's CONSOLE.
CONSOLE also emits `"""` when a JSON field contains a newline or a
double quote. This adds support for those sorts of responses to the
handling of `console-response` snippets.
Revises the current 'How to avoid oversharding' docs to incorporate
information from our [shard sizing blog post][0].
Changes:
* Streamlines introduction
* Adds "Things to remember" section to describe how shards work
* Adds "Guidelines" section based on blog tips
* Creates a "Fix an oversharded cluster" section
[0]: https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
Passing FieldMappers to point parsing functions makes trying to build source-only
fields from MappedFieldTypes more complicated. This small refactoring changes
things so that the relevant parsing and factory functions from
AbstractGeometryFieldMapper are instead passed as lambdas to the PointParser
constructor.
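A self-contained sketch of the resulting shape, with illustrative names (the real constructor takes the geometry parsing and factory functions from AbstractGeometryFieldMapper):

```java
import java.util.function.Function;

// The parser no longer needs the FieldMapper itself: the relevant parsing
// and factory functions are supplied as lambdas.
class PointParser<T> {
    private final Function<String, T> fromText;   // e.g. "41.12,-71.34" -> point
    private final Function<T, T> normalize;       // validation/normalization hook

    PointParser(Function<String, T> fromText, Function<T, T> normalize) {
        this.fromText = fromText;
        this.normalize = normalize;
    }

    T parse(String text) {
        return normalize.apply(fromText.apply(text));
    }
}
```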
Make node resolution more robust by ignoring null values. A null value indicates a bug in
the usage of this class, but we don't want NPEs in production. The root cause
might be a corner case. Because silencing the root cause is bad, an assert
fails if assertions are enabled.
relates #62847
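A minimal sketch of the defensive pattern (illustrative names, not the actual class):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

class NodeResolver {
    // Skip null entries in production, but fail loudly when assertions are
    // enabled so the buggy caller is still caught in tests.
    static List<String> resolveNodes(Collection<String> nodeIds) {
        List<String> resolved = new ArrayList<>();
        for (String id : nodeIds) {
            assert id != null : "node id must not be null";
            if (id != null) {
                resolved.add(id);
            }
        }
        return resolved;
    }
}
```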
We have to make sure the applier state versions, and not the accepted state versions, align here.
Otherwise we can get into the situation where the data node is so slow to process
one version that the next one arrives and gets rejected, the request returns with
ack `false`, and we fail the assertion because the put mapping request didn't complete.
Closes #62446
JAVA_HOME is set as necessary in packaging tests, depending on whether
it is needed for no-jdk distributions or testing override behavior. We
currently rely on gradle finding java through PATH. However, JAVA_HOME
can sometimes be set by the system itself, which then leaks through to
the packaging test. This commit reworks our handling of JAVA_HOME to
pass it through for gradle, and then explicitly clear it whenever
running shell commands in packaging tests.
This test was disabled with an awaits fix, but the underlying issue has
been worked around, so the test can be re-enabled.
relates #46050
relates #58628
Currently Netty will batch compress an entire HTTP response
regardless of its content size. It allocates a byte array at least as
large as the uncompressed content. This causes issues with our
attempts to remove humongous G1GC allocations. This commit resolves the
issue by splitting responses into 128KB chunks.
This has the side effect that large outbound HTTP responses that
are compressed are sent with chunked transfer-encoding.
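A self-contained sketch of the chunking (illustrative; the real change lives in the Netty HTTP pipeline):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ChunkedCompression {
    static final int CHUNK_SIZE = 128 * 1024;

    // Slice the body into 128KB chunks so no single compression buffer has
    // to match the full uncompressed content size; each chunk is then
    // compressed and sent with chunked transfer-encoding.
    static List<byte[]> toChunks(byte[] body) {
        List<byte[]> chunks = new ArrayList<>();
        for (int from = 0; from < body.length; from += CHUNK_SIZE) {
            chunks.add(Arrays.copyOfRange(body, from, Math.min(from + CHUNK_SIZE, body.length)));
        }
        return chunks;
    }
}
```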
Currently we duplicate our specialized CORS logic in all transport
plugins. This is unnecessary as it could be implemented in a single
place. This commit moves the logic to server. Additionally it fixes a
bug where we were incorrectly closing HTTP channels on early CORS
responses.
Introduce 64-bit unsigned long field type
This field type supports:
- indexing of integer values in the range [0, 18446744073709551615]
- precise queries (term, range)
- precise sort and terms aggregations
- other aggregations, which are based on converting long values
to double and can be imprecise for large values (see the sketch below)
Backport for #60050
Closes #32434
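For illustration, the JDK unsigned-long helpers this kind of mapping can build on, in a self-contained sketch (not the field mapper itself):

```java
public class UnsignedLongDemo {
    public static void main(String[] args) {
        // The full unsigned range fits in a signed long's 64 bits:
        long max = Long.parseUnsignedLong("18446744073709551615"); // bit pattern of -1L
        System.out.println(Long.toUnsignedString(max));            // 18446744073709551615

        // Precise unsigned ordering, as needed for term/range queries and sort:
        System.out.println(Long.compareUnsigned(max, 1L) > 0);     // true

        // Conversion to double loses precision above 2^53, which is why
        // double-based aggregations can be imprecise for large values:
        long big = (1L << 60) + 1;
        System.out.println((double) big == (double) (big - 1));    // true
    }
}
```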
This commit adds a mechanism to MapperTestCase that allows implementing
test classes to check that their parameters can be updated, or throw conflict
errors as advertised. Child classes override the registerParameters method
and tell the passed-in UpdateChecker class about their parameters. Simple
conflicts can be checked, using the existing minimal mappings as a base to
compare against, or alternatively a particular initial mapping can be provided
to check edge cases (eg, norms can be updated from true to false, but not
vice versa). Updates are registered with a predicate that checks that the update
has in fact been applied to the resulting FieldMapper.
Fixes #61631
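A toy stand-in for the registration pattern (the real UpdateChecker lives in MapperTestCase and builds and merges actual mappers; names follow the commit message):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Tests declare legal parameter updates, each with a predicate verifying
// the update took effect on the merged mapper, plus expected conflicts.
class UpdateChecker<M> {
    record UpdateCheck<T>(String updatedMapping, Predicate<T> applied) {}

    final List<UpdateCheck<M>> updateChecks = new ArrayList<>();
    final List<String> conflictingParams = new ArrayList<>();

    // e.g. registerUpdateCheck("{\"norms\": false}", m -> hasNorms(m) == false)
    void registerUpdateCheck(String updatedMapping, Predicate<M> applied) {
        updateChecks.add(new UpdateCheck<>(updatedMapping, applied));
    }

    // e.g. registerConflictCheck("norms") for the false -> true direction
    void registerConflictCheck(String param) {
        conflictingParams.add(param);
    }
}
```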
Same as in the normal Netty tests, we have to disable the runtime proc
setting in the normal tests task, just like we do for the internal cluster tests.
Closes #61919
Closes #62298
This commit allows the coordinating node to account for the memory used to perform partial and final reduces of
aggregations in the request circuit breaker. The search coordinator adds the memory that it uses to save
and reduce the results of shard aggregations to the request circuit breaker. Before any partial or final
reduce, the memory needed to reduce the aggregations is estimated and a CircuitBreakingException is thrown
if it exceeds the maximum memory allowed by this breaker.
This size is estimated as roughly 1.5 times the size of the serialized aggregations that need to be reduced.
This estimation can be completely off for some aggregations but it is corrected with the real size after
the reduce completes.
If the reduce is successful, we update the circuit breaker to remove the size of the source aggregations
and replace the estimation with the serialized size of the newly reduced result.
As a follow-up we could trigger partial reduces based on the memory accounted in the circuit breaker instead
of relying on a static number of shard responses. A simpler follow-up that could be done in the meantime is
to [reduce the default batch reduce size](https://github.com/elastic/elasticsearch/issues/51857) of blocking
search requests to a more sane number.
Closes #37182
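A self-contained toy of the accounting scheme, with illustrative names (the real code uses Elasticsearch's request circuit breaker):

```java
import java.util.concurrent.atomic.AtomicLong;

class ReduceAccounting {
    static final long LIMIT = 512L * 1024 * 1024; // assumed breaker limit
    final AtomicLong used = new AtomicLong();

    void addEstimateAndMaybeBreak(long bytes) {
        if (used.addAndGet(bytes) > LIMIT) {
            used.addAndGet(-bytes);
            throw new IllegalStateException("circuit breaking: request memory exceeded");
        }
    }

    // Reserve ~1.5x the serialized size before reducing, then replace the
    // estimate with the real serialized size of the reduced result.
    byte[] accountedReduce(byte[] serializedShardAggs) {
        long estimate = (long) (serializedShardAggs.length * 1.5);
        addEstimateAndMaybeBreak(estimate);
        try {
            byte[] reduced = reduce(serializedShardAggs);
            used.addAndGet(reduced.length - estimate); // correct the estimate
            return reduced;
        } catch (RuntimeException e) {
            used.addAndGet(-estimate);
            throw e;
        }
    }

    byte[] reduce(byte[] aggs) {
        return aggs; // placeholder for the actual aggregation reduce
    }
}
```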
This adds the ability to fetch Java primitives like `long` and `float`
from grok matches rather than their boxed versions. It also allows
customizing which fields are extracted and how they are extracted.
By default we continue to fetch a `Map<String, Object>`, but runtime
fields will be able to fetch *just* the fields they are interested
in, and the values will be primitives.
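A self-contained illustration of typed extraction using plain java.util.regex (grok sits a layer above this; the API names in the change may differ):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class TypedCaptures {
    static final Pattern LINE = Pattern.compile("(?<status>\\d{3}) (?<took>\\d+\\.\\d+)");

    public static void main(String[] args) {
        Matcher m = LINE.matcher("200 1.25");
        if (m.matches()) {
            // Fetch just the captures we care about, as primitives, instead
            // of boxing everything into a Map<String, Object>.
            long status = Long.parseLong(m.group("status")); // primitive long
            float took = Float.parseFloat(m.group("took"));  // primitive float
            System.out.println(status + " " + took);
        }
    }
}
```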
For get trained models, `include_model_definition` is now deprecated.
This commit writes a deprecation warning if that parameter is used and suggests that the caller use the replacement.
Backport #62825 to 7.x branch.
Today if a data stream is auto created, but an index with the same name as the
first backing index already exists, then internally that error is ignored,
with the result that later in the execution of a bulk request, the
bulk item fails because the data stream hasn't been auto created.
This situation can only occur if an index with the same name as a data
stream's first backing index is created prior to the creation
of the data stream.
Co-authored-by: Dan Hermann <danhermann@users.noreply.github.com>