Rewrite config on update if either version is outdated, credentials change,
the update changes the config, or deprecated settings are found. Deprecated
settings get migrated to the new format. The upgrade can easily be extended
to do any necessary rewrites.
Fixes #56499
Backport of #57648
For a rolling/mixed cluster upgrade (add new-version nodes to the existing
cluster, then shut down the old instances), the watches that ship by default
with monitoring may not get properly updated to the new version.
Monitoring watches can only get published if the internal state is
marked as dirty. On a node that is not master, the internal state also
gets marked as clean (i.e. not dirty).
In a mixed cluster upgrade, a new node can be added without being master,
so its internal state gets marked as clean and no more attempts are made
to publish the watches. This happens on all new nodes. Once the old nodes
are decommissioned, one of the new-version nodes in the cluster gets
promoted to master. However, that new master node (without intervention
such as restarting the node or removing/adding exporters) will never
attempt to re-publish, since its internal state was already marked as clean.
This commit adds a cluster state listener to mark the resource dirty
when a node is promoted to master. This will allow the new resource
to be published without any intervention.
Prior to this commit, `cluster.max_shards_per_node` was not correctly handled
when set via the YAML config file, only when set via the Cluster
Settings API.
This commit refactors how the limit is implemented, both to enable correctly
handling the setting in the YAML and to more effectively centralize the logic
used to enforce the limit. The logic used to apply the limit, as well as the
setting value, has been moved to the new `ShardLimitValidator`.
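A rough sketch of the new shape (simplified; the real class carries the full validation logic):

```java
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;

// Sketch: the setting lives with the validator, is seeded from the node
// (YAML) settings, and then tracks dynamic updates from the Cluster Settings API.
public class ShardLimitValidator {
    public static final Setting<Integer> SETTING_CLUSTER_MAX_SHARDS_PER_NODE =
        Setting.intSetting("cluster.max_shards_per_node", 1000, 1,
            Setting.Property.Dynamic, Setting.Property.NodeScope);

    private volatile int shardLimitPerNode;

    public ShardLimitValidator(Settings settings, ClusterSettings clusterSettings) {
        // Honor a value set in elasticsearch.yml ...
        this.shardLimitPerNode = SETTING_CLUSTER_MAX_SHARDS_PER_NODE.get(settings);
        // ... and keep it in sync with updates made through the API.
        clusterSettings.addSettingsUpdateConsumer(
            SETTING_CLUSTER_MAX_SHARDS_PER_NODE, v -> this.shardLimitPerNode = v);
    }
}
```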
Moves the source filtering example snippets from the "Request body
search" API docs page to the "Return fields in a search" section of the
"Run a search" page.
Merges the remaining implementation of `significant_terms` into `terms`
so that we can more easily make them work properly without
`asMultiBucketAggregator` which *should* save memory and speed them up.
Relates #56487
In #55592 and #55416, we deprecated the settings for enabling and disabling
basic license features and turned those settings into no-ops. Since doing so,
we've had feedback that this change may not give users enough time to cleanly
switch from non-ILM index management tools to ILM. If two index managers
operate simultaneously, results could be strange and difficult to
reconstruct. We don't know of any cases where SLM will cause a problem, but we
are restoring that setting as well, to be on the safe side.
This PR is not a strict commit reversion. First, we are keeping the new
`xpack.watcher.use_ilm_index_management` setting, introduced when
`xpack.ilm.enabled` was made a no-op, so that users can begin migrating to using
it. Second, the SLM setting was modified in the same commit as a group of other
settings, so I have taken just the changes relating to SLM.
Follow up to #56961:
We can be a little more efficient than serializing on the IO loop by serializing
only when we flush to a channel. This has the advantage that we don't serialize a long
queue of messages for a channel that isn't writable for a long period of time (unstable
network, actually writing large volumes of data, etc.).
Also, this further reduces the time for which we hold on to the write buffer for a message,
making allocations caused by an empty page cache recycler pool less likely.
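In spirit (a minimal sketch with hypothetical types, not the actual transport classes):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: messages queue up unserialized; wire bytes are produced only at the
// moment we can actually write them, so a stalled channel never accumulates a
// backlog of serialized buffers in memory.
class DeferredSerializationOutbound {
    interface OutboundMessage {
        byte[] serialize(); // hypothetical: builds the wire bytes on demand
    }

    interface WritableChannel {
        boolean isWritable();
        void write(byte[] bytes);
    }

    private final Queue<OutboundMessage> pending = new ArrayDeque<>();

    void enqueue(OutboundMessage message) {
        pending.add(message); // no serialization yet
    }

    void flush(WritableChannel channel) {
        while (channel.isWritable() && pending.isEmpty() == false) {
            // Serialize at flush time, immediately before the write, so the
            // buffer is held for the shortest possible window.
            channel.write(pending.poll().serialize());
        }
    }
}
```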
* Remove duplicate ssl setup in sql/qa projects
* Fix enforcement of task instances
* Use static data for cert generation
* Move ssl testing logic into a plugin
* Document test cert creation
Today `GET _cluster/health?wait_for_events=...&timeout=...` will wait
indefinitely for the master to process the pending cluster health task,
ignoring the specified timeout. This could take a very long time if the master
is overloaded. This commit fixes this by adding a timeout to the pending
cluster health task.
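For illustration, the kind of request affected (high-level REST client; the HTTP equivalent is `GET _cluster/health?wait_for_events=languid&timeout=30s`):

```java
import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.unit.TimeValue;

// The timeout now also bounds how long we wait for the master to process the
// pending cluster health task, instead of applying only to the later wait
// conditions.
ClusterHealthRequest request = new ClusterHealthRequest()
    .waitForEvents(Priority.LANGUID)
    .timeout(TimeValue.timeValueSeconds(30));
```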
The create index action name (`indices:admin/create`) can no longer be used to grant the privilege to auto-create indices; the `create_index` builtin privilege should be used instead.
Relates to #55858
Co-authored-by: Jake Landis <jake.landis@elastic.co>
This PR replaces the marker interface with the method
FieldMapper#parsesArrayValue. I find this cleaner and it will help with the
fields retrieval work (#55363).
The refactor also ensures that only field mappers can declare they parse array
values. Previously other types like ObjectMapper could implement the marker
interface and be passed array values, which doesn't make sense.
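Roughly (a simplified sketch of the new hook):

```java
// Sketch: instead of a marker interface, array handling is an overridable
// method that only FieldMapper subclasses can opt into.
public abstract class FieldMapper /* extends Mapper */ {
    /**
     * Whether this mapper consumes whole arrays from the document parser
     * rather than being handed one value at a time. Defaults to false;
     * mappers like geo_point override this to return true.
     */
    public boolean parsesArrayValue() {
        return false;
    }
}
```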
The test for `auto_date_histogram` was trying to round `Long.MAX_VALUE`
if there were 0 buckets. That doesn't work.
Also, this replaces all of the class variables created to make
consistent random results when testing `InternalAutoDateHistogram` with
the newer `randomResultsToReduce`, which is a little simpler to
understand.
This PR adds a section to the new 'run a search' reference that explains
the options for returning fields. Previously, each option was only listed as a
separate request parameter, and it was hard to know what was available.
Add `TRIM` function which combines the functionality of both
`LTRIM` and `RTRIM` by stripping both leading and trailing
whitespace.
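For example, `TRIM('   foo   ')` returns `'foo'`, the same result as `LTRIM(RTRIM('   foo   '))`.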
Refers to #41195
(cherry picked from commit 6c86c919e12f0c4cb5e39d129aa65ab3e274268f)
The test failed when it was running with 4 replicas and 3 indexing
threads. The recovering replicas can prevent the global checkpoint from
advancing. This commit increases the timeout to 60 seconds for this
suite and for the check that there are no in-flight requests.
Closes #57204
We test expected TLS failures by catching SSLException, but other
security providers (i.e. BCFIPS) might throw a different one. In
this case, BCFIPS throws `org.bouncycastle.tls.TlsFatalAlert`.
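A sketch of the relaxed assertion (the `client`/`request` here stand in for the surrounding test code):

```java
import javax.net.ssl.SSLException;

import static org.hamcrest.Matchers.anyOf;
import static org.hamcrest.Matchers.instanceOf;

// Accept either exception type: the JDK provider throws SSLException while
// BCFIPS surfaces the handshake failure as a TlsFatalAlert.
Exception e = expectThrows(Exception.class, () -> client.performRequest(request));
assertThat(e, anyOf(
    instanceOf(SSLException.class),
    instanceOf(org.bouncycastle.tls.TlsFatalAlert.class)));
```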
Pulls the way that the `ParentJoinAggregator` collects global ordinals
into a strategy object so it is a little simpler to reason about and
it'll be simpler to save memory by removing `asMultiBucketAggregator` in
the future.
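Sketched, the strategy boundary looks something like this (names approximate):

```java
import org.elasticsearch.common.lease.Releasable;

// Sketch: the aggregator delegates "remember this global ordinal for this
// bucket" and "was it collected?" to a pluggable strategy, so the backing
// data structure can change without touching the collection logic.
interface CollectionStrategy extends Releasable {
    void add(long owningBucketOrd, long globalOrdinal);
    boolean exists(long owningBucketOrd, long globalOrdinal);
}
```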
Relates to #56487
SigTerms cannot run on fields that are not searchable, and SigText
cannot run on fields that do not have analyzers. Both of these
situations fail today with an esoteric exception, so this just formalizes
the constraint by throwing an IllegalArgumentException up front.
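Something like this, up front in the aggregator factory (simplified):

```java
// Sketch of the fail-fast validation with a clear message, replacing the
// esoteric failure deep inside collection.
if (fieldType.isSearchable() == false) {
    throw new IllegalArgumentException("Field [" + fieldType.name()
        + "] is not searchable and cannot be used with [significant_terms]");
}
if (fieldType.indexAnalyzer() == null) {
    throw new IllegalArgumentException("Field [" + fieldType.name()
        + "] has no analyzer and cannot be used with [significant_text]");
}
```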
In practice, the only affected field seems to be the `binary` field,
which is neither searchable nor has a default analyzer (e.g. even numeric
and keyword fields have a default analyzer despite not being tokenized).
Adds supported-type tests, and makes some changes to the test itself
to allow testing sigtext (indexing _source).
Also a few tweaks to the test to avoid bad randomization (negative
numbers, etc).
This commit highlights the ability for geo_point fields to be
used in geo_shape queries. It also adds an explicit geo_point
example in the geo_shape query documentation.
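For instance, with the Java client (the docs example itself is JSON; the field name is illustrative):

```java
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.geometry.Rectangle;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// A geo_shape query targeting a geo_point field: match documents whose point
// falls inside the bounding box (minX, maxX, maxY, minY).
QueryBuilder query = QueryBuilders
    .geoShapeQuery("my_location", new Rectangle(-74.1, -73.9, 40.8, 40.7))
    .relation(ShapeRelation.INTERSECTS);
```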
Closes #56927.
When the `terms` agg runs against strings and uses global ordinals it
has an optimization when it collects segments that only ever have a
single value for the particular string. This is *very* common. But I
broke it in #57241. This fixes that optimization and adds `debug`
information that you can use to see how often we collect segments of
each type. And adds a test to make sure that I don't break the
optimization again.
We also had a specialization for when there isn't a filter on the terms
to aggregate. I had removed that specialization in #57241, which resulted
in some slowdown as well. This adds it back, but in a clearer way.
And, hopefully, a way that is marginally faster when there *is* a
filter.
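The debug counters surface through the search profiler; roughly (field names approximate):

```java
import java.util.function.BiConsumer;

// Sketch: count how each segment was collected and expose the totals via the
// profile API's aggregation debug output.
@Override
public void collectDebugInfo(BiConsumer<String, Object> add) {
    super.collectDebugInfo(add);
    add.accept("segments_with_single_valued_ords", segmentsWithSingleValuedOrds);
    add.accept("segments_with_multi_valued_ords", segmentsWithMultiValuedOrds);
}
```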
Closes #57407