This commit adjusts the indentation in the CLI scripts to give a clear
visual indication that the line being indented is a continuation of the
previous line.
SSLTrustRestrictionsTests updates the restrictions YML file during the test run to change the set of restrictions. This update was small, but it was not atomic.
If the YML file is reloaded while it is empty or invalid, all SSL certificates are considered invalid (until the file is reloaded again), which could break the sniffing/administrative client that runs underneath the tests.
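A minimal sketch of one way to make such an update atomic, using plain
JDK file APIs (the helper below is illustrative, not the actual test
code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative helper: write the new restrictions to a temp file, then
// atomically move it over the old one so a concurrent reload never
// observes an empty or half-written file.
class AtomicYamlWrite {
    static void replace(Path restrictionsFile, String newYaml) throws IOException {
        Path tmp = Files.createTempFile(restrictionsFile.getParent(), "restrictions", ".yml");
        Files.write(tmp, newYaml.getBytes(StandardCharsets.UTF_8));
        // ATOMIC_MOVE replaces the destination in a single file system
        // operation on platforms that support it.
        Files.move(tmp, restrictionsFile, StandardCopyOption.ATOMIC_MOVE);
    }
}
```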
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell out to a common script, elasticsearch-cli. This approach is
fine in Bash, where it is easy to tear arguments apart, but it doesn't
work so well on Windows, where quoting is insane. To avoid having to
tear the arguments apart to separate the first argument to
elasticsearch-cli from the remaining arguments, we instead pass the
main class via an environment variable and pass the remaining arguments
straight through. This lets us avoid awful quoting issues on Windows.
This is the Windows side of that effort; the Bash side was in a
previous commit.
A previous refactoring of the CLI scripts migrated all of the CLI tools
to shell out to a common script, elasticsearch-cli. This approach is
fine in Bash, where it is easy to tear arguments apart, but it doesn't
work so well on Windows, where quoting is insane. To avoid having to
tear the arguments apart to separate the first argument to
elasticsearch-cli from the remaining arguments, we instead pass the
main class via an environment variable and pass the remaining arguments
straight through. This lets us avoid awful quoting issues on Windows.
This is the non-Windows side of that effort; the Windows side will be
in a follow-up.
This is related to #28898. This commit adds the acceptor thread name
to the method that checks whether a thread is a transport thread.
Additionally,
it modifies the nio http transport to use the same worker name as the
netty4 http server transport.
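A minimal sketch of the kind of check involved; the prefix constants
here are illustrative assumptions, not the actual Elasticsearch names:

```java
// Illustrative sketch: a thread counts as a transport thread if its
// name carries a known worker or acceptor prefix. The fix amounts to
// also accepting the acceptor prefix in this check.
final class TransportThreadCheck {
    static final String WORKER_PREFIX = "transport_worker";     // assumed name
    static final String ACCEPTOR_PREFIX = "transport_acceptor"; // assumed name

    static boolean isTransportThread(Thread thread) {
        String name = thread.getName();
        return name.contains(WORKER_PREFIX) || name.contains(ACCEPTOR_PREFIX);
    }
}
```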
With #30490 we have introduced a new way to provide request options
whenever sending a request using the high-level REST client. Before,
you could provide headers as the trailing varargs argument of each API
method; now you can provide `RequestOptions`, which in the future will
allow more options to be specified per request.
This commit deprecates all of the client methods that accept a `Header`
varargs argument in favour of new methods that accept `RequestOptions`
instead. For some APIs we skip deprecation entirely, given that they
have not been released since they were added; in that case we can just
move them to the new methods.
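A sketch of the before/after for callers, using the `index` API and a
bearer token header purely for illustration:

```java
import java.io.IOException;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class RequestOptionsMigration {
    static IndexResponse index(RestHighLevelClient client, IndexRequest request) throws IOException {
        // Deprecated style: headers as trailing varargs, e.g.
        // client.index(request, new BasicHeader("Authorization", "Bearer ..."));

        // New style: per-request options.
        RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
        options.addHeader("Authorization", "Bearer ..."); // illustrative header
        return client.index(request, options.build());
    }
}
```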
This is related to #27260. This commit combines the AcceptingSelector
and SocketSelector classes into a single NioSelector. This change
allows the same selector to handle both server and socket channels. This
is valuable as we do not necessarily want a dedicated thread running for
accepting channels.
This commit also removes the configuration for dedicated accepting
selectors from the normal transport class. The accepting workload for
new node connections is likely low, meaning that there is no need to
dedicate a thread to this process.
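A minimal sketch in plain `java.nio` (not the Elasticsearch classes)
of a single selector servicing both the accepting server channel and
the accepted socket channels; the port is an arbitrary example:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

class SingleSelectorLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9300));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // New connection: register it with the *same* selector,
                    // so no dedicated accepting thread is needed.
                    SocketChannel socket = server.accept();
                    if (socket != null) {
                        socket.configureBlocking(false);
                        socket.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    // Handle reads on previously accepted channels here.
                }
            }
        }
    }
}
```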
I accidentally pushed a test that `assertBusy`s for a whole hour. I
was testing something and forgot to revert my local hack, but caught it
on backport. This removes it.
This is much more realistic and can find more issues. This causes the
"mixed cluster" tests to be run twice so I had to fix the tests to work
in that case. In most cases I did as little as possible to get them
working but in a few cases I went a little beyond that to make them
easier for me to debug while getting them to work. My test changes:
1. Remove the "basic indexing" tests and replace them with a copy of
the tests used in OSS. We have no way of sharing code between these two
projects, so for now I copy.
2. Skip a few tests in the "one third" upgraded scenario:
* creating a scroll to be reused when the cluster is fully upgraded
* creating some ml data to be used when the cluster is fully upgraded
3. Drop many "assert yellow and that the cluster has two nodes"
assertions. These assertions duplicate those made by the wait condition
and they fail now that we have three nodes.
4. Switch many "assert green and that the cluster has two nodes"
assertions to three nodes. These assertions are distinct from the wait
condition and, while I imagine they aren't required in all cases, now
is not the time to find that out. Thus, I made them work.
5. Rework the index audit trail test so it is more obvious that it is
the same test expecting different numbers based on the shape of the
cluster. The conditions for which number are expected are fairly
complex because the index audit trail is shut down until the template
for it is upgraded and the template is upgraded when a master node is
elected that has the new version of the software.
6. Add some more information to debug the index audit trail test because
it helped me figure out what was going on.
I also dropped the `waitCondition` from the `rolling-upgrade-basic`
tests because it wasn't needed.
Closes #25336
Currently the engine is initialized with a hardcoded 256MB RAM buffer.
Elasticsearch may never use more than that for a given shard;
`IndexingMemoryController` only has the power to flush segments to disk
earlier in case multiple shards are actively indexing and use too much
memory.
While this amount of memory is enough for an index with few fields,
and larger RAM buffers are not expected to improve indexing speed, it
might actually be too little for an index that has many fields.
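For context, the knob in question is Lucene's `IndexWriter` RAM
buffer; a hedged sketch of how such a limit is applied, with the
argument standing in for the previously hardcoded 256MB:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;

class RamBufferSketch {
    static IndexWriterConfig configFor(double indexBufferMb) {
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        // Caps the per-writer RAM buffer before segments are flushed.
        // IndexingMemoryController can still force earlier flushes when
        // many shards are actively indexing at once.
        config.setRAMBufferSizeMB(indexBufferMb);
        return config;
    }
}
```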
Kudos to @bleskes for finding it out when looking into a user who was reporting
a **much** slower indexing speed when upgrading from 2.x to 5.6 with an index
that has about 20,000 fields.
In #19749 an extra check was added before writing each blob to ensure
that we would not be overwriting an existing blob. Due to S3's weak
consistency model, this check was best effort. To make matters worse,
however, it resulted in a HEAD request being issued before every PUT,
in particular also when PUTting a new object. The approach taken in
#19749 worsened our
consistency guarantees for follow-up snapshot actions, as it made it less likely for new files that
had been written to be available for reads.
This commit therefore removes this extra check. Due to the weak
consistency model, this check was a best-effort thing anyway, and
there's currently no way to prevent accidental overwrites on S3.
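A hedged sketch of the removed pattern, with a simplified stand-in for
the BlobContainer interface rather than the actual class:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.FileAlreadyExistsException;

// Illustrative shape only: an existence check (a HEAD request on S3)
// before each PUT. Another writer can still PUT between the check and
// the write, so the check is best effort at best.
interface BlobStoreSketch {
    boolean blobExists(String name) throws IOException;                        // HEAD
    void writeBlob(String name, InputStream in, long size) throws IOException; // PUT

    default void writeBlobChecked(String name, InputStream in, long size) throws IOException {
        if (blobExists(name)) { // the extra request removed by this commit
            throw new FileAlreadyExistsException(name);
        }
        writeBlob(name, in, size);
    }
}
```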
The native realm's usage stats were previously pulled from the cache,
which only contains the number of users that had authenticated in the
past 20 minutes. This commit changes this so that we pull the current
value from the security index by executing a search request. In order
to support this, the usage stats for realms are now asynchronous, so
that we do not block while waiting for the search to complete.
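A hedged sketch of the idea for the native realm; the index name,
query, and listener plumbing are illustrative, not the actual
implementation:

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

class NativeUsersStatsSketch {
    // Count the users stored in the security index with a search
    // instead of reading the authentication cache.
    static void userCount(Client client, ActionListener<Long> listener) {
        client.prepareSearch(".security")                      // assumed index name
            .setQuery(QueryBuilders.termQuery("type", "user")) // assumed doc layout
            .setSize(0)                                        // only the hit count is needed
            .execute(ActionListener.wrap(
                response -> listener.onResponse(response.getHits().getTotalHits()),
                listener::onFailure));
    }
}
```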
We currently have a specific REST action to retrieve the mappings of all indices and types, which internally used the get index API. This no longer seems to be required, as the existing RestGetMappingAction can just as well handle requests with no indices and types specified.
This commit removes the RestGetAllMappingsAction in favour of using RestGetMappingAction also for requests that don't specify indices or types.
The index audit trail allows the template index settings to be
overridden with settings specified in the conf file.
A bug manifests when such conf file settings are specified for
templates that need to be upgraded. The bug is an endless upgrade loop,
because the upgrade, although successful, is not recognized as such by
the upgrade service.
Move the `fingerprint`, `pattern` and `standard_html_strip` analyzers
to the analysis-common module (both AnalysisProvider and
PreBuiltAnalyzerProvider).
Changed PreBuiltAnalyzerProviderFactory to extend from
PreConfiguredAnalysisComponent and made sure that predefined analyzers
are always instantiated with the current ES version; if an instance is
requested for a different version then we delegate to PreBuiltCache.
This is similar to the behaviour that exists today in
AnalysisRegistry.PreBuiltAnalysis and PreBuiltAnalyzerProviderFactory.
(#31095)
Relates to #23658
This commit adds a check that any class in X-Pack that is a
feature-aware custom also implements the appropriate mix-in interface
in X-Pack. These interfaces provide a default implementation of
FeatureAware#getRequiredFeature that returns that x-pack is the
required feature. Implementing such an interface gives a consistent
way for X-Pack feature-aware customs to return the appropriate
required feature, and this check enforces that all of them do so.
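A hedged sketch of the mix-in shape, with simplified stand-ins for the
actual X-Pack types:

```java
import java.util.Optional;

// Simplified stand-in for the real interface.
interface FeatureAware {
    Optional<String> getRequiredFeature();
}

// Mix-in: every X-Pack custom implementing this gets the same
// required-feature answer for free via the default method.
interface XPackFeatureAware extends FeatureAware {
    @Override
    default Optional<String> getRequiredFeature() {
        return Optional.of("x-pack");
    }
}
```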
We should not allow the user to configure index patterns that also
match the index where the rollups are stored.
For example, it is quite natural for a user to specify `metricbeat-*`
as the index pattern, and then store the rollups in `metricbeat-rolled`.
This will start throwing errors as soon as the rollup index is created
because the indexer will try to search it.
Note: this does not prevent the user from matching against existing
rollup indices. That should be prevented by the field-level validation
during job creation.
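A hedged sketch of the validation idea; `simpleGlobMatch` is a
stand-in for whatever pattern matching the job validation actually
uses:

```java
import java.util.regex.Pattern;

final class RollupPatternValidation {
    // Reject a job whose index pattern would match its own rollup index,
    // e.g. pattern "metricbeat-*" with rollup index "metricbeat-rolled".
    static void validate(String indexPattern, String rollupIndex) {
        if (simpleGlobMatch(indexPattern, rollupIndex)) {
            throw new IllegalArgumentException("Rollup index [" + rollupIndex
                + "] must not match the index pattern [" + indexPattern + "]");
        }
    }

    // Convert a *-glob into a regex and test the candidate name.
    static boolean simpleGlobMatch(String pattern, String value) {
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return value.matches(regex);
    }
}
```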
This is related to #28898. This commit adds CORS support to the nio http
transport. Most of the work is copied directly from the netty module
implementation. Additionally, this commit adds tests for the nio http
channel.
When the last indexing operation is completed, we will fire a global
checkpoint sync. Since a global checkpoint sync request is a replication
request, it will acquire an index shard permit on the primary when
executing. If this happens at the same time that we are issuing the
synced-flush, the synced-flush request will fail as it thinks there are
in-flight operations. We can avoid such a situation by retrying the
synced-flush if the current request fails due to ongoing operations on
the primary.
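A hedged sketch of the retry; the types here are hypothetical
stand-ins, not the actual Elasticsearch classes:

```java
final class SyncedFlushRetry {
    interface SyncedFlushCall {
        // Hypothetical: true on success, false when the flush failed
        // because in-flight operations were observed on the primary.
        boolean run();
    }

    static boolean flushWithOneRetry(SyncedFlushCall call) {
        if (call.run()) {
            return true;
        }
        // The failure may simply be the global checkpoint sync briefly
        // holding a permit on the primary; try once more after it completes.
        return call.run();
    }
}
```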
Closes #29392
This commit adds a new writeBlobAtomic() method to the BlobContainer
interface that can be implemented by repository implementations which
support atomic write operations.
When the BlobContainer implementation does not provide a specific
implementation of writeBlobAtomic(), the writeBlob() method is used.
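A hedged sketch of the resulting interface shape, simplified from the
actual BlobContainer:

```java
import java.io.IOException;
import java.io.InputStream;

interface BlobContainerSketch {
    void writeBlob(String blobName, InputStream inputStream, long blobSize) throws IOException;

    // Default fallback: repositories without atomic write support fall
    // back to the plain writeBlob, with no extra atomicity guarantee.
    default void writeBlobAtomic(String blobName, InputStream inputStream, long blobSize) throws IOException {
        writeBlob(blobName, inputStream, blobSize);
    }
}
```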
Related to #30680
This will be necessary for the `docvalue_fields` option to work correctly once
we use the field's doc-value format to format doc-value fields. Binary values
are formatted as base64-encoded strings.
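A minimal sketch of the formatting in question:

```java
import java.util.Base64;

final class BinaryDocValueFormat {
    // Binary doc values rendered as base64-encoded strings.
    static String format(byte[] value) {
        return Base64.getEncoder().encodeToString(value);
    }
}
```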
In spite of the existing caching, I have seen a number of nodes' hot
threads where one thread had been spending all its CPU on computing the
size of a directory. I am proposing to move the computation of the size
of the directory to `StoreDirectory` in order to skip recomputing it if
no changes have been made. This should help users that have read-only
indices, which is very common for time-based indices.
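A hedged sketch of the caching idea; the class below is illustrative,
not the actual `StoreDirectory` implementation:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexOutput;

// Cache the computed size and only recompute after a mutation, so
// read-only indices never pay for repeated directory scans.
class SizeCachingDirectory extends FilterDirectory {
    private final AtomicLong cachedSize = new AtomicLong(-1);

    SizeCachingDirectory(Directory in) {
        super(in);
    }

    long estimateSizeInBytes() throws IOException {
        long size = cachedSize.get();
        if (size == -1) {
            size = 0;
            for (String file : listAll()) {
                size += fileLength(file);
            }
            cachedSize.set(size);
        }
        return size;
    }

    @Override
    public IndexOutput createOutput(String name, IOContext context) throws IOException {
        cachedSize.set(-1); // invalidate on writes
        return super.createOutput(name, context);
    }

    @Override
    public void deleteFile(String name) throws IOException {
        cachedSize.set(-1); // invalidate on deletes
        super.deleteFile(name);
    }
}
```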