* Scripting: Add support for booleans in scripts
Since 2.0, booleans have been represented as numeric fields (longs).
However, this is odd in scripts, where you expect a comparison
against a boolean to work. While languages like Groovy will auto-convert
between booleans and longs, Painless does not.
This changes the doc values accessor for boolean fields in scripts to
return Boolean objects instead of Long objects (see the sketch below).
Closes #20949
* Make Booleans final and remove wrapping of `this` for getValues()
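As an illustration, a hypothetical Painless condition (the field name is made
up) that this change makes behave as expected:

```painless
// Before this change, doc['is_active'].value was a Long (0 or 1), so this
// comparison could never be true; it now compares Boolean values.
doc['is_active'].value == true
```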
The create request now requires that an ID be present.
Currently the clients hard-code a create method, but
we should just add a create REST spec so that this method
can be autogenerated.
This commit fixes two issues with the slow log docs:
- clarifies that these settings are per index
- updates index slow log configuration for Log4j 2
Relates #20976
This commit changes the current REST API parser to fail and throw an exception when a REST specification file contains a duplicated parameter, path, method, or path part.
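A generic sketch of the fail-fast idea (illustrative only, not the actual
parser code):

```java
import java.util.HashSet;
import java.util.Set;

class DuplicateCheck {
    // Remember every name seen in a section of the spec and throw on a repeat.
    static void checkUnique(Iterable<String> names, String section) {
        Set<String> seen = new HashSet<>();
        for (String name : names) {
            if (!seen.add(name)) {
                throw new IllegalArgumentException(
                        "Duplicate " + section + " [" + name + "] in REST spec");
            }
        }
    }
}
```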
This commit fixes an issue with the handling of the value "keep-alive"
on the Connection header in the Netty 4 HTTP implementation while
handling an HTTP 1.0 request. The issue was using the wrong equals
method to compare an AsciiString instance and a String instance (they
could never be equal). This commit fixes this to use the correct equals
method to compare for content equality.
This commit fixes an issue with the handling of the value "close" on the
Connection header in the Netty 4 HTTP implementation. The issue was
using the wrong equals method to compare an AsciiString instance and a
String instance (they could never be equal). This commit fixes this to
use the correct equals method to compare for content equality.
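Both fixes address the same mismatch. A minimal demonstration with Netty's
`AsciiString` (the header value here is just illustrative):

```java
import io.netty.util.AsciiString;

public class ConnectionHeaderCheck {
    public static void main(String[] args) {
        AsciiString value = new AsciiString("close");
        // Object equality also compares types, so this is always false:
        System.out.println(value.equals("close"));         // false
        // Content equality compares the characters, which is what we want:
        System.out.println(value.contentEquals("close"));  // true
    }
}
```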
Relates #20956
This is important to allow any test to use RandomQueryBuilder#createQuery(),
since some of the query builders used in this test check the length
of the types array and would otherwise throw an NPE if the test is not a
subclass of AbstractQueryTestCase.
Today when logging an unknown or invalid setting, the log message does
not contain the source. This means that if we are archiving such a
setting, we do not specify where the setting is from (an index, and
which index, or a persistent or transient cluster setting). This commit
provides such logging so that the end user can better understand the
consequences of the unknown or invalid setting.
Relates #20951
This commit fixes an issue with the configuration for the AwsSdkMetrics
logger; the issue is that the logging configuration had used underscores
instead of periods for the settings key (the perils of lenient settings
parsing).
Relates #20313
With this change, the index and delete methods in index shard share
code for indexing stats. This commit collapses the separate methods for
index and delete operations into a generic execute method for
performing engine write operations (see the sketch below). As an added
benefit, it cleans up the indexing operation listener interface,
making it simpler and more concise to use.
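A self-contained sketch of the shape of the refactoring (hypothetical names,
not the actual IndexShard code):

```java
import java.util.function.Consumer;

class ShardWritePath {
    interface Operation {}
    static class Index implements Operation {}
    static class Delete implements Operation {}

    // One generic write path: stats and listener bookkeeping live in one place.
    private <T extends Operation> void execute(T op, Consumer<T> engineCall) {
        long startNanos = System.nanoTime();
        // notify indexing operation listeners here (pre-operation)
        try {
            engineCall.accept(op);   // the actual engine write
        } finally {
            recordTook(System.nanoTime() - startNanos);   // shared stats code
            // notify indexing operation listeners here (post-operation)
        }
    }

    private void recordTook(long nanos) { /* update indexing stats */ }

    void index(Index op)   { execute(op, o -> { /* engine.index(o) */ }); }
    void delete(Delete op) { execute(op, o -> { /* engine.delete(o) */ }); }
}
```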
Currently, we treat all write operation exceptions as equal, but in reality
every write operation can cause either an environment failure (i.e. a failure
that should fail the engine, e.g. data corruption or a Lucene tragic event) or
an operation failure (i.e. a failure that is transient w.r.t. the operation,
e.g. a parsing exception).
This change bubbles up environment failures from the engine after failing the
engine, but captures transient operation failures as part of the operation so
they can be processed appropriately at the transport level (sketched below).
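A self-contained sketch, with made-up exception types, of the taxonomy
described above:

```java
class WriteFailureHandling {
    static class EnvironmentException extends RuntimeException {}  // e.g. data corruption
    static class OperationException extends RuntimeException {}    // e.g. a parsing error

    static String executeWrite(Runnable engineWrite) {
        try {
            engineWrite.run();
            return "ok";
        } catch (EnvironmentException e) {
            failEngine(e);          // corruption, Lucene tragic event: fail the engine...
            throw e;                // ...and bubble the failure up
        } catch (OperationException e) {
            return "failed: " + e;  // transient: record on the result, handle at transport level
        }
    }

    static void failEngine(Exception cause) { /* close the engine, notify listeners */ }
}
```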
Today we don't parse alias filters on the coordinating node; we only forward
the alias patterns to the executing node and resolve them late. This has several
problems: for example, requests that go through filtered aliases are never cached
if they use date math (such as `now-1d/d`), since the parsing happens very late in
the process, even without rewriting. The filters also used to be parsed on every
shard, while we can do it just once per index on the coordinating node.
Another nice side effect is that we are no longer prone to cluster-state updates
that change an alias: all nodes will execute the exact same alias filter since
the filters are parsed based on the same cluster state.
Today Elasticsearch limits the number of processors used in computing
thread counts to 32. This was from a time when Elasticsearch created
more threads than it does now and users would run into out of memory
errors. It appears the real cause of these out of memory errors was not
well understood (it's often due to ulimit settings) and so users were
left hitting these out of memory errors on boxes with high core
counts. Today Elasticsearch creates fewer threads (but still a lot) and
we have a bootstrap check in place to ensure that the relevant ulimit is
not too low.
There are still some caveats to having too many concurrent indexing
threads: it can lead to too many little segments, and it's not a magical
go-faster knob if indexing is already bottlenecked by disk. But this
limitation is artificial and surprising to users, and so it should be
removed.
This commit also increases the lower bound of the max processes ulimit,
to prepare for a world where Elasticsearch instances might be running
with more than the previous cap of 32 processors. With the current settings,
Elasticsearch wants to create roughly 576 + 25 * p / 2 threads, where p
is the number of processors. Add in roughly 7 * p / 8 threads for the GC
threads and a fudge factor, and 4096 should cover us pretty well up to
256 cores.
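A quick check of those figures at the top of the supported range:

```java
public class ThreadBudget {
    public static void main(String[] args) {
        int p = 256;                        // processors
        int esThreads = 576 + 25 * p / 2;   // 3776 threads Elasticsearch wants
        int gcThreads = 7 * p / 8;          // 224 GC threads
        System.out.println(esThreads + gcThreads);  // 4000, so 4096 leaves headroom
    }
}
```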
Relates #20874
Some people apparently never run tests when they change this file.
Nor do they read the comment right below the line they change, which
says the change should be made after all the existing entries.
It is important that folks understand that snapshot/restore isn't
for archiving. It is appropriate for backup and disaster recovery
but not for archival over long periods of time because of version
incompatibility.
Closes #20866
`AbstractSearchAsyncAction` has only been tested in integration tests.
The infrastructure is rather critical and should be tested at the
unit-test level. This change takes the first step.
This changes the CacheBuilder methods that are used to set expiration times to accept a
TimeValue instead of long. Accepting a long can lead to issues where an incorrect value is
passed in, as the time unit is not clearly identified. By using TimeValue, the caller no longer
needs to worry about the time unit used by the cache or builder.
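As a sketch of what a call site looks like after this change (the setter name
here is an assumption for illustration):

```java
import org.elasticsearch.common.cache.Cache;
import org.elasticsearch.common.cache.CacheBuilder;
import org.elasticsearch.common.unit.TimeValue;

class ExplicitExpiry {
    // The unit now travels with the value: a bare 5 can no longer be silently
    // interpreted as the wrong unit (seconds vs. minutes vs. nanos).
    Cache<String, String> cache = CacheBuilder.<String, String>builder()
        .setExpireAfterWrite(TimeValue.timeValueMinutes(5))   // assumed setter name
        .build();
}
```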