This commit adds a new setting `cluster.persistent_tasks.allocation.enable`
that can be used to enable or disable the allocation of persistent tasks.
The setting accepts the values `all` (default) or `none`. When set to
`none`, persistent tasks that are created (or that must be reassigned)
won't be assigned to a node but will remain in the cluster state with
no "executor node" and a reason describing why they are not assigned:
```
"assignment" : {
"executor_node" : null,
"explanation" : "persistent task [foo/bar] cannot be assigned [no
persistent task assignments are allowed due to cluster settings]"
}
```
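For reference, a minimal sketch of how the setting might be toggled from Java code, assuming the usual `ClusterUpdateSettingsRequest` and `Settings` builder APIs (the same update can be sent through the cluster settings REST endpoint):
```
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.common.settings.Settings;

// Illustrative only: build a cluster settings update that disables (or
// re-enables) the allocation of persistent tasks.
static ClusterUpdateSettingsRequest persistentTaskAllocation(String value) {
    ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
    request.persistentSettings(Settings.builder()
        .put("cluster.persistent_tasks.allocation.enable", value) // "all" or "none"
        .build());
    return request;
}
```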
We discussed and agreed to include the synced-flush change in 6.3.0+ but
not in 5.6.9. We will re-evaluate the urgency and importance of the
issue and then decide which versions the change should be included in.
This commit reenables the Javadoc tasks on JDK 10. To reenable these
tasks, we have to work around a bug in JDK 10 which trips on some deeply
nested anonymous classes that we have in the codebase (these classes are
fine as-is; this is not a problem with the code). The workaround is to
remove the compiled classes from the classpath. This has been reported
upstream and the workaround was suggested there (see the code comment).
The assumption here is that we will no longer be making a release from
the 6.1 branch. Since we assume that all versions on this branch are
actually released, we do not want to leave behind any versions that
would require a snapshot build. We do have a test that verifies that all
released versions are present here, so if another release is performed
from the 6.1 branch, that test will fail and we will know to add the
version constant at that time.
This will reject mapping updates to the `_default_` mapping with 7.x indices
and still emit a deprecation warning with 6.x indices.
Relates #15613
Supersedes #28248
I misunderstood how the bwc versions work. If we backport to 5.x, we
need to backport to all supported 6.*. This commit corrects the BWC
versions for PreSyncedFlushResponse.
Relates #29103
* Remove BytesArray and BytesReference usage from XContentFactory
This removes the usage of `BytesArray` and `BytesReference` from
`XContentFactory`. Instead, a regular `byte[]` should be passed. To assist with
this, a helper has been added to `XContentHelper` that will preserve the offset
and length from the underlying BytesReference.
This is part of ongoing work to separate the XContent parts from ES so they can
be factored into their own jar.
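As a rough sketch of the kind of helper described above (the method name and the parser overload used here are assumptions, not the exact code from this change), the idea is to hand the parser the backing array together with its offset and length instead of copying it:
```
// Hypothetical helper: create a parser from a BytesReference while preserving
// the offset and length of the backing array, so only a plain byte[] ever
// reaches XContentFactory. The createParser overload is assumed for illustration.
public static XContentParser createParser(NamedXContentRegistry registry,
                                          BytesReference bytes,
                                          XContentType xContentType) throws IOException {
    BytesRef ref = bytes.toBytesRef(); // exposes the array together with offset and length
    return XContentFactory.xContent(xContentType)
        .createParser(registry, ref.bytes, ref.offset, ref.length);
}
```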
Relates to #28504
* Add pluggable XContentBuilder writers and human readable writers
This adds the ability to use SPI to plug in writers for XContentBuilder. By
implementing the XContentBuilderProvider class we can allow Elasticsearch to
plug in different ways to encode types to JSON.
An important caveat: we should always try to have the class implement
`ToXContentFragment` first. However, in the case of classes from our
dependencies (think Joda classes or Lucene classes) we need a way to specify
writers for these classes.
This also makes the human-readable field writers generic and pluggable, so that
we no longer need to tie XContentBuilder to things like `TimeValue` and
`ByteSizeValue`. As part of this change, all the `TimeValue` human-readable
fields move to the new `humanReadableField` method. A future commit will
move the `ByteSizeValue` calls over to this method.
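As a hedged sketch only (the actual `XContentBuilderProvider` interface is not shown in this message, so the method name and `Writer` type below are assumptions), a provider discovered via SPI could map a dependency class such as Joda's `DateTime` to a writer:
```
import java.util.Collections;
import java.util.Map;

// Hypothetical provider registered via SPI; the real interface may differ.
public class JodaTimeXContentProvider implements XContentBuilderProvider {

    @Override
    public Map<Class<?>, XContentBuilder.Writer> getXContentWriters() {
        // Teach XContentBuilder to render Joda DateTime values as ISO-8601 strings.
        return Collections.singletonMap(org.joda.time.DateTime.class,
                (builder, value) -> builder.value(value.toString()));
    }
}
```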
Relates to #28504
We need to configure the Java 9 checkstyle task to depend on the
checkstyle configuration task or the task could run before the
checkstyle conf has been copied, leading to runtime failures. We have to
do this after projects have been evaluated because the configuration of
these tasks can occur before the Java 9 source set has been added to a
project.
This commit fixes the directory name bundled plugins are added under
within a meta plugin to be the configured name of the bundled plugin,
instead of the project name.
This commit creates the copyRestSpec task for rest integ tests
immediately on creation of the RestIntegTestTask instead of lazily in
afterEvaluate. This allows other projects to add additional rest specs
to be copied, instead of needing to create another parallel copy task.
`$_path` is used by documentation tests to ignore a value from a
response, for example:
```
[source,js]
----
{
  "count": 1,
  "datafeeds": [
    {
      "datafeed_id": "datafeed-total-requests",
      "state": "started",
      "node": {
        ...
        "attributes": {
          "ml.machine_memory": "17179869184",
          "ml.max_open_jobs": "20",
          "ml.enabled": "true"
        }
      },
      "assignment_explanation": ""
    }
  ]
}
----
// TESTRESPONSE[s/"17179869184"/$body.$_path/]
```
That example shows `17179869184` in the compiled docs but when it runs
the tests generated by that doc it ignores `17179869184` and asserts
instead that there is a value in that field. This is required because we
can't predict things like "how many milliseconds will this take?" and
"how much memory will this take?".
Before this change it was impossible to use `$_path` when any component
of the path contained a `.`. This fixes the `$_path` evaluator to
properly escape `.`.
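For illustration only (this is not the actual evaluator code), escaping a key that contains a dot before appending it to the path could look like this:
```
// Illustrative sketch: escape '.' inside a single key such as
// "ml.machine_memory" so it is not mistaken for a path separator when the
// $_path substitution is built.
static String appendComponent(String path, String key) {
    String escaped = key.replace(".", "\\.");
    return path.isEmpty() ? escaped : path + "." + escaped;
}
```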
Closes #28770
I did a little digging. It looks like IOException is thrown when the other
side closes its connection while we're waiting on our buffer to fill up. We
totally expect that in this test. It feels to me like we should throw a
`ConnectionClosedException` but upstream does not agree:
https://issues.apache.org/jira/browse/HTTPASYNC-134
While we *could* catch the exception and transform it ourselves, that
seems like a bigger change than is merited at this point.
Closes #29136
Currently we store the indices specified in the request URL together with all
the other ranking evaluation specification in RankEvalSpec. This is not ideal
since e.g. the indices are not rendered to xContent and so cannot be parsed
back. Instead we should keep them in RankEvalRequest.
This removes the `Text` and `GeoPoint` special handling from `XContentBuilder`.
Instead, these classes now implement `ToXContentFragment` and render themselves
accordingly.
This allows us to further decouple XContentBuilder from Elasticsearch-specific
classes so it can be factored into a standalone lib at a later time.
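As a minimal sketch of the pattern (not the actual `GeoPoint` implementation; the class below is made up for illustration), a type that renders itself as a fragment looks like this:
```
// Illustrative only: the class knows how to render itself, so XContentBuilder
// no longer needs a special case for it.
public class Point implements ToXContentFragment {
    private final double lat;
    private final double lon;

    public Point(double lat, double lon) {
        this.lat = lat;
        this.lon = lon;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        // Rendered as "lat,lon", one of the ways a geo point can be expressed.
        return builder.value(lat + "," + lon);
    }
}
```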
Relates to #28504
This modifies xcontent serialization of Exceptions to contain suppressed
exceptions. If there are any suppressed exceptions they are included in
the exception response by default. The reasoning here is that they are
fairly rare but when they exist they almost always add extra useful
information. Take, for example, the response when you specify two broken
ingest pipelines:
```
{
  "error" : {
    "root_cause" : ...snip...
    "type" : "parse_exception",
    "reason" : "[field] required property is missing",
    "header" : {
      "processor_type" : "set",
      "property_name" : "field"
    },
    "suppressed" : [
      {
        "type" : "parse_exception",
        "reason" : "[field] required property is missing",
        "header" : {
          "processor_type" : "convert",
          "property_name" : "field"
        }
      }
    ]
  },
  "status" : 400
}
```
Moreover, suppressed exceptions that come from 500-level errors should
give us more useful debugging information.
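As a hedged sketch of the rendering idea (not the actual `ElasticsearchException` code), suppressed exceptions can be pulled from the throwable and appended as a `suppressed` array next to the main error fields:
```
// Illustrative only: append any suppressed exceptions as a "suppressed" array.
static void suppressedToXContent(XContentBuilder builder, Throwable t) throws IOException {
    Throwable[] suppressed = t.getSuppressed();
    if (suppressed.length > 0) {
        builder.startArray("suppressed");
        for (Throwable s : suppressed) {
            builder.startObject();
            builder.field("type", s.getClass().getSimpleName());
            builder.field("reason", s.getMessage());
            builder.endObject();
        }
        builder.endArray();
    }
}
```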
Closes #23392
After elastic/elasticsearch#29109, the `needsReassignment` method has
been moved to the PersistentTasksClusterService. This commit fixes
some compilation errors in tests that I introduced.
This commit consists of small code cleanups and refactorings in the
persistent tasks framework. Most changes are in
PersistentTasksClusterService, where some methods have been renamed
or merged together, documentation has been added, and unused code has
been removed in order to improve the readability of the code.
Update allocation awareness docs
Today, the docs imply that if multiple attributes are specified, the
whole combination of values is considered as a single entity when
performing allocation. In fact, each attribute is considered separately. This
change fixes this discrepancy.
It also replaces the use of the term "awareness zone" with "zone or domain", and
reformats some paragraphs to the right width.
Fixes #29105
This is a follow up to a previous change which set the error file path
for the package distributions. The observation here is that we always
set the working directory of Elasticsearch to the root of the
installation (i.e., Elasticsearch home). Therefore, we can specify the
error file path relative to this directory and default it to the logs
directory, similar to the package distributions.
This is a follow up to a previous change which set the heap dump path
for the package distributions. The observation here is that we always
set the working directory of Elasticsearch to the root of the
installation (i.e., Elasticsearch home). Therefore, we can specify the
heap dump path relative to this directory and default it to the data
directory, similar to the package distributions.
The method Translog#getMinGenerationForSeqNo does not modify the current
translog but only reads from it; it should therefore acquire the readLock
instead of the writeLock.
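The pattern in a generic form (illustrative only; the real Translog uses its own ReleasableLock wrappers): read-only accessors take the read lock so they do not serialize concurrent readers, while mutating methods keep the write lock:
```
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only, not the actual Translog code.
class GenerationTracker {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private long minGeneration = 1;

    long minGenerationForSeqNo() {
        lock.readLock().lock(); // read access only, so readLock instead of writeLock
        try {
            return minGeneration;
        } finally {
            lock.readLock().unlock();
        }
    }

    void rollGeneration() {
        lock.writeLock().lock(); // mutation still requires the write lock
        try {
            minGeneration++;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```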
Adds SSLHandshakeException to the list of Exceptions that are
specifically rethrown from the async thread so its type is preserved.
This should make it easier to debug synchronous calls with SSL issues.
When upgrading via the RPM package, we can run into a problem where
the keystore fails to be created. This arises because the %post script
on RPM runs after the new package files are installed but before the
removal of the old package files. This means that the contents of the
lib folder can contain files from the old package and the new package
and thus running the create keystore tool can encounter JAR hell
issues and fail. To solve this, we move creating the keystore to the
%posttrans script which runs after the old package files are
removed. We only need to do this on the RPM package, so we add a
switch in the shared post-install script.
Today we report thread pool info using a common object. This means that
we use a shared set of terminology that is not consistent with the
terminology used to configure thread pools. This holds in particular
for the minimum and maximum number of threads in the thread pool where
we use the following terminology:
thread pool info | fixed | scaling
-----------------+-------+--------
min              | size  | core
max              | size  | max
This commit changes the display of thread pool info to be dependent on
the type of the thread pool so that we can align the terminology in the
output of thread pool info with the terminology used to configure a
thread pool.
Since #28676, a new engine can have more than one empty translog.
This caused #testShouldPeriodicallyFlush to fail because the test
assumes an engine has a single empty translog. This commit takes into
account the extra translog size of a new engine.
In the past the Low Level REST Client was super careful not to wrap
any exceptions that it throws from synchronous calls so that callers can
catch the exceptions and work with them. The trouble with that is that
the exceptions are originally thrown on the async thread pool and then
transferred back into the calling thread. That means that the stack trace of
the exception doesn't have the calling method which is *super* *ultra*
confusing.
This change always wraps exceptions transferred from the async thread
pool so that the stack trace of the thrown exception contains the
caller's stack. It tries to preserve the type of the thrown exception but
this is quite a fiddly thing to get right. We have to catch every type
of exception that we want to preserve, wrap with the same type and
rethrow. I've preserved the types of all exceptions that we had tests
mentioning but no other exceptions. The other exceptions are either
wrapped in `IOException` or `RuntimeException`.
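A rough sketch of the wrap-and-rethrow idea (not the actual RestClient code; the method name is made up): the wrapper is created on the calling thread, so its stack trace points at the caller, and the original async exception is attached as the cause:
```
import java.io.IOException;
import java.net.ConnectException;

// Illustrative only: preserve the original exception type where we care about
// it, otherwise fall back to IOException/RuntimeException wrappers.
static Exception wrapForCaller(Exception asyncException) {
    if (asyncException instanceof ConnectException) {
        // ConnectException(String) takes no cause, so attach it explicitly.
        ConnectException wrapped = new ConnectException(asyncException.getMessage());
        wrapped.initCause(asyncException);
        return wrapped;
    }
    if (asyncException instanceof IOException) {
        return new IOException(asyncException.getMessage(), asyncException);
    }
    if (asyncException instanceof RuntimeException) {
        return new RuntimeException(asyncException.getMessage(), asyncException);
    }
    return new RuntimeException("error while performing request", asyncException);
}
```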
Closes #28399