At present, we do not feel there is enough of a reason to shade the
low-level REST client. It caused problems with Commons Logging and IDEs
during the brief time it was used. We do not know exactly how many
users will need this, and decided that leaving shading out until we
gather more information is best. Users can still shade the jar
themselves. For information and feedback, see issue #26366.
Closes #26328
This reverts commit 3a20922046.
This reverts commit 2c271f0f22.
This reverts commit 9d10dbea39.
This reverts commit e816ef89a2.
We currently add the Apache license/notice for Elasticsearch to any
plugin that uses our ES plugin Gradle plugin. However, each plugin
should be able to use its own license. This commit adds licenseFile
and noticeFile properties to the root of projects using BuildPlugin;
these files are added to the jar files built for each such project.
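As a rough sketch of the mechanics, assuming the two properties are plain Gradle extra properties on the project (the property names come from this commit; the plugin class and defaults below are illustrative):
```java
import org.gradle.api.Plugin;
import org.gradle.api.Project;

// Illustrative only: expose licenseFile/noticeFile as extra properties that a
// plugin's own build can override with its own files before the jar is built.
public class LicensePropertiesPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // default to the Elasticsearch license/notice shipped with the build
        project.getExtensions().getExtraProperties()
               .set("licenseFile", project.getRootProject().file("LICENSE.txt"));
        project.getExtensions().getExtraProperties()
               .set("noticeFile", project.getRootProject().file("NOTICE.txt"));
    }
}
```
A plugin author would then point `licenseFile`/`noticeFile` at their own files so the jar task bundles those instead.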
In some cases our Windows builds fail due to long path names that arise
from a combination of long build job names plus long sub-project
names. While newer versions of Windows can handle long paths, invoking
batch scripts with paths longer than 260 characters via cmd.exe is
still problematic. This leads to failing integration tests because we
cannot run the commands to install plugins, create the keystore, and
start the node. This commit handles this by converting all paths used
to start an Elasticsearch node on Windows to short path names.
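A minimal sketch of such a conversion, using the classic cmd.exe `%~s` expansion to obtain a path's 8.3 short form (the helper below is illustrative, not the build's actual implementation):
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public final class ShortPaths {
    // Ask cmd.exe to expand the path to its 8.3 short form via %~s.
    static String toShortPath(String longPath) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "cmd.exe", "/c", "for %I in (\"" + longPath + "\") do @echo %~sI")
                .redirectErrorStream(true)
                .start();
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String shortPath = reader.readLine();
            p.waitFor();
            // fall back to the original path if expansion produced nothing
            return shortPath == null ? longPath : shortPath.trim();
        }
    }
}
```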
Relates #26365
This commit removes the keystore creation on Elasticsearch startup and
instead adds a plugin property which indicates that the plugin needs
the keystore to exist. It does still make sure that keystore.seed
exists on ES startup, but through an "upgrade" method that Bootstrap
calls when loading the keystore.
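The idea, as a standalone sketch (the `SecureStore` interface is an illustrative stand-in, not the real keystore API):
```java
import java.nio.file.Path;
import java.security.SecureRandom;

// Illustrative stand-in for the real keystore abstraction.
interface SecureStore {
    boolean hasSetting(String key);
    void setSetting(String key, byte[] value);
    void save(Path configDir) throws Exception;
}

final class KeystoreBootstrap {
    // Sketch: startup no longer creates the keystore wholesale; it only makes
    // sure keystore.seed is present, as part of an "upgrade" pass.
    static void upgrade(SecureStore keystore, Path configDir) throws Exception {
        if (keystore.hasSetting("keystore.seed") == false) {
            byte[] seed = new byte[16];
            new SecureRandom().nextBytes(seed);
            keystore.setSetting("keystore.seed", seed);
            keystore.save(configDir);
        }
    }
}
```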
Closes #26309
This commit makes the security code aware of the Java 9 FilePermission changes (see #21534) and allows us to remove the `jdk.io.permissionsUseCanonicalPath` system property.
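For context, a small demonstration of the behavior difference (this reflects my understanding of the JDK 9 default, where `FilePermission` compares literal paths instead of canonicalizing them; the paths are made up):
```java
import java.io.FilePermission;

public class FilePermissionDemo {
    public static void main(String[] args) {
        FilePermission granted = new FilePermission("/data/es/-", "read");

        // true: plain nested paths are implied with or without canonicalization
        FilePermission nested = new FilePermission("/data/es/nodes/0/index", "read");
        System.out.println(granted.implies(nested));

        // On JDK 9, with literal comparison, a symlinked alias of /data/es is
        // no longer implied, so the security code must grant permissions on
        // the real paths it actually touches.
        FilePermission viaSymlink = new FilePermission("/var/lib/es-link/nodes/0/index", "read");
        System.out.println(granted.implies(viaSymlink)); // false
    }
}
```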
The client sniffer depends on the low-level REST client, while the Java high-level REST client and the transport client depend on Elasticsearch itself. Javadocs are not that useful unless they link to the Elasticsearch classes in the latter case, and to the low-level REST client in the sniffer javadoc. This commit adds those links.
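Under the hood this is ordinary javadoc `-link` configuration; a hedged Gradle sketch in Java (the URL is a placeholder, not the real location of the published javadoc):
```java
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.javadoc.Javadoc;
import org.gradle.external.javadoc.StandardJavadocDocletOptions;

// Illustrative: point a project's javadoc at the docs of its dependency.
public class JavadocLinksPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().withType(Javadoc.class).configureEach(javadoc -> {
            StandardJavadocDocletOptions options =
                    (StandardJavadocDocletOptions) javadoc.getOptions();
            options.links("https://example.org/javadoc/rest-client"); // placeholder URL
        });
    }
}
```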
All of the snippets in our docs marked with `// TESTRESPONSE` are
checked against the response from Elasticsearch but, due to the
way they are implemented, they are actually parsed as YAML instead
of JSON. Luckily, all valid JSON is valid YAML! Unfortunately,
that means that invalid JSON has snuck into the examples!
This adds a step during the build to parse them as JSON and fail
the build if they don't parse.
But no! It isn't quite that simple. The displayed text of some of
these responses looks like:
```
{
  ...
  "aggregations": {
    "range": {
      "buckets": [
        {
          "to": 1.4436576E12,
          "to_as_string": "10-2015",
          "doc_count": 7,
          "key": "*-10-2015"
        },
        {
          "from": 1.4436576E12,
          "from_as_string": "10-2015",
          "doc_count": 0,
          "key": "10-2015-*"
        }
      ]
    }
  }
}
```
Note the `...`, which isn't valid JSON, but we like it anyway and want
it in the output. We use substitution rules to convert the `...`
into the response we expect. That yields a response that looks like:
```
{
  "took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,
  "aggregations": {
    "range": {
      "buckets": [
        {
          "to": 1.4436576E12,
          "to_as_string": "10-2015",
          "doc_count": 7,
          "key": "*-10-2015"
        },
        {
          "from": 1.4436576E12,
          "from_as_string": "10-2015",
          "doc_count": 0,
          "key": "10-2015-*"
        }
      ]
    }
  }
}
```
That is what the tests consume but it isn't valid JSON! Oh no! We don't
want to go update all the substitution rules because that'd be huge and,
ultimately, wouldn't buy much. So we quote the `$body.took` bits before
parsing the JSON.
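The quoting step is a simple textual pass before the snippet is handed to a JSON parser; a sketch using Jackson (the regex and the parser choice are illustrative, not the build's actual code):
```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ResponseValidator {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Quote bare $body.xyz references so the substituted response becomes
    // valid JSON, then let the parser flag anything else that is malformed.
    static void assertValidJson(String response) throws JsonProcessingException {
        String quoted = response.replaceAll("(\\$body[^\\s,}\\]]*)", "\"$1\"");
        MAPPER.readTree(quoted); // throws if the snippet is not valid JSON
    }
}
```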
Note that the responses we use for the `_cat` APIs are all converted
into regexes, so there is no expectation that they are valid JSON.
Closes #26233
The secure settings tool reads from stdin and we use a closure to
provide a value for this. Yet we evaluate the value too late and end up
with the last provided value for all keys.
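The bug pattern, translated to Java for illustration: when the stdin provider captures a mutable holder rather than the value itself, every key ends up seeing whatever was written last:
```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class LateEvaluation {
    public static void main(String[] args) {
        List<Supplier<String>> stdinValues = new ArrayList<>();
        String[] current = new String[1]; // shared mutable holder
        for (String value : List.of("value-for-key-1", "value-for-key-2")) {
            current[0] = value;
            stdinValues.add(() -> current[0]); // captures the holder, not the value
        }
        // Evaluated too late: both suppliers print "value-for-key-2".
        stdinValues.forEach(s -> System.out.println(s.get()));
    }
}
```
The fix amounts to evaluating the value at the moment each key is processed, rather than at the end.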
The environment variable CONF_DIR was previously inconsistently used in
our packaging to customize the location of Elasticsearch configuration
files. The importance of this environment variable has increased
starting in 6.0.0 as it's now used consistently to ensure Elasticsearch
and all secondary scripts (e.g., elasticsearch-keystore) all use the
same configuration. The name CONF_DIR is there for legacy reasons, yet
it's too generic. This commit renames CONF_DIR to ES_PATH_CONF.
Relates #26197
The build was ignoring suffixes like "beta1" and "rc1" on the version numbers, which was causing the backwards compatibility packaging tests to fail because they expected to be upgrading from 6.0.0 even though they were actually upgrading from 6.0.0-beta1. This adds the suffixes to the information that the build scrapes from Version.java. It then uses those suffixes when it resolves artifacts built from the bwc branch and for testing.
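A sketch of the scraping, assuming version constants shaped like `V_6_0_0_beta1` (the exact format of the constants in Version.java is an assumption here):
```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionScraper {
    // Assumed constant shape: V_6_0_0, V_6_0_0_alpha1, V_6_0_0_beta1, V_6_0_0_rc1
    private static final Pattern VERSION = Pattern.compile(
            "V_(\\d+)_(\\d+)_(\\d+)(?:_((?:alpha|beta|rc)\\d+))?");

    static String parse(String line) {
        Matcher m = VERSION.matcher(line);
        if (m.find() == false) {
            return null;
        }
        String version = m.group(1) + "." + m.group(2) + "." + m.group(3);
        // keep the suffix so bwc tests resolve 6.0.0-beta1, not 6.0.0
        return m.group(4) == null ? version : version + "-" + m.group(4);
    }

    public static void main(String[] args) {
        // prints 6.0.0-beta1
        System.out.println(parse("public static final Version V_6_0_0_beta1 = ...;"));
    }
}
```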
Closes #26017
Compiling all of Elasticsearch's classes in one JVM, which is shared
with all of the loaded classes of Gradle, can trip GC overhead limits.
This commit re-enables forking javac.
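In Gradle's Java API this amounts to one option (illustrative sketch, not the build's actual code):
```java
import org.gradle.api.Project;
import org.gradle.api.tasks.compile.JavaCompile;

final class CompilerSettings {
    // Fork javac into its own process so a huge compilation does not share
    // (and exhaust) the heap of the JVM that Gradle itself runs in.
    static void enableForkedJavac(Project project) {
        project.getTasks().withType(JavaCompile.class)
               .configureEach(task -> task.getOptions().setFork(true));
    }
}
```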
This commit updates the version for master to 7.0.0-alpha1. It also adds
the 6.1 version constant, and fixes many tests, as well as marking some
as awaits fix.
Closes #25893, closes #25870
Some REST tests can rapid-fire script compilations that exceed the
default limit on script compilations per minute. Rather than subjecting
ourselves to spurious failures because of the limit being too low, we
opt for a larger limit here.
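For example, a test cluster of that era could raise the limit with a node setting along these lines (the setting name is my recollection of the 5.x/6.0 naming and should be treated as an assumption; later versions use a rate-style setting instead):
```java
import org.elasticsearch.common.settings.Settings;

final class TestClusterSettings {
    // Assumed setting name for the 5.x/6.0 era; raise it well above the
    // default so rapid-fire REST-test compilations do not trip the limit.
    static Settings nodeSettings() {
        return Settings.builder()
                .put("script.max_compilations_per_minute", 1000)
                .build();
    }
}
```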
With Gradle 4.1 and newer JDK versions, we can finally invoke Gradle directly using a JDK9 JAVA_HOME without requiring a JDK8 to "bootstrap" the build. As the thirdPartyAudit task runs within the JVM that Gradle runs in, it now needs to be adapted to be JDK9-aware.
This commit also changes the `JavaCompile` tasks to only fork if necessary (i.e., when Gradle's JVM and JAVA_HOME's JVM differ).
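The conditional fork might look something like this sketch (the comparison below is rough; the real check has to account for details like `java.home` pointing at the `jre` directory on JDK 8):
```java
import java.io.File;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.compile.JavaCompile;

// Illustrative: only fork javac when Gradle's own JVM differs from JAVA_HOME.
public class ConditionalForkPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        String javaHome = System.getenv("JAVA_HOME");
        String gradleJvm = System.getProperty("java.home");
        boolean differs = javaHome != null
                && new File(javaHome).equals(new File(gradleJvm)) == false;
        project.getTasks().withType(JavaCompile.class).configureEach(task -> {
            task.getOptions().setFork(differs);
            if (differs) {
                task.getOptions().getForkOptions().setJavaHome(new File(javaHome));
            }
        });
    }
}
```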
Today we expose `IndexFieldDataService` outside of IndexService to do maintenance
or to look up field data in different ways. Yet we have a streamlined way to access IndexFieldData
via `QueryShardContext` that should encapsulate all access to it. This also ensures that we control all other functionality like cache clearing, etc.
This change also removes the `recycler` option from `ClearIndicesCacheRequest`; this option is a no-op and should have been removed long ago.
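After this change a caller goes through the context instead of the service; sketched usage (`getForField` and `fieldMapper` reflect the QueryShardContext API as I understand it; the surrounding helper is illustrative):
```java
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.query.QueryShardContext;

final class FieldDataAccess {
    // Illustrative: field data lookups flow through QueryShardContext rather
    // than reaching into IndexFieldDataService directly.
    static IndexFieldData<?> lookup(QueryShardContext context, String field) {
        MappedFieldType fieldType = context.fieldMapper(field); // resolve the field
        return context.getForField(fieldType);
    }
}
```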
Stored fields were still being accessed for nested inner hits even if the _source was not requested.
This was done to figure out the id of the root document. However, this is already known higher up the stack.
So instead this change adds the id to the nested search context, so that it no longer needs to be fetched via the stored fields.
If the _source is large and no source is requested, then hot threads like these would still appear:
```
100.3% (501.3ms out of 500ms) cpu usage by thread 'elasticsearch[AfXKKfq][search][T#6]'
2/10 snapshots sharing following 22 elements
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:352)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
and:
```
8/10 snapshots sharing following 27 elements
org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:135)
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:138)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.fillBuffer(CompressingStoredFieldsReader.java:531)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$BlockState$1.readBytes(CompressingStoredFieldsReader.java:550)
org.apache.lucene.store.DataInput.readBytes(DataInput.java:87)
org.apache.lucene.store.DataInput.skipBytes(DataInput.java:350)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.skipField(CompressingStoredFieldsReader.java:246)
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:601)
org.apache.lucene.index.CodecReader.document(CodecReader.java:88)
org.apache.lucene.index.FilterLeafReader.document(FilterLeafReader.java:411)
org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:347)
org.elasticsearch.search.fetch.FetchPhase.createNestedSearchHit(FetchPhase.java:219)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:150)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.fetch.subphase.InnerHitsFetchSubPhase.hitsExecute(InnerHitsFetchSubPhase.java:73)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:166)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:422)
```
This commit removes all external dependencies from the REST client jar
and shades them into an 'org.elasticsearch.client' package within the
jar using the shadowJar Gradle plugin. All projects that depended on
the existing jar have been converted to use the 'org.elasticsearch.client'
package prefixes to interact with the REST client.
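Relocation with the shadow plugin looks roughly like this (a sketch; the packages being relocated here are examples, not the complete list):
```java
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar;
import org.gradle.api.Project;

final class ShadeRestClient {
    // Illustrative: move third-party packages under org.elasticsearch.client
    // so the shaded jar exposes no external namespaces.
    static void configure(Project project) {
        project.getTasks().withType(ShadowJar.class).configureEach(shadowJar -> {
            shadowJar.relocate("org.apache.http", "org.elasticsearch.client.http");
            shadowJar.relocate("org.apache.commons.logging",
                    "org.elasticsearch.client.commons.logging");
        });
    }
}
```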
Closes #25208
Using `sh` means we use whatever default the system has, which is `dash`
on Ubuntu, even though our startup script is written for bash (see the
shebang).
This commit introduces the elasticsearch-env script. The purpose of this
script is threefold:
- vastly simplify the various scripts used in Elasticsearch
- provide a script that can be included in other scripts in the
Elasticsearch ecosystem (e.g., plugins)
- correctly establish the environment for all scripts (e.g., so that
users can run `elasticsearch-keystore` from a package distribution
without having to worry about setting `CONF_DIR` first; otherwise the
keystore would be created in the wrong location)
Relates #25815
This commit removes the environment variable ES_JVM_OPTIONS that allows
the jvm.options file to sit separately from the rest of the config
directory. Instead, we use the CONF_DIR environment variable for a
custom configuration location, just as we do for the other
configuration files.
Relates #25679
This commit does two things:
- bumps the version from 6.0.0-alpha3 to 6.0.0-beta1
- renames the 6.0.0-alpha3 version constant to 6.0.0-beta1
Relates #25621
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.
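The invariant being enforced is low <= high <= flood stage; a minimal standalone sketch of the cross-check (not the actual settings-validation API):
```java
final class WatermarkValidation {
    // Illustrative cross-settings check: each stage must not exceed the next.
    static void validate(double lowPercent, double highPercent, double floodPercent) {
        if (lowPercent > highPercent) {
            throw new IllegalArgumentException("low disk watermark [" + lowPercent
                    + "] more than high disk watermark [" + highPercent + "]");
        }
        if (highPercent > floodPercent) {
            throw new IllegalArgumentException("high disk watermark [" + highPercent
                    + "] more than flood stage disk watermark [" + floodPercent + "]");
        }
    }
}
```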
Relates #25600
This commit introduces an NIO-based TCP transport into the test
framework.
Currently Elasticsearch uses a simple blocking TCP transport for
testing purposes (MockTcpTransport). This diverges from production,
where our current transport (netty) is non-blocking.
The point of this commit is to introduce a testing variant that more
closely matches the behavior of production instances.
Today we load plugins reflectively, looking for constructors that
conform to specific signatures. This commit tightens the reflective
operations here, not allowing plugins to have ambiguous constructors.
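Conceptually the tightened check looks like this sketch (the real signature matching in Elasticsearch is more involved):
```java
import java.lang.reflect.Constructor;

final class PluginLoading {
    // Illustrative: refuse plugins whose public constructors are ambiguous,
    // instead of silently picking one of several candidates.
    static Constructor<?> uniqueConstructor(Class<?> pluginClass) {
        Constructor<?>[] constructors = pluginClass.getConstructors();
        if (constructors.length == 0) {
            throw new IllegalStateException(
                    "no public constructor for [" + pluginClass.getName() + "]");
        }
        if (constructors.length > 1) {
            throw new IllegalStateException(
                    "no unique public constructor for [" + pluginClass.getName() + "]");
        }
        return constructors[0];
    }
}
```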
Relates #25405
This commit removes path.conf as a valid setting and replaces it with a
command-line flag for specifying a non-default path for configuration.
Relates #25392
You can continue a test started in a previous snippet by marking the
next snippet with `// TEST[continued]`. The trouble is, if you mark the
first snippet in a file with `// TEST[continued]`, you get
difficult-to-predict behavior because you'd continue the test started
in another file. This will usually create a test that fails the build,
but it isn't easy to track down what you did wrong. This commit catches
this scenario up front and fails the build with a useful error message.
Similarly, if you put `// TEST[continued]` directly after a
`// TESTSETUP` section, the docs tests would fail to run, but the
error message did not point you to the `// TEST[continued]` snippet.
This commit catches this scenario up front as well and fails the build
with a useful error message.
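Sketched as a build-time pre-flight check (all names here are invented for illustration; the real check lives in the docs test build code):
```java
final class SnippetChecks {
    interface Snippet {
        boolean isContinued();  // marked // TEST[continued]
        boolean isTestSetup();  // marked // TESTSETUP
        String location();      // file and line for the error message
    }

    // Fail fast, with a message pointing at the offending snippet.
    static void checkContinued(Snippet snippet, Snippet previousInFile) {
        if (snippet.isContinued() == false) {
            return;
        }
        if (previousInFile == null) {
            throw new IllegalArgumentException(snippet.location()
                    + ": cannot start a file with // TEST[continued]");
        }
        if (previousInFile.isTestSetup()) {
            throw new IllegalArgumentException(snippet.location()
                    + ": // TEST[continued] cannot follow // TESTSETUP");
        }
    }
}
```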
The following token filters were moved: stemmer, stemmer_override, kstem, dictionary_decompounder, hyphenation_decompounder, reverse, elision and truncate.
Relates to #23658
Most notable changes:
- better update concurrency: LUCENE-7868
- TopDocs.totalHits is now a long: LUCENE-7872
- QueryBuilder does not remove the boolean query around multi-term synonyms:
LUCENE-7878
- removal of Fields: LUCENE-7500
For the `TopDocs.totalHits` change, this PR relies on the fact that the
encodings of vInts and vLongs are compatible: you can write with one and
read with the other as long as the value can be represented by a
positive int.
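The compatibility claim is easy to check against the encoding itself; a self-contained sketch of the 7-bit varint scheme that vInt/vLong share (mirroring, not using, Lucene's implementation):
```java
import java.io.ByteArrayOutputStream;

public class VarIntCompat {
    // Write using the vInt scheme: 7 data bits per byte, high bit means "more".
    static byte[] writeVInt(int i) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((i & ~0x7F) != 0) {
            out.write((i & 0x7F) | 0x80);
            i >>>= 7;
        }
        out.write(i);
        return out.toByteArray();
    }

    // Read the same bytes back using the vLong scheme.
    static long readVLong(byte[] bytes) {
        long value = 0;
        for (int pos = 0, shift = 0; ; pos++, shift += 7) {
            byte b = bytes[pos];
            value |= (long) (b & 0x7F) << shift;
            if ((b & 0x80) == 0) {
                return value;
            }
        }
    }

    public static void main(String[] args) {
        // A positive int written as a vInt reads back identically as a vLong.
        int totalHits = 1_000_000;
        System.out.println(readVLong(writeVInt(totalHits)) == totalHits); // true
    }
}
```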
Removes the `assemble` task from the `build` task when we have
removed `assemble` from the project. We removed `assemble` from
projects that aren't published so our releases will be faster. But
that broke CI, because CI builds with `gradle precommit build` and,
it turns out, `build` includes `check` and `assemble`. With
this change CI will only run `check` for projects without an
`assemble`.
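A sketch of the Gradle mechanics (how the ES build actually tracks the removal differs; this only shows the shape of the fix):
```java
import java.util.List;
import org.gradle.api.Project;

final class BuildTaskFix {
    // Illustrative: when a project has no assemble task (we removed it),
    // make `build` depend only on `check`.
    static void fixBuild(Project project) {
        if (project.getTasks().findByName("assemble") == null) {
            project.getTasks().named("build").configure(build ->
                    build.setDependsOn(List.of(project.getTasks().named("check"))));
        }
    }
}
```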
Removes the `assemble` task from projects that are not published.
This should speed up `gradle assemble` by skipping projects that
don't need to be built, which is useful because `gradle assemble`
is how we cut releases.
This snapshot has faster range queries on range fields (LUCENE-7828), more
accurate norms (LUCENE-7730) and the ability to use fake term frequencies
(LUCENE-7854).
This change removes the `postings` highlighter. This highlighter has been removed from Lucene master (7.x) because it behaves
exactly like the `unified` highlighter when index_options is set to `offsets`:
https://issues.apache.org/jira/browse/LUCENE-7815
It also makes the `unified` highlighter the default choice for highlighting a field (if `type` is not provided).
The strategy used internally by this highlighter remains the same as before: it checks `term_vectors` first, then `postings`, and ultimately it re-analyzes the text.
This change also rewrites the docs so that the options that the `unified` highlighter cannot handle are clearly marked as such.
There are a few features that the `unified` highlighter is not able to handle, which is why the other highlighters (`plain` and `fvh`) are still available.
I'll open separate issues for these features, and we'll deprecate the `fvh` and `plain` highlighters when full support for them has been added to the `unified` highlighter.
Pins the random testing seed at build start rather than letting
it vary with every randomized testing invocation. This is useful
for projects where random decisions in one randomized testing run
can affect the outcome of a second randomized testing run, such as
the full cluster restart tests.
The goal isn't for tests to be able to assume that random decisions
will be the same in both tests. It is more to make sure that the
seed printed when a test fails reproduces the appropriate random
decisions. And pinning the seed at startup should do just that.
This works by taking the seed passed as a system property if one
is passed, and otherwise picking a random long and converting it
into the appropriate key format. The build just calls
`new Random().nextLong()` to get the seed, while randomized testing
uses a Murmur3 hash of `System.nanoTime`.
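A sketch of the pinning (the `tests.seed` property name and the hex key format follow the randomized-testing convention as I understand it):
```java
import java.util.Locale;
import java.util.Random;

public class TestSeed {
    // Pick the seed once at build start: honor an explicitly passed seed,
    // otherwise generate one and format it as an upper-case hex key.
    static String pinnedSeed() {
        String fromProperty = System.getProperty("tests.seed");
        if (fromProperty != null) {
            return fromProperty;
        }
        long seed = new Random().nextLong();
        return Long.toUnsignedString(seed, 16).toUpperCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // every randomized-testing invocation in the build reuses this value
        System.out.println("tests.seed=" + pinnedSeed());
    }
}
```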
This adds an option to `ClusterConfiguration` to preserve the
`shared` directory when starting up a new cluster and switches
the `qa:full-cluster-restart` tests to use it rather than
disabling the clean shared task.
Relates to #24846