In the ClusterPrivilegeTests class, the node was being reset after each
test, and CI failures were seen where an HTTP 401 was returned when a
403 was expected. This commit removes the resetting of the node between
tests, as it was not necessary.
Additionally, there was an issue in SecuritySingleNodeTestCase where
the rest client was not torn down after stopping a node and starting a
new one, which meant the client used in subsequent tests was not
connected to the right cluster. This change resolves that by tearing
down the rest client after the old node is torn down.
relates elastic/x-pack-elasticsearch#4383
Original commit: elastic/x-pack-elasticsearch@2f81a4b2e2
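A minimal sketch of the client lifecycle the fix describes, using the low-level `RestClient` API; the `onNodeRestarted` hook and its arguments are hypothetical, not the actual test framework code.

```java
import java.io.IOException;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

// Sketch only: after the old node is torn down, close the rest client bound to
// it and build a fresh one against the new node's HTTP address, so subsequent
// tests talk to the right cluster. The onNodeRestarted hook is hypothetical.
class RestClientLifecycleSketch {
    private RestClient restClient;

    void onNodeRestarted(String host, int httpPort) throws IOException {
        if (restClient != null) {
            restClient.close(); // tear down the client pointing at the old node
        }
        restClient = RestClient.builder(new HttpHost(host, httpPort, "http")).build();
    }
}
```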
Fails the build if any subproject of `:libs` depends on another project
in `:libs`, except for `:libs:elasticsearch-core`.
Since we now have three places where we resolve project substitutions,
I've added `dependencyToProject` to `project.ext` in all projects. It
resolves both `project` style dependencies and "external" style (like
"org.elasticsearch:elasticsearch-core:${version}") dependencies to
`Project`s using the `projectSubstitutions`. I use this new function in
all three places where we resolve project substitutions.
Finally this pulls `apply plugin: 'elasticsearch.build'` out of
`libs/*/build.gradle` and into a subprojects clause in
`libs/build.gradle`. I do this entirely so that I can call
`tasks.precommit.dependsOn checkDependencies` without waiting for the
subprojects to be evaluated or worrying about whether or not they have
`precommit` set up in a normal way.
The following is the current behaviour, tested now through a specific
test.
The low-level REST client doesn't add a leading '/' to the provided uri
when it is missing, unless a `pathPrefix` is configured, in which case a
trailing slash is automatically added when concatenating the prefix and
the provided uri.
Also, when configuring a pathPrefix that doesn't start with a '/', the
missing leading '/' is added.
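A plain-Java sketch of the behaviour described above (illustrative only, not the RestClient source):

```java
// Illustrates the documented concatenation rules: a configured pathPrefix gets
// a leading '/' if missing, and a '/' is inserted between the prefix and a uri
// that lacks one; without a prefix, a missing leading '/' on the uri is kept as-is.
final class PathPrefixBehaviourSketch {

    static String buildUri(String pathPrefix, String uri) {
        if (pathPrefix == null || pathPrefix.isEmpty()) {
            return uri; // no leading '/' is added when there is no prefix
        }
        String prefix = pathPrefix.startsWith("/") ? pathPrefix : "/" + pathPrefix;
        return uri.startsWith("/") ? prefix + uri : prefix + "/" + uri;
    }

    public static void main(String[] args) {
        System.out.println(buildUri(null, "index/_search"));   // index/_search
        System.out.println(buildUri("api", "index/_search"));  // /api/index/_search
    }
}
```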
CRUD: Parsing changes for UpdateRequest (#29293)
Use `ObjectParser` to parse `UpdateRequest` so we reject unknown fields
and drop support for the `_fields` parameter because it was deprecated
in 5.x.
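A conceptual, plain-Java sketch of what strict parsing buys us (this is not the actual `ObjectParser` API, and the field list below is partial and illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Unknown fields such as the removed `_fields` parameter now fail the request
// instead of being silently ignored.
final class StrictUpdateRequestParsingSketch {
    private static final Set<String> KNOWN_FIELDS =
            new HashSet<>(Arrays.asList("doc", "upsert", "script", "doc_as_upsert", "_source"));

    static void validateTopLevelFields(Map<String, Object> body) {
        for (String field : body.keySet()) {
            if (KNOWN_FIELDS.contains(field) == false) {
                throw new IllegalArgumentException("unknown field [" + field + "]");
            }
        }
    }
}
```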
Currently there is a hardcoded check against 10000, which
is the default value of the max_result_window setting. This
is a relic of the past. Removing this hardcoded validation
means we respect the setting so that a user may alter it
when appropriate.
relates elastic/x-pack-elasticsearch#3672
Original commit: elastic/x-pack-elasticsearch@9c9c5bab89
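A hedged sketch of the idea (names are illustrative, not the actual x-pack code): validate the requested window against the index's `max_result_window` setting rather than a hardcoded 10000.

```java
// Respecting the setting means a user who raises index.max_result_window is no
// longer rejected by a stale, hardcoded limit.
final class ResultWindowValidationSketch {

    static void checkResultWindow(int from, int size, int maxResultWindow) {
        if (from + size > maxResultWindow) {
            throw new IllegalArgumentException("from + size [" + (from + size)
                    + "] exceeds index.max_result_window [" + maxResultWindow + "]");
        }
    }

    public static void main(String[] args) {
        checkResultWindow(0, 20_000, 50_000); // fine once the setting has been raised
    }
}
```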
The default percentiles values and the default highlighter pre- and
post-tags are currently publicly accessible and can be altered at any
time. This change prevents this by restricting field access.
- Changes in building SAML SP metadata to support multiple
encryption keys.
- Changes in the SAML metadata command to support the use of
protected keystores.
- Changes to export and set the proper usage type in the key
descriptors of the SP SAML metadata XML.
- Changes in the SAML realm to create a chaining key info
credential resolver backed by the collection of encryption
keys as per the SP configuration.
- Unit tests and test enhancements
relates elastic/x-pack-elasticsearch#3980, elastic/x-pack-elasticsearch#4293
Original commit: elastic/x-pack-elasticsearch@e02ebcc9e6
Today, when reading an operation from the current generation fails
tragically, we attempt to close the translog. However, by invoking close
before releasing the read lock, we end up in self-deadlock because
closing tries to acquire the write lock and the read lock cannot be
upgraded to a write lock. To avoid this, we move the close invocation
outside of the try-with-resources that acquired the read lock. As an
extra guard against this, we document the problem and add an assertion
that we are not trying to invoke close while holding the read lock.
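A minimal, self-contained sketch of the pattern (using a plain `ReentrantReadWriteLock` rather than the translog's actual classes): close is invoked only after the read lock is released, and an assertion guards against closing under the read lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// ReentrantReadWriteLock does not allow upgrading a read lock to a write lock,
// so calling close() (which needs the write lock) while still holding the read
// lock would block the thread against itself.
class ReadThenCloseSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void readOperation() {
        boolean tragicFailure = false;
        lock.readLock().lock();
        try {
            // ... read an operation; pretend it fails tragically
            tragicFailure = true;
        } finally {
            lock.readLock().unlock();
        }
        // The close invocation lives outside the block holding the read lock.
        if (tragicFailure) {
            close();
        }
    }

    void close() {
        assert lock.getReadHoldCount() == 0 : "must not close while holding the read lock";
        lock.writeLock().lock();
        try {
            // release resources
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```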
This commit is a minor cleanup of a code block in NodeInfo.groovy. We
remove an unused variable, make the formatting of the code consistent,
and cast a property that is typed as an Object to a String to avoid an
annoying IDE warning.
Some build tasks require older JDKs. For example, the BWC build tasks
for older versions of Elasticsearch require older JDKs. It is onerous to
require that these be configured when merely compiling Elasticsearch;
the requirement that they be strictly set to appropriate values should
only be enforced if these tasks are going to be executed. To address
this, we lazily configure these tasks.
Original commit: elastic/x-pack-elasticsearch@804a11c243
Some build tasks require older JDKs. For example, the BWC build tasks
for older versions of Elasticsearch require older JDKs. It is onerous to
require that these be configured when merely compiling Elasticsearch;
the requirement that they be strictly set to appropriate values should
only be enforced if these tasks are going to be executed. To address
this, we lazily configure these tasks.
Rather than checking a substring match, now that
VersionProperties#elasticsearch is a strongly-typed instance of Version,
we can use the Version#isSnapshot convenience method. This commit
switches the root build file to do this.
Today we have a nodeVersion property on the NodeInfo class that we use
to carry around information about a standalone node that we will start
during tests. This property is a String which we usually end up parsing
to a Version anyway to do various checks on it. This can end up
happening a lot during configuration so it would be more efficient and
safer to have this already be strongly-typed as a Version and parsed
from a String only once for each instance of NodeInfo. Therefore, this
commit makes NodeInfo#nodeVersion strongly-typed as a Version.
There are some scenarios where the license on a source file is one that
is compatible with our projects yet we do not want to add the license to
the list of approved license headers (to keep the number of files with
that compatible license contained). This commit adds the ability to
exclude a file from the license check.
This commit sets the BWC builds to use the version of the JDK that is
appropriate for the individual version of Elasticsearch under test.
Original commit: elastic/x-pack-elasticsearch@967a497a20
Today we have JAVA_HOME for the compiler Java home and RUNTIME_JAVA_HOME
for the test Java home. However, when we compile BWC nodes and run them,
neither of these Java homes may be the version suitable for that BWC
node (e.g., 5.6 requires JDK 8 to compile and to run). This
commit adds support for the environment variables JAVA\d+_HOME and uses
the appropriate Java home based on the version of the node being
started. We even do this for reindex-from-old which requires JDK 7 for
these very old nodes. Note that these environment variables are not
required if not running BWC tests, and they are strictly required if
running BWC tests.
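An illustrative sketch of the lookup (in Java, not the actual Gradle build logic; the method name is hypothetical):

```java
// Resolve the Java home for a node from a JAVA<N>_HOME environment variable,
// e.g. JAVA8_HOME for a 5.6 BWC node, failing clearly when it is not set.
final class BwcJavaHomeSketch {

    static String javaHomeForMajorVersion(int javaMajorVersion) {
        String variable = "JAVA" + javaMajorVersion + "_HOME";
        String home = System.getenv(variable);
        if (home == null) {
            throw new IllegalStateException(variable + " must be set to run BWC tests");
        }
        return home;
    }
}
```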
SYS TYPE returns an integer instead of a boolean (the bug was caused
by reading the ODBC spec, which refers to the wire representation,
instead of the JDBC one, which uses primitives).
Original commit: elastic/x-pack-elasticsearch@f9fe64ab0d
Improve grammar to allow use of ? as an alternative to STRING
throughout all commands
Add various parsing tests checking the ? usage for SYS commands
Original commit: elastic/x-pack-elasticsearch@d0d1feeb4c
The BWC builds always fetch the latest from the elastic/elasticsearch
repository for the BWC branches. Yet, there are use-cases for using the
local checkout without fetching the latest. This commit enables these
use-cases by adding a tests.bwc.git.fetch.latest property to skip the
fetches.
This change adds a client that is connected to a remote cluster.
This allows plugins and internal structures to invoke actions on
remote clusters just as if it were a local cluster. The remote
cluster must be configured via the cross-cluster search infrastructure.
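A minimal usage sketch, assuming the remote-cluster client accessor this change introduces on `Client` and a remote cluster registered under the alias "remote1" through the cross-cluster search settings:

```java
import org.elasticsearch.client.Client;

// Sketch only: obtain a client bound to the configured remote cluster and run
// an action against it exactly as if it were the local cluster.
final class RemoteClusterClientSketch {

    static void pingRemote(Client localClient) {
        Client remote = localClient.getRemoteClusterClient("remote1");
        remote.admin().cluster().prepareState().get();
    }
}
```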
This adds two test cases that verify that when a shard goes idle,
pending (uncommitted) segments are committed and unreferenced
files are freed.
Relates to #29482
The IndexPrivilegeTests have been notoriously slow for years.
@polyfractal identified the primary issue, which is that these tests
were running against an internal cluster with one or two data nodes, had
the number of replicas for indices set to 1 by default, and had test
methods that would wait for green health. This wait for green would
take the full thirty seconds when there was a single data node, as the
index could never reach green health due to an unassigned replica. This
could have been caught earlier by asserting that the request did not
time out, but that assertion was not present.
This change does a few things to address the issues above. The first is
that these tests now extend SecuritySingleNodeTestCase, which is a new
class that extends ESSingleNodeTestCase and contains the necessary
logic for the setup and teardown of security, much of which is based
on SecurityIntegTestCase. This means that these tests always run
against a single node cluster and have a much simpler setup. The
default index template for these tests applies settings so that indices
are created with a single shard and no replicas.
Assertions have been added to ensure the health checks with a wait for
green status have not timed out. A hand-coded wait for snapshots to
finish has been replaced with an assertBusy call. Finally, the BadApple
annotation has been removed from the test.
Relates elastic/x-pack-elasticsearch#324
Original commit: elastic/x-pack-elasticsearch@572919273d
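A hedged sketch of the kind of settings such a template applies (not the test's actual template):

```java
import org.elasticsearch.common.settings.Settings;

// One shard and zero replicas lets indices reach green health on a single-node
// cluster instead of timing out on an unassigned replica.
final class SingleNodeIndexSettingsSketch {

    static Settings singleShardNoReplicas() {
        return Settings.builder()
                .put("index.number_of_shards", 1)
                .put("index.number_of_replicas", 0)
                .build();
    }
}
```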
Control max size and count of warning headers
Add a static persistent cluster level setting
"http.max_warning_header_count" to control the maximum number of
warning headers in client HTTP responses.
Defaults to unbounded.
Add a static persistent cluster level setting
"http.max_warning_header_size" to control the maximum total size of
warning headers in client HTTP responses.
Defaults to unbounded.
When a warning header would exceed either of these limits, a message is
logged in the main ES log and any further warning headers for that
response are ignored.
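A plain-Java sketch of the enforcement described above (illustrative only; the real logic lives in the HTTP response header handling, and only the setting names are taken from this change):

```java
import java.util.ArrayList;
import java.util.List;

// Tracks warning headers for a single response and drops any header that would
// push the response past the configured count or combined-size limits.
final class WarningHeaderLimitsSketch {
    private final long maxCount; // http.max_warning_header_count, negative = unbounded
    private final long maxSize;  // http.max_warning_header_size in bytes, negative = unbounded
    private final List<String> warnings = new ArrayList<>();
    private long totalSize = 0;

    WarningHeaderLimitsSketch(long maxCount, long maxSize) {
        this.maxCount = maxCount;
        this.maxSize = maxSize;
    }

    /** Returns false when the header is dropped; the caller would log a message. */
    boolean addWarning(String header) {
        if (maxCount >= 0 && warnings.size() >= maxCount) {
            return false;
        }
        if (maxSize >= 0 && totalSize + header.length() > maxSize) { // chars approximate bytes
            return false;
        }
        warnings.add(header);
        totalSize += header.length();
        return true;
    }
}
```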
Unlike the `indices.create`, `indices.get_mapping` and `indices.put_mapping`
APIs, the index APIs do not need the `include_type_name` option; they can
work with and without types without knowing whether types are being used.
Internally, `_doc` is used as the type if no type is provided, like for the
`indices.put_mapping` API.
Historically, the bootstrap checks used 2048 as the minimum limit for
the maximum number of threads. This limit was guided by the fact that
the number of processors was artificially capped at 32. This limit was
removed in 6.0.0 and the minimum limit was raised to 4096 to accommodate
this. However, the docs were not updated and this commit addresses that
miss.
This changes JDBC so it can be released. It bundles the
`sql-shared-client` and `sql-proto` jars into the jar for the jdbc client.
It also generates a pom for the jdbc driver when you run `gradle assemble`
on it. This will allow us to release the jdbc driver.
It also adds a zip distribution of the jdbc driver with all of its
dependencies bundled in the zip. It'd be nice to bundle all of the jdbc
driver's dependencies in the jar but we can't quite do that yet. So, for
now, to help folks using BI tools use the JDBC driver, we build a zip.
Original commit: elastic/x-pack-elasticsearch@9c668231d4
This change adds the current primary term to the header of the current
translog file. Having a term in a translog header is a prerequisite step
that allows us to trim translog operations given the max valid seq# for
that term.
This commit also updates tests to conform to the primary term invariant,
which guarantees that all translog operations in a translog file have
terms at most equal to the term stored in the translog header.
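A deliberately simplified sketch of the idea (the real translog header also carries a codec header, translog UUID and checksums): the primary term is persisted in the file header so operations can later be trimmed by term.

```java
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Write a toy header containing the generation and the current primary term.
final class TranslogHeaderSketch {

    static void writeHeader(String path, long generation, long primaryTerm) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(path))) {
            out.writeLong(generation);
            out.writeLong(primaryTerm); // new in this change: the current primary term
        }
    }
}
```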
* Add a helper method to get a random java.util.TimeZone
This adds a helper method to ESTestCase that returns a randomized
`java.util.TimeZone`. This can be used when transitioning code from Joda to the
JDK's time classes.
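A sketch of such a helper using only the JDK (the `Random` parameter stands in for the test framework's randomness; `ESTestCase`'s actual implementation may differ):

```java
import java.util.Random;
import java.util.TimeZone;

// Pick a random time zone id from the JDK's full list and resolve it.
final class RandomTimeZoneSketch {

    static TimeZone randomTimeZone(Random random) {
        String[] ids = TimeZone.getAvailableIDs();
        return TimeZone.getTimeZone(ids[random.nextInt(ids.length)]);
    }
}
```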
Rewrote the GROUP BY to rely on the composite aggregation instead of
the terms aggregation (and everything that comes with it).
This not only works better but also reduces code complexity, since
composite is a straight, two-level tree:
1. root/group-by/composite-keys
2. (metric) aggregations
This removes a lot of complexity from all stages that involve creating,
assembling and especially parsing the results.
By moving to the composite agg, the aggregation/GROUP BY is now pageable,
so the consumer/listener had to be extended to include a dedicated
cursor and specific (bucket) extractors, in line with the scroll requests.
While at it, also improved the support for implicit GROUP BY by
formalizing it (previously it supported only counts and no other
agg).
In addition:
Fixed a JDBC bug that caused an incorrect timeout to be passed
Improved the returned RowSet a bit and added better naming
Pick up @Nullable move from core
Make sure to specify the TimeZone for DateTimeHistogram extraction
Add missing javadoc
To avoid delegating NamedWriteableRegistry (NWR) and to keep the scope
clean, SQL writeables now handle their own serialization, keeping the
boundary with Elasticsearch's NWR in place.
Pass NamedWriteableRegistry only when looking at the next page
To keep in line with the existing pattern and simplify the code
bureaucracy, the deserialization happens directly.
Since the SearchSourceBuilder deserialization happens explicitly (and
it's otherwise opaque), the declarative invocation isn't necessary
anymore.
Add a bit more randomization in tests
Original commit: elastic/x-pack-elasticsearch@f5af046386
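As a rough illustration of the shape of the rewritten GROUP BY (field names and the page size are illustrative, and SQL's actual translation differs in detail), a composite aggregation keeps the group-by keys at the root with metric aggregations nested underneath, paged via after-keys:

```java
import java.util.Collections;

import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder;

// Roughly what SELECT gender, AVG(salary) FROM emp GROUP BY gender maps to:
// a two-level tree of composite keys plus metric sub-aggregations.
final class GroupByCompositeSketch {

    static CompositeAggregationBuilder groupByGender(int pageSize) {
        CompositeAggregationBuilder composite = new CompositeAggregationBuilder("groupby",
                Collections.singletonList(new TermsValuesSourceBuilder("gender").field("gender")));
        composite.size(pageSize); // buckets per page; paging continues via the returned after-key
        composite.subAggregation(AggregationBuilders.avg("avg_salary").field("salary"));
        return composite;
    }
}
```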