I previously added a helper that started a MockLogAppender to ensure it
was never added to a Logger before it was started. I subsequently found
the opposite case in RolloverIT.java where the appender was stopped
before it was closed, thereby creating a race where a concurrently
running test in the same JVM could cause a logging failure. This seems
like an easy mistake to make when writing a test, or to introduce when
refactoring one. I've made a change to use try-with-resources to
ensure that proper setup and teardown are done. This should make it much
harder to introduce this particular test bug in the future.
Unfortunately, it did involve touching a lot of files. The changes here
are purely structural to leverage try-with-resources; no testing logic
has been changed.
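As an illustration, a minimal sketch of the intended pattern; the factory name used here (createStartedAndAttached) is hypothetical, and the expectation API shown is only assumed:
```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.opensearch.test.MockLogAppender;

public class RolloverLoggingExampleTests {
    private static final Logger logger = LogManager.getLogger(RolloverLoggingExampleTests.class);

    public void testExpectedWarningIsLogged() throws Exception {
        // Hypothetical factory: creates, starts, and attaches the appender so it can
        // never be observed in an attached-but-not-started (or stopped-but-attached) state.
        try (MockLogAppender appender = MockLogAppender.createStartedAndAttached(logger)) {
            appender.addExpectation(new MockLogAppender.SeenEventExpectation(
                "rollover warning", RolloverLoggingExampleTests.class.getName(),
                Level.WARN, "expected warning message"));
            // ... exercise the code under test here ...
            appender.assertAllExpectationsMatched();
        } // close() detaches the appender from the logger and then stops it
    }
}
```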
Signed-off-by: Andrew Ross <andrross@amazon.com>
This commit removes LegacyESVersion.V_6_3_x constants including all
pre-release versions and bug fixes.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit works around repository-azure hanging on basic operations. The workaround will be
reverted once the issue is fixed upstream in the Azure library.
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
This commit removes LegacyESVersion.V_6_2_x constants including all
pre-release versions and bug fixes.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit removes LegacyESVersion.V_6_1_x constants including all
pre-release versions and bug fixes.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This PR removes LegacyESVersion.V_6_0_* constants including all pre-release
versions and bug fixes.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
In preparation for removing all LegacyESVersion support by 3.0, this commit
largely refactors the LegacyESVersion test logic out of the OpenSearch Version
test logic and into an independent test class. This PR also updates Version.fromString
to ensure a proper legacy version is returned when the major version is > 3 (to support
legacy yaml tests and build scripts).
Note that bwc with legacy versions is still supported, so some cross-compatibility
testing is retained in the Version test class.
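For illustration, a self-contained sketch of the dispatch rule only (not the actual Version.fromString internals):
```java
// Illustrative only: mirrors the "major > 3 means legacy" rule described above,
// not the real org.opensearch.Version#fromString implementation.
static boolean isLegacyVersionString(String version) {
    final int major = Integer.parseInt(version.split("\\.")[0]);
    return major > 3; // e.g. 6.8.0 or 7.10.2 -> legacy ES; 1.x/2.x/3.x -> OpenSearch
}
```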
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
Zen1 discovery was deprecated in Legacy 7.x for eventual removal. OpenSearch 1.x
carries this deprecation. This commit completely removes all support for Zen1
discovery in favor of Zen2.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
I observed a test failure with the message
'Attempted to append to non-started appender mock' from an assertion in
`OpenSearchTestCase::after`. I believe this indicates that a
MockLogAppender (which is named "mock") was added as an appender to the
static logging context, and some other test in the same JVM happened to
trigger a logging statement that hit that appender and caused an error,
which then caused an unrelated test to fail (because tests share static
logging state). Almost all usages of MockLogAppender start it
immediately after creation. I found a few that did not and fixed those.
I also added a static helper to MockLogAppender that starts it upon
creation.
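A minimal sketch of such a helper; the method name and the no-argument constructor are assumptions, not the exact MockLogAppender API:
```java
// Hypothetical start-on-creation helper; the real MockLogAppender API may differ.
public static MockLogAppender createStarted() {
    MockLogAppender appender = new MockLogAppender();
    appender.start(); // callers never see an appender that has not been started
    return appender;
}
```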
Signed-off-by: Andrew Ross <andrross@amazon.com>
As part of commit 2ebd0e04, we added a new method to EnginePlugin to provide a
custom TranslogDeletionPolicy. This commit makes the minTranslogGenRequired method
abstract in this class so that child classes must implement it. The default implementation
is provided by DefaultTranslogDeletionPolicy.
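A simplified sketch of the shape described here; the signatures are assumptions and omit the parameters the real translog classes take:
```java
// Simplified sketch; not the real OpenSearch translog classes.
abstract class TranslogDeletionPolicy {
    // Subclasses decide the minimum translog generation that must be retained.
    protected abstract long minTranslogGenRequired();
}

class DefaultTranslogDeletionPolicy extends TranslogDeletionPolicy {
    private final long minRetainedGeneration;

    DefaultTranslogDeletionPolicy(long minRetainedGeneration) {
        this.minRetainedGeneration = minRetainedGeneration;
    }

    @Override
    protected long minTranslogGenRequired() {
        // Default behavior: keep everything from the configured minimum generation onward.
        return minRetainedGeneration;
    }
}
```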
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit adds an extension point to EngineConfig through EnginePlugin using
a new EngineConfigFactory mechanism. EnginePlugin provides interface methods to
override configurations in EngineConfig. The EngineConfigFactory produces a new
instance of the EngineConfig using these overrides; defaults are used when no
overrides are provided.
This serves as a mechanism to override Engine configurations (e.g., CodecService,
TranslogConfig), enabling plugins to change Engine behavior at a finer granularity
without having to override the entire Engine (which only a single plugin is
permitted to do).
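A hedged sketch of how a plugin might use the extension point; getEngineFactory is the existing EnginePlugin method, while getCustomCodecService and its wiring into EngineConfigFactory are assumed names for the new override hooks:
```java
import java.util.Optional;
import org.apache.logging.log4j.LogManager;
import org.opensearch.index.IndexSettings;
import org.opensearch.index.codec.CodecService;
import org.opensearch.index.engine.EngineFactory;
import org.opensearch.plugins.EnginePlugin;
import org.opensearch.plugins.Plugin;

public class MyEnginePlugin extends Plugin implements EnginePlugin {

    @Override
    public Optional<EngineFactory> getEngineFactory(IndexSettings indexSettings) {
        return Optional.empty(); // no full Engine override; only a config-level override below
    }

    // Assumed override hook: supply a custom CodecService while leaving the rest of
    // the EngineConfig (translog config, merge policy, etc.) at its defaults.
    public Optional<CodecService> getCustomCodecService(IndexSettings indexSettings) {
        return Optional.of(new CodecService(null, LogManager.getLogger(MyEnginePlugin.class)));
    }
}
```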
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Modernize and consolidate JDKs usage across all stages of the build. Use JDK-17 as bundled JDK distribution to run tests
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
* Using -Djava.security.egd=file:/dev/urandom explicitly for CLI tests
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
Shard level indexing pressure improves the current Indexing Pressure framework, which performs memory accounting at the node level and rejects requests. This takes a step further by basing rejections on memory accounting at the shard level, along with other key performance factors like throughput and last successful requests.
**Key features**
- Granular tracking of indexing task performance, at the shard level, for each node role, i.e. coordinator, primary and replica.
- Smarter rejections by discarding only the requests intended for a problematic index or shard, while still allowing others to continue (fairness in rejection).
- Rejection thresholds governed by a combination of configurable parameters (such as memory limits on the node) and dynamic parameters (such as latency increase and throughput degradation).
- Node level and shard level indexing pressure statistics exposed through the stats API.
- Integration of indexing pressure stats with plugins for metric visibility and auto-tuning in the future.
- Control knobs to tune the key performance thresholds that control rejections, to address any specific requirement or issue.
- Control knobs to run the feature in shadow-mode or enforced-mode; in shadow-mode only internal rejection breakdown metrics are published and no actual rejections are performed (see the sketch after this list).
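A hedged sketch of switching between the modes through a dynamic cluster settings update; the setting keys shown (shard_indexing_pressure.enabled / shard_indexing_pressure.enforced) are assumptions and may not match the names the feature ships with:
```java
import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;
import org.opensearch.common.settings.Settings;

public final class ShardIndexingPressureModeExample {
    public static void enableShadowMode(RestHighLevelClient client) throws Exception {
        ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
        request.persistentSettings(Settings.builder()
            .put("shard_indexing_pressure.enabled", true)    // assumed key: turn the feature on
            .put("shard_indexing_pressure.enforced", false)  // assumed key: shadow-mode, metrics only
            .build());
        client.cluster().putSettings(request, RequestOptions.DEFAULT);
    }
}
```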
The changes were divided into small manageable chunks as part of the following PRs against a feature branch.
- Add Shard Indexing Pressure Settings. #716
- Add Shard Indexing Pressure Tracker. #717
- Refactor IndexingPressure to allow extension. #718
- Add Shard Indexing Pressure Store #838
- Add Shard Indexing Pressure Memory Manager #945
- Add ShardIndexingPressure framework level construct and Stats #1015
- Add Indexing Pressure Service which acts as orchestrator for IP #1084
- Add plumbing logic for IndexingPressureService in Transport Actions. #1113
- Add shard indexing pressure metric/stats via rest end point. #1171
- Add shard indexing pressure integration tests. #1198
Signed-off-by: Saurabh Singh <sisurab@amazon.com>
Co-authored-by: Saurabh Singh <sisurab@amazon.com>
Co-authored-by: Rabi Panda <adnapibar@gmail.com>
* Drop mocksocket in favour of custom security manager checks (tests only)
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
* Slightly relaxed host checks to allow all local addresses
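As a rough illustration of the approach (an assumption, not the actual test SecurityManager added here): reject outbound connections unless the target resolves to a local address.
```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.NetworkInterface;

// Illustrative only: not the actual security manager checks shipped with this change.
public class LocalOnlySecurityManager extends SecurityManager {
    @Override
    public void checkConnect(String host, int port) {
        try {
            InetAddress address = InetAddress.getByName(host);
            boolean local = address.isLoopbackAddress()
                || address.isAnyLocalAddress()
                || NetworkInterface.getByInetAddress(address) != null; // bound to a local interface
            if (!local) {
                throw new SecurityException("Test attempted to connect to non-local host: " + host);
            }
        } catch (IOException e) {
            throw new SecurityException("Unable to verify host " + host, e);
        }
    }
}
```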
Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
* Refactor the logic to control the format of the code coverage report and rename the system property
* Remove outdated code that granted JaCoCo files permission when the Java security manager is enabled
Signed-off-by: Tianli Feng <ftianli@amazon.com>
* Changes to support retrieval of operations from the translog based on a specified range
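A hedged sketch of consuming such a range, assuming a snapshot API along the lines of Translog#newSnapshot(fromSeqNo, toSeqNo); the exact method introduced by this change may differ:
```java
import java.io.IOException;
import org.opensearch.index.translog.Translog;

final class TranslogRangeReader {
    // Assumed API shape; shown only to illustrate reading a bounded range of operations.
    static void printRange(Translog translog, long fromSeqNo, long toSeqNo) throws IOException {
        try (Translog.Snapshot snapshot = translog.newSnapshot(fromSeqNo, toSeqNo)) {
            Translog.Operation operation;
            while ((operation = snapshot.next()) != null) {
                // every operation returned falls within the requested sequence number range
                System.out.println(operation.opType() + " @ seqNo " + operation.seqNo());
            }
        }
    }
}
```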
Signed-off-by: Sai Kumar <karanas@amazon.com>
* Addressed CR comments
Signed-off-by: Sai Kumar <karanas@amazon.com>
* Added test cases for internal engine
Signed-off-by: Sai Kumar <karanas@amazon.com>
The version framework only added support for OpenSearch 1.x bwc with legacy
clusters. This commit adds support for v2.0, which will be the last version with
bwc support for legacy clusters (v7.10).
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit stages the branch to the next 1.0.1 patch release. BWC testing needs
this even if the next revision is never actually released.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Part 1: Support for cancel_after_timeinterval parameter in search and msearch request
This commit introduces a new request level parameter to configure the timeout interval after which
a search request will be cancelled. For msearch requests the parameter is supported both at the parent
request and at the child search requests. If it is provided at the parent level and a child search
request doesn't have it, then the parent level value is set on that child request. The parent level
value of an msearch is not used to cancel the parent request itself, as it may be tricky to come up
with a correct value when the child search requests can have different runtimes.
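A hedged usage sketch, assuming the request level parameter is exposed in the Java client through a setter like setCancelAfterTimeInterval (the setter name is an assumption):
```java
import org.opensearch.action.search.SearchRequest;
import org.opensearch.common.unit.TimeValue;

final class CancelAfterIntervalExample {
    static SearchRequest buildRequest() {
        SearchRequest search = new SearchRequest("my-index");
        // Assumed Java setter for the request level parameter described above:
        // cancel the search if it is still running after 2 seconds.
        search.setCancelAfterTimeInterval(TimeValue.timeValueSeconds(2));
        return search;
    }
}
```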
TEST: Added test for ser/de with new parameter
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
* Part 2: Support for cancel_after_timeinterval parameter in search and msearch request
This commit adds the handling of the new request level parameter and schedule cancellation task. It
also adds a cluster setting to set a global cancellation timeout for search request which will be
used in absence of request level timeout.
TEST: Added new tests in SearchCancellationIT
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
* Address Review feedback for Part 1
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
* Address review feedback for Part 2
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
* Update CancellableTask to remove the cancelOnTimeout boolean flag
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
* Replace the search.cancellation.timeout cluster setting with search.enforce_server.timeout.cancellation to control whether the cluster level cancel_after_time_interval should take precedence over the request level cancel_after_time_interval value
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
* Remove the search.enforce_server.timeout.cancellation cluster setting and keep only the search.cancel_after_time_interval setting, with the request level parameter taking precedence.
Signed-off-by: Sorabh Hamirwasia <sohami.apache@gmail.com>
Co-authored-by: Sorabh Hamirwasia <hsorabh@amazon.com>
* Lower build requirement from Java 14+ to Java 11+
Avoid use of -Werror -Xlint:all, which may change significantly across
Java releases (new warnings could be added). Instead, just list the
warnings individually.
Work around a JDK 11 compiler bug (JDK-8209058) that only impacts test fixture
code in the build itself.
Signed-off-by: Robert Muir <rmuir@apache.org>
* Disable warnings around -source 7 -release 7 for the java version checker
The java version checker triggers some default warnings because it
targets Java 7:
```
> Task :distribution:tools:java-version-checker:compileJava FAILED
warning: [options] source value 7 is obsolete and will be removed in a future release
warning: [options] target value 7 is obsolete and will be removed in a future release
warning: [options] To suppress warnings about obsolete options, use -Xlint:-options.
error: warnings found and -Werror specified
```
Suppress this warning explicitly for this module.
Signed-off-by: Robert Muir <rmuir@apache.org>
* More java14 -> java11 cleanup
Signed-off-by: Robert Muir <rmuir@apache.org>
Co-authored-by: Robert Muir <rmuir@apache.org>
Fixes the cat.health yaml failures when running in a bwc mixed cluster with
legacy (pre 1.0.0) nodes.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Version checks are incorrectly returning versions < 1.0.0.
Signed-off-by: dblock <dblock@amazon.com>
* Removed V_7_10_3, which had not been released as of the time of the fork.
Signed-off-by: dblock <dblock@amazon.com>
* Update check for current version to get unreleased versions.
- no unreleased version if the current version is "1.0.0"
- add unit tests for OpenSearch 1.0.0 with legacy ES versions.
- update VersionUtils to include all legacy ES versions as released.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Co-authored-by: Rabi Panda <adnapibar@gmail.com>
This commit fixes mixedCluster and rolling upgrades by spoofing OpenSearch
version 1.0.0 as Legacy version 7.10.2. With this commit an OpenSearch 1.x node
can join a legacy (<= 7.10.2) cluster and rolling upgrades work as expected.
Mixed clusters will not work beyond the duration of the upgrade since shards
cannot be replicated from upgraded nodes to nodes running older versions.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
Co-authored-by: Shweta Thareja <tharejas@amazon.com>
* An allocation constraint mechanism that de-prioritizes nodes from being picked for allocation if they breach certain constraints (sketched below)
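A rough sketch of the idea (illustrative names only, not the actual allocator classes): breaching a constraint adds a large weight penalty so the node is picked last rather than excluded outright.
```java
import java.util.List;

final class ConstraintAwareWeighing {

    interface AllocationConstraint {
        boolean breached(String index, String nodeId);
    }

    // Nodes that breach any constraint receive a large extra weight, so the balancer
    // picks them last for new shards of this index without forbidding them entirely.
    static long constraintPenalty(String index, String nodeId, List<AllocationConstraint> constraints) {
        long penalty = 0;
        for (AllocationConstraint constraint : constraints) {
            if (constraint.breached(index, nodeId)) {
                penalty += 1_000_000L;
            }
        }
        return penalty;
    }
}
```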
Signed-off-by: Ashwin Pankaj <appankaj@amazon.com>
* Adds a gradle plugin to check for missing javadocs
Use `./gradlew missingJavadoc` to check for missing javadocs.
Currently this task fails because several modules are missing
appropriate javadocs. Once added, this should pass.
Also, the precommit PomValidation check currently fails with the missing Javadoc
plugin, which needs to be fixed -
https://github.com/opensearch-project/OpenSearch/issues/449
Thus this is kept in a separate feature branch.
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Fix Javadoc errors in module `client/rest` (#685)
* Fix Javadoc errors in client/rest module
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Add package info file in client/rest module
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Fix typos
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Add exception documentation to Javadoc
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Fixes precommit task configuration failures due to newly added missingJavadoc task (#707)
* Fixes precommit task configuration failures due to newly added missingJavadoc task
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Fixes javadoc task errors due to PR#685
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Updated CONTRIBUTING.md for info on javadocs
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Correcting licenses and naming
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Correcting version info
Signed-off-by: Himanshu Setia <setiah@amazon.com>
Co-authored-by: Gregor Zurowski <gregor@zurowski.org>
This commit adds support for data streams by adding a DataStreamFieldMapper and making the
timestamp field name configurable. Backwards compatibility is supported.
Signed-off-by: Ketan Verma <ketan9495@gmail.com>
Hadoop 2.8.5 has been reported to have CVEs (https://bugzilla.redhat.com/show_bug.cgi?id=1883549). We need to upgrade it to 2.10.1. This also updates the hadoop-minicluster version to 2.10.1. The upgrade brings in two additional dependencies, woodstox-core and stax2-api, which are added along with their sha1s, licenses and notices.
Also upgrade guava to the latest version per the CVE https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
The `hadoop-minicluster:2.8.5` dependency, which is used for integration tests, has dependencies that have been flagged for security vulnerabilities. Update it to the latest version, `3.3.0`.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
Instead of snapshot deletion of stale indices being a single threaded operation, this commit makes
it a multithreaded operation and deletes multiple stale indices in parallel using the SNAPSHOT
threadpool's workers.
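A rough sketch of the fan-out described above (illustrative, not the exact repository code; deleteStaleIndex is a hypothetical stand-in for the blob deletion logic):
```java
import java.util.Collection;
import java.util.concurrent.Executor;
import org.opensearch.action.ActionListener;
import org.opensearch.action.ActionRunnable;
import org.opensearch.action.support.GroupedActionListener;
import org.opensearch.threadpool.ThreadPool;

final class StaleIndexCleaner {
    // Fan out one delete task per stale index on the SNAPSHOT threadpool and
    // complete the listener once all of them have finished.
    static void deleteStaleIndices(ThreadPool threadPool, Collection<String> staleIndices,
                                   ActionListener<Void> listener) {
        Executor executor = threadPool.executor(ThreadPool.Names.SNAPSHOT);
        GroupedActionListener<Void> grouped = new GroupedActionListener<>(
            ActionListener.wrap(ignored -> listener.onResponse(null), listener::onFailure),
            staleIndices.size());
        for (String staleIndex : staleIndices) {
            executor.execute(ActionRunnable.run(grouped, () -> deleteStaleIndex(staleIndex)));
        }
    }

    // Hypothetical helper standing in for the actual stale-index blob deletion.
    private static void deleteStaleIndex(String staleIndex) {
        // delete the blobs belonging to this stale index (details elided)
    }
}
```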
Signed-off-by: Piyush Daftary <piyush.besu@gmail.com>