* Adds a Gradle plugin to check for missing javadocs
Use `./gradlew missingJavadoc` to check for missing javadocs.
Currently this task fails because several modules are missing
appropriate javadocs; once those are added, it should pass.
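For illustration, the kind of javadoc the missingJavadoc task checks for can be as small as a package-level description in a `package-info.java` file; this is a sketch with an example package name, not code from the change:

```java
/**
 * An example package description: the missingJavadoc task requires every
 * package and public class to carry javadoc like this. The package name
 * below is purely illustrative.
 */
package org.opensearch.example;
```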
Also, the precommit PomValidation check currently fails with the missing Javadoc
plugin and needs to be fixed -
https://github.com/opensearch-project/OpenSearch/issues/449
This change is therefore kept on a separate feature branch.
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Fix Javadoc errors in module `client/rest` (#685)
* Fix Javadoc errors in client/rest module
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Add package-info file in client/rest module
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Fix typos
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
* Add exception documentation to Javadoc
Signed-off-by: Gregor Zurowski <gregor@zurowski.org>
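For illustration, exception documentation of the kind added here takes roughly the following shape; the interface and method are an illustrative stand-in, not the exact `client/rest` code:

```java
import java.io.IOException;

/** Illustrative stand-in for a REST client whose javadocs were fixed. */
public interface ExampleRestClient {

    /**
     * Sends a request to the given endpoint on the cluster.
     *
     * @param endpoint the path of the request (without scheme, host or port)
     * @return the response body returned by the cluster
     * @throws IOException if a connection problem occurs or the request is aborted
     */
    String performRequest(String endpoint) throws IOException;
}
```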
* Fixes precommit task configuration failures due to newly added missingJavadoc task (#707)
* Fixes precommit task configuration failures due to newly added missingJavadoc task
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Fixes javadoc task errors due to PR #685
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Updated CONTRIBUTING.md with info on javadocs
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Correcting licenses and naming
Signed-off-by: Himanshu Setia <setiah@amazon.com>
* Correcting version info
Signed-off-by: Himanshu Setia <setiah@amazon.com>
Co-authored-by: Gregor Zurowski <gregor@zurowski.org>
Hadoop 2.8.5 has been reported to have CVEs (https://bugzilla.redhat.com/show_bug.cgi?id=1883549), so we need to upgrade it to 2.10.1. This also updates the hadoop-minicluster version to 2.10.1. The upgrade brings in two additional dependencies, woodstox-core and stax2-api, which are added along with their sha1s, licenses and notices.
Also upgrade guava to the latest version, per CVE-2020-8908 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908).
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
The `hadoop-minicluster:2.8.5` dependency, which is used for integration tests, has dependencies that have been flagged for security vulnerabilities. Update it to the latest version, `3.3.0`.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit fixes some partial rename issues and, as a result, the failing secure repository-hdfs tests.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
This commit adds the SPDX Apache-2.0 license header along with an additional
copyright header for all modifications.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
Refactor the code in the `libs/core` module and any references to it across the entire code base. The refactoring is done as part of the rename to OpenSearch work.
Signed-off-by: Rabi Panda <adnapibar@gmail.com>
* [Rename] o.e.common subpackages round 1
This commit refactors the following subpackages of o.e.common:
* o.e.common.joda
* o.e.common.lease
* o.e.common.metrics
* o.e.common.network
* o.e.common.path
* o.e.common.recycling
* o.e.common.regex
* o.e.common.rounding
* o.e.common.text
* o.e.common.time
* o.e.common.transport
to the o.opensearch namespace. All references throughout the codebase have been
refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
* fix imports 1
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the following packages:
* o.e.common.geo
* o.e.common.hash
* o.e.common.io
into the o.opensearch.common namespace. All references throughout the codebase
have been refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors the following packages:
* o.e.common.blobstore
* o.e.common.breaker
* o.e.common.bytes
to the o.opensearch.common namespace. All references throughout the codebase
have been refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This commit refactors classes under o.e.common to o.opensearch.common. All
references throughout the codebase have also been refactored.
Signed-off-by: Nicholas Knize <nknize@amazon.com>
This pull request adds a new set of APIs that allow tracking the number of requests performed
by the different registered repositories.
In order to avoid losing data, the repository statistics are archived for a configurable
retention period, `repositories.stats.archive.retention_period`, after the repository is closed. The API exposes the
statistics for the active repositories as well as for the modified/closed ones.
Backport of #60371
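A sketch of how such a time setting is typically declared with the Settings infrastructure; the class name and the default value here are assumptions for illustration, not the PR's actual values:

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.unit.TimeValue;

public final class RepositoriesStatsArchiveSettings {
    // Illustrative declaration; the 2h default is an assumption for this sketch.
    public static final Setting<TimeValue> RETENTION_PERIOD_SETTING = Setting.positiveTimeSetting(
        "repositories.stats.archive.retention_period",
        TimeValue.timeValueHours(2),
        Setting.Property.NodeScope);
}
```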
This commit adds external test modules. These are modules meant for
external systems to test edge cases in elasticsearch, but only within
snapshot builds. They are not meant to be used in production, so protections
are also added against their accidental inclusion in release builds.
Note that this commit does not actually add any new modules, it only
adds the infrastructure for the new modules, under
`test/external-modules`.
This commit changes the value for client name canonicalization to true
in the krb5.conf template file. This is done as a workaround for
JDK-8246193, which has made it into some builds of JDK 8.
Closes #61050
Same as https://github.com/elastic/elasticsearch/pull/43288, but for GCS.
We don't need to do the bucket-exists check before using the repo; it just needlessly
increases the permissions required to use the GCS repository.
In order to ensure that we do not write a broken piece of `RepositoryData`
because the physical repository generation was moved ahead more than one step
by erroneous concurrent writes to a repository, we must check whether
the assumed current repository generation physically exists in the repository.
Without this check we run the risk of writing on top of stale cached repository data.
Relates #56911
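Conceptually, the added guard looks something like the following sketch; the helper names are hypothetical, chosen for illustration:

```java
// Before writing the next generation, verify that the generation we believe
// to be the current one physically exists in the repository. If it does not,
// another writer has moved the repository ahead and our cached
// RepositoryData is stale.
private void ensureCurrentGenerationExists(long expectedGen) throws IOException {
    if (blobContainer().blobExists("index-" + expectedGen) == false) {
        throw new RepositoryException(metadata.name(),
            "expected current repository generation [" + expectedGen + "] to physically exist");
    }
}
```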
* Replace compile configuration usage with api (#58451)
- Use the java-library plugin instead of the java plugin to allow usage of the api configuration
- Remove explicit references to runtime configurations in dependency declarations
- Make the test runtime classpath an input for the testing convention
  - required because the java-library plugin will, by default, not have built the jar file
  - the jar file is now an explicit input of the task, and Gradle will ensure it is properly built
* Fix compile usages in the 7.x branch
* Fix GCS Mock Behavior for Missing Bucket
We were throwing a 500 instead of a 404 for a missing bucket.
This made the YAML test that checks behavior for missing buckets needlessly
wait multiple seconds, retrying the 500 response with backoff.
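A minimal sketch of the corrected behavior, assuming the mock is implemented as a `com.sun.net.httpserver` handler with an in-memory `buckets` map (names illustrative):

```java
// Inside the mock's handle(HttpExchange exchange) method: answer 404 rather
// than 500 for an unknown bucket so clients fail fast instead of retrying.
if (buckets.containsKey(bucketName) == false) {
    exchange.sendResponseHeaders(404, -1); // -1 => no response body
    exchange.close();
    return;
}
```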
A few relatively obvious issues here:
* We cannot run the different IT runs (the large-blob-setting one and the normal integ run) concurrently
* We need to set the dependency tasks up correctly for the large blob run so that it works in isolation
* We can't use the `localAddress` for the location header of the resumable upload
(this breaks in YAML tests because GCS is using a loopback port forward for the initial request and the
local address will be chosen as the actual Docker container host)
Closes #57026
Two spots that allow for some optimization (see the sketch after this list):
* We often create a composite reference of just a single item in
the transport layer => special-cased via a static constructor to make sure we never do that
* Also removed the pointless case of an empty composite bytes ref
* `ByteBufferReference` is practically always created from a heap buffer these days, so there
is no point in dealing with all the bounds checks and extra references to sliced buffers;
we can just use the underlying array directly
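A minimal sketch of the single-item special case via a static constructor (shape and names illustrative, not the exact production code):

```java
// Avoid allocating a composite wrapper when it would hold zero or one references.
public static BytesReference composite(BytesReference... references) {
    if (references.length == 0) {
        return BytesArray.EMPTY;  // an empty composite is pointless
    }
    if (references.length == 1) {
        return references[0];     // a single item needs no wrapper
    }
    return new CompositeBytesReference(references);
}
```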
To read from GCS repositories we're currently using the Google SDK's official BlobReadChannel,
which issues a new range request every 2MB (the default chunk size for BlobReadChannel)
and fully downloads each chunk before exposing it to the returned InputStream. This
means that the SDK issues an awfully high number of requests to download large blobs.
Increasing the chunk size is not an option, as that would mean an excessive amount of
heap memory being consumed by the download process.
The Google SDK does not provide the right abstractions for a streaming download. This PR
uses the lower-level primitives of the SDK to implement a streaming download, similar to what
S3's SDK does.
Also closes #55505
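A conceptual sketch of a streaming, ranged read; the actual change builds on lower-level SDK primitives, so treat these high-level calls as illustrative only:

```java
import com.google.cloud.ReadChannel;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;

// Expose the blob as a stream that reads from the given offset onwards,
// instead of eagerly downloading fixed-size chunks.
static InputStream readFrom(Storage storage, String bucket, String blob, long position) throws IOException {
    ReadChannel reader = storage.reader(BlobId.of(bucket, blob));
    reader.seek(position); // start of the requested range
    return Channels.newInputStream(reader);
}
```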
Adds ranged read support for GCS repositories in order to enable searchable snapshot support
for GCS.
As part of this PR, I've extracted some of the test infrastructure to make sure that
GoogleCloudStorageBlobContainerRetriesTests and S3BlobContainerRetriesTests are covering
similar tests (as I saw them diverging in what they cover).
We have some Dockerfiles that reference Ubuntu 19.04, which is not an LTS
version and now appears to have been retired from the Ubuntu repositories.
Switch to 18.04, which is the current long-term support version. This
also requires a switch from OpenJDK 12 to 11.
Also change a usage of 16.04 to 18.04, for consistency.
We have some Dockerfiles that reference Ubuntu 19.04, which is not an LTS
version and now appears to have been retired from the Ubuntu repositories.
Switch to 18.04, which is the current long-term support version. Also change a
usage of 16.04 to 18.04, for consistency.
Currently forbidden apis accounts for 800+ tasks in the build. These
tasks are aggressively created by the plugin. In forbidden apis 3.0 we
will get task avoidance
(https://github.com/policeman-tools/forbidden-apis/pull/162), but we
ourselves need to use the same task-avoidance mechanisms to avoid triggering
these task creations. This commit does that for our forbidden apis
usages, in preparation for upgrading to 3.0 when it is released.
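A minimal sketch of the task-avoidance idiom in a Java-based Gradle plugin: `tasks.register` defers task creation until the task is actually needed, where `tasks.create` would instantiate it eagerly during configuration (plugin and task names illustrative):

```java
import org.gradle.api.DefaultTask;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.TaskProvider;

public class ForbiddenApisExamplePlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // Lazily registered: the configuration action only runs if the task is requested.
        TaskProvider<DefaultTask> forbidden = project.getTasks()
            .register("forbiddenApisExample", DefaultTask.class, task ->
                task.setDescription("Illustrative stand-in for a forbidden-apis check task"));
    }
}
```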
This is a backport of #54803 for 7.x.
This pull request cherry-picks the squashed commit from #54803 with the additional commits:
6f50c92 which adjusts master code to 7.x
a114549 to mute a failing ILM test (#54818)
48cbca1 and 50186b2 that cleans up and fixes the previous test
aae12bb that adds a missing feature flag (#54861)
6f330e3 that adds missing serialization bits (#54864)
bf72c02 that adjust the version in YAML tests
a51955f that adds some plumbing for the transport client used in integration tests
Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Andrei Dan <andrei.dan@elastic.co>
This reverts commit 23cccf088810b8416ed278571352393cc2de9523.
Unfortunately, SAS token auth still doesn't work with bulk deletes, so we can't use them yet.
Closes #54080
Upgrading the GCS SDK to the most recent version.
Adjusting (i.e. improving) the REST mock accordingly.
This should significantly boost performance by pulling in
https://github.com/googleapis/java-core/issues/86 in some cases.
We were not correctly respecting the download range, which led
to the GCS SDK client closing the connection at times.
Also fixes another instance of failing to drain the request fully before sending the response headers.
Closes #51446
There is an open JDK bug that is causing an assertion in the JDK's
http server to trip if we don't drain the request body before sending response headers.
See https://bugs.openjdk.java.net/browse/JDK-8180754
Working around this issue here by always draining the request at the beginning of the handler.
Fixes #51446
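A minimal sketch of the workaround, using the `com.sun.net.httpserver` API that the mock handlers are built on:

```java
import com.sun.net.httpserver.HttpExchange;
import java.io.IOException;
import java.io.InputStream;

// Drain the request body fully before sending response headers, working
// around https://bugs.openjdk.java.net/browse/JDK-8180754.
static void drainAndRespond(HttpExchange exchange) throws IOException {
    try (InputStream body = exchange.getRequestBody()) {
        final byte[] buffer = new byte[8192];
        while (body.read(buffer) >= 0) {
            // discard the bytes; we only need the stream fully consumed
        }
    }
    exchange.sendResponseHeaders(200, -1); // -1 => no response body
    exchange.close();
}
```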
This test was still very GC-heavy in Java 8 runs in particular,
which seems to slow down request processing to the point of timeouts
in some runs.
This PR completely removes the large number of O(MB) `byte[]` allocations
that were happening in the mock http handler, which cuts the allocation rate
by about a factor of 5 in my local testing for the GC-heavy `testSnapshotWithLargeSegmentFiles`
run.
Closes #51446
Closes #50754
This solves half of the problem in #46813 by moving the S3
tests to the shared minio fixture so we at least have
some non-third-party, constantly running coverage of these tests.
* Faster and Simpler GCS REST Mock
I reworked the GCS mock a little to use less copying and allocation,
to log the full request body on failure to read a multi-part request,
and generally to be a little simpler and easier to follow, in order to track down
the remaining issues that are causing almost daily failures from this
class's multi-part request parsing and that can't be reproduced locally.
* Fix GCS Mock Broken Handling of some Blobs
We were incorrectly handling blobs starting with `\r\n`, which broke
tests randomly when blobs started with these bytes.
Relates #49429
* Better Logging in GCS Blobstore Mock
Two things:
1. We should just throw a descriptive assertion error and figure out why we're not reading a multi-part request, instead of
returning a `400` and failing the tests that way, since we can't reproduce these 400s locally.
2. We were not logging the exception on a cleanup delete failure that coincides with the `400` issue in tests.
Relates #49429
Batch deletes get a response for every delete request, not just those that actually hit an existing blob.
The fact that we only responded for existing blobs led to a degenerate response that throws a parse exception if a batch delete only contains non-existent blobs.
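A sketch of the fixed mock behavior (all names hypothetical): one sub-response is emitted per deletion in the batch, whether or not the blob existed:

```java
// 204 for a successful delete, 404 for a blob that did not exist; previously
// missing blobs produced no part at all, yielding an unparsable response when
// the batch contained only non-existent blobs.
for (String blobName : batchedDeletes) {
    boolean deleted = blobs.remove(blobName) != null;
    responseParts.add(subResponse(blobName, deleted ? 204 : 404));
}
```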
Removing a lot of needless buffering and array creation
to reduce the significant memory usage of tests using this.
The incoming stream from the `exchange` is already buffered,
so there is no point in adding a ton of additional buffers
everywhere.
Same as #49518, pretty much, but for GCS.
Fixing a few more spots where the input stream can get closed
without being fully drained, and adding assertions to make sure
it's always drained.
Moved the no-close stream wrapper to production code utilities since
there are a number of spots in production code where it's also useful
(will reuse it there in a follow-up).
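A minimal sketch of such a no-close wrapper, assuming a `FilterInputStream`-based shape (the actual utility may differ):

```java
import java.io.FilterInputStream;
import java.io.InputStream;

/** Returns a stream that ignores close(), protecting the underlying stream. */
public static InputStream noCloseStream(InputStream delegate) {
    return new FilterInputStream(delegate) {
        @Override
        public void close() {
            // Intentionally a no-op: callers must not be able to close the
            // underlying stream before it has been fully drained.
        }
    };
}
```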