* [Upgrade] Lucene 9.1.0-snapshot-ea989fe8f30
Upgrades from Lucene 9.0.0 to 9.1.0-snapshot-ea989fe8f30 in preparation for
9.1.0 GA.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Add spanishplural token filter
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Fix KNOWN_TOKENIZERS
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
Removes include_type_name from the high level rest client along with relevant
deprecated methods in IndicesClient. All tests are updated to remove the
parameter from the rest requests along with various toXContent methods that are
no longer required.
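A minimal sketch of the resulting typeless index creation (client and package names assume the post-rename high level rest client; the index name is illustrative):
```java
import java.io.IOException;

import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;
import org.opensearch.client.indices.CreateIndexRequest;
import org.opensearch.client.indices.CreateIndexResponse;

class TypelessCreateIndexSketch {
    CreateIndexResponse createIndex(RestHighLevelClient client) throws IOException {
        // No include_type_name parameter and no mapping type wrapper anymore.
        CreateIndexRequest request = new CreateIndexRequest("my-index");
        return client.indices().create(request, RequestOptions.DEFAULT);
    }
}
```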
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
Removes type support from DocWriteRequest and DocWriteResponse, all derived classes,
and all places used.
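A hedged sketch of a typeless write request after this change (package names assume the post-rename codebase; index, id, and source are illustrative):
```java
import org.opensearch.action.index.IndexRequest;
import org.opensearch.common.xcontent.XContentType;

class TypelessIndexRequestSketch {
    IndexRequest buildRequest() {
        return new IndexRequest("my-index")          // index name only, no mapping type
            .id("1")
            .source("{\"field\":\"value\"}", XContentType.JSON);
    }
}
```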
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Remove type param from yml and integ test files
Signed-off-by: Suraj Singh <surajrider@gmail.com>
* Remove type mapping specific update API
Signed-off-by: Suraj Singh <surajrider@gmail.com>
Removes type support from the get and mget transport and rest actions along
with removal from ShardGetService.
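A hedged sketch of typeless get and multi-get requests after this change (package names assume the post-rename codebase; names are illustrative):
```java
import org.opensearch.action.get.GetRequest;
import org.opensearch.action.get.MultiGetRequest;

class TypelessGetSketch {
    GetRequest get = new GetRequest("my-index", "1");             // was (index, type, id)
    MultiGetRequest mget = new MultiGetRequest()
        .add(new MultiGetRequest.Item("my-index", "1"));          // item without a type
}
```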
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Remove doc type specific indexing APIs and relevant changes
Signed-off-by: Suraj Singh <surajrider@gmail.com>
* Remove type param from yml and integ test files
Signed-off-by: Suraj Singh <surajrider@gmail.com>
Remove the deprecated sync flush API, which was replaced by the sequence
number and retention lease mechanisms and is no longer used in 2.0.0.
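A minimal sketch of the replacement call path, assuming the high level rest client: a plain flush is sufficient now that sequence numbers and retention leases drive recovery (the index name is illustrative):
```java
import java.io.IOException;

import org.opensearch.action.admin.indices.flush.FlushRequest;
import org.opensearch.client.RequestOptions;
import org.opensearch.client.RestHighLevelClient;

class FlushInsteadOfSyncedFlushSketch {
    void flush(RestHighLevelClient client) throws IOException {
        // POST /my-index/_flush; the removed _flush/synced endpoint is no longer needed.
        client.indices().flush(new FlushRequest("my-index"), RequestOptions.DEFAULT);
    }
}
```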
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit rebases the versioning to OpenSearch 1.0.0
Co-authored-by: Rabi Panda <adnapibar@gmail.com>
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit adds the SPDX Apache-2.0 license header along with an additional
copyright header for all modifications.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
This commit refactors the remaining o.e.index and o.e.test packages in the
test/fixtures module. References throughout the codebase are also refactored.
Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
* Rename directory elasticsearch to opensearch
Rename EvilElasticsearchCliTests to EvilOpenSearchCliTests
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename org.elasticsearch to org.opensearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename waitForElasticsearch to waitForOpenSearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename OpensearchNode to OpenSearchNode
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename "elasticsearch.version" to "opensearch.version"
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearchVersionString to opensearchVersionString
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch.yml to opensearch.yml
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename runElasticsearchTests to runOpenSearchTests
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearchVersion to opensearchVersion
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch to opensearch in gradle files
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename ElasticsearchAssertions to OpenSearchAssertions
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename folder share/elasticsearch to share/opensearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch to opensearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch-service-x64 to opensearch-service-x64
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch-service.bat to opensearch-service.bat
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename Elasticsearch to Opensearch
Rename elasticsearch to opensearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename ELASTIC_PASSWORD_FILE to OPENSEARCH_PASSWORD_FILE
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename ELASTICSEARCH_PASSWORD to OPENSEARCH_PASSWORD
Rename elasticsearch to opensearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch to opensearch
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch.log to opensearch.log
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename es-repo to opensearch-repo
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename ESTestCase to OpenSearchTestCase
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename ESRestTestCase to OpenSearchRestTestCase
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch to opensearch
Rename "Starts ElasticSearch" to "Starts OpenSearch"
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename ESElasticsearchCliTestCase to BaseOpenSearchCliTestCase
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename "elasticsearch:test" to "opensearch:test"
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename test91ElasticsearchShardCliPackaging to test91OpenSearchShardCliPackaging
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch.toString to opensearch.toString
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename elasticsearch.pid to opensearch.pid
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename "Opensearch" to "OpenSearch"
Rename "elasticsearch" to "opensearch"
Signed-off-by: Harold Wang <harowang@amazon.com>
* Rename Elasticsearch to OpenSearch
Remove unnecessary dot after opensearch.
Signed-off-by: Harold Wang <harowang@amazon.com>
* [Rename] Rename qa folder
Signed-off-by: Harold Wang <harowang@amazon.com>
* Remove the dot at the end of "package org.opensearch."
Signed-off-by: Harold Wang <harowang@amazon.com>
* Add semicolon
Signed-off-by: Harold Wang <harowang@amazon.com>
This PR adds deprecation warnings when accessing System Indices via the REST layer. At this time, these warnings are only enabled for Snapshot builds by default, to allow projects external to Elasticsearch additional time to adjust their access patterns.
Deprecation warnings will be triggered by all REST requests which access registered System Indices, except for a few purpose-specific APIs which access System Indices as an implementation detail and will continue to allow access to system indices by default:
- `GET _cluster/health`
- `GET {index}/_recovery`
- `GET _cluster/allocation/explain`
- `GET _cluster/state`
- `POST _cluster/reroute`
- `GET {index}/_stats`
- `GET {index}/_segments`
- `GET {index}/_shard_stores`
- `GET _cat/[indices,aliases,health,recovery,shards,segments]`
Deprecation warnings for accessing system indices take the form:
```
this request accesses system indices: [.some_system_index], but in a future major version, direct access to system indices will be prevented by default
```
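The warning above is delivered through the HTTP `Warning` response header. A hedged client-side sketch (not part of this change) of tolerating it via the low-level REST client's warnings handler, assuming the pre-fork `org.elasticsearch.client` packages:
```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.WarningsHandler;

class SystemIndexWarningSketch {
    // PERMISSIVE never fails a request on warnings; STRICT would fail on any warning.
    RequestOptions lenientOptions = RequestOptions.DEFAULT.toBuilder()
        .setWarningsHandler(WarningsHandler.PERMISSIVE)
        .build();
}
```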
This commit adds the `index.routing.allocation.prefer._tier` setting to the
`DataTierAllocationDecider`. This special-purpose allocation setting lets a user specify a
preference-based list of tiers for an index to be assigned to. For example, if the setting were set
to:
```
"index.routing.allocation.prefer._tier": "data_hot,data_warm,data_content"
```
If the cluster contains any nodes with the `data_hot` role, the decider will only allow the index's
shards to be allocated on the `data_hot` node(s). If there are no `data_hot` nodes, but there are
`data_warm` and `data_content` nodes, then the index will be allowed to be allocated on `data_warm` nodes.
This allows us to specify an index's preference for tier(s) without causing the index to be
unassigned if no nodes of a preferred tier are available.
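A hedged sketch of the preference resolution described above (illustrative helper, not the actual `DataTierAllocationDecider` code): the first tier in the list that has at least one node in the cluster is the tier allocation is restricted to.
```java
import java.util.List;
import java.util.Set;

class TierPreferenceSketch {
    static String preferredTier(List<String> preference, Set<String> tiersWithNodes) {
        for (String tier : preference) {
            if (tiersWithNodes.contains(tier)) {
                return tier;      // e.g. "data_hot" wins as long as any data_hot node exists
            }
        }
        return null;              // no preferred tier has nodes: the index is not left unassigned
    }
}
```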
Subsequent work will change the ILM migration to make additional use of this setting.
Relates to #60848
This commit adds the functionality to allocate newly created indices on nodes in the "hot" tier by
default when they are created.
This does not break existing behavior, as nodes with the `data` role are considered to be part of
the hot tier. Users that separate their deployments by using the `data_hot` (and `data_warm`,
`data_cold`, `data_frozen`) roles will have their data allocated on the hot tier nodes now by
default.
This change is a little more complicated than changing the default value for
`index.routing.allocation.include._tier` from null to "data_hot". Instead, this adds the ability to
have a plugin inject a setting into the builder for a newly created index. This has the benefit of
allowing this setting to be visible as part of the settings when retrieving the index, for example:
```
// Create an index
PUT /eggplant
// Get an index
GET /eggplant?flat_settings
```
Retrieving the index now returns default settings of:
```json
{
"eggplant" : {
"aliases" : { },
"mappings" : { },
"settings" : {
"index.creation_date" : "1597855465598",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "1",
"index.provided_name" : "eggplant",
"index.routing.allocation.include._tier" : "data_hot",
"index.uuid" : "6ySG78s9RWGystRipoBFCA",
"index.version.created" : "8000099"
}
}
}
```
After it is initially set, this setting can be treated like any other index level setting.
This new setting is *not* set on a new index if any of the following is true:
- The index is created with an `index.routing.allocation.include.<anything>` setting
- The index is created with an `index.routing.allocation.exclude.<anything>` setting
- The index is created with an `index.routing.allocation.require.<anything>` setting
- The index is created with a null `index.routing.allocation.include._tier` value
- The index was created from existing source metadata (shrink, clone, split, etc.)
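A hedged sketch of the rule above (illustrative helper, not the actual plugin hook; the explicit-null `_tier` case is elided for brevity):
```java
import org.elasticsearch.common.settings.Settings;

class DefaultTierSketch {
    static Settings maybeAddDefaultTier(Settings requested) {
        boolean hasAllocationFilter =
            requested.getByPrefix("index.routing.allocation.include.").isEmpty() == false
                || requested.getByPrefix("index.routing.allocation.exclude.").isEmpty() == false
                || requested.getByPrefix("index.routing.allocation.require.").isEmpty() == false;
        if (hasAllocationFilter) {
            return requested;      // the user controls allocation filtering; leave settings alone
        }
        return Settings.builder()
            .put(requested)
            .put("index.routing.allocation.include._tier", "data_hot")
            .build();
    }
}
```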
Relates to #60848
Converting AllFieldMapper to parametrized form ended up not being run through BWC
testing, resulting in an incorrect implementation being committed. This commit fixes
the serialization, and adds unit tests as well as unmuting the BWC test that uncovered
the bug.
Fixes #60986
This commit introduces a new thread pool, `system_read`, which is
intended for use by system indices for all read operations (get and
search). The `system_read` pool is a fixed thread pool with a maximum
number of threads equal to the lesser of half of the available processors
or 5. Given the combination of both get and search operations in this
thread pool, the queue size has been set to 2000. The motivation for
this change is to allow system read operations to be serviced in spite
of the number of user searches.
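A hedged sketch of the sizing rule stated above (the rounding details of the real half-the-processors helper are elided):
```java
class SystemReadPoolSizeSketch {
    static final int QUEUE_SIZE = 2000;

    // Fixed pool: at most five threads, and never more than half the processors.
    static int systemReadThreads(int availableProcessors) {
        return Math.max(1, Math.min(5, availableProcessors / 2));
    }
}
```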
In order to avoid a significant performance hit due to pattern matching
on all search requests, a new metadata flag is added to mark indices
as system or non-system. Previously created system indices will have the
flag added to their metadata upon upgrade to a version with this
capability.
Additionally, this change also introduces a new class, `SystemIndices`,
which encapsulates logic around system indices. Currently, the class
provides a method to check if an index is a system index and a method
to find a matching index descriptor given the name of an index.
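A hedged, self-contained sketch of those two lookups; the real `SystemIndices` class works against `SystemIndexDescriptor` pattern matching rather than this illustrative descriptor type:
```java
import java.util.List;
import java.util.regex.Pattern;

class SystemIndicesSketch {
    static final class Descriptor {
        final Pattern indexPattern;
        Descriptor(String regex) { this.indexPattern = Pattern.compile(regex); }
    }

    private final List<Descriptor> descriptors;

    SystemIndicesSketch(List<Descriptor> descriptors) {
        this.descriptors = descriptors;
    }

    boolean isSystemIndex(String index) {
        return findMatchingDescriptor(index) != null;
    }

    Descriptor findMatchingDescriptor(String index) {
        for (Descriptor descriptor : descriptors) {
            if (descriptor.indexPattern.matcher(index).matches()) {
                return descriptor;
            }
        }
        return null;
    }
}
```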
Relates #50251
Relates #37867
Backport of #57936
These tests sometimes install a template so they can be compatible with older versions, but they run
afoul of the occasionally installed "global" template which changes the default number of shards.
This commit adds `allowedWarnings` and allows these warnings to be present, but doesn't fail if they
are not (since the global template is only randomly installed).
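A hedged sketch of the allowed-warnings semantics described above: the listed warnings may appear, but their absence is not an error; any warning outside the allowed set still fails.
```java
import java.util.List;
import java.util.Set;

class AllowedWarningsSketch {
    static boolean shouldFail(List<String> returnedWarnings, Set<String> allowedWarnings) {
        // Fail only on warnings that were not explicitly allowed.
        return returnedWarnings.stream().anyMatch(w -> allowedWarnings.contains(w) == false);
    }
}
```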
Resolves #58807
Resolves #58258
If the recovery source is on an old node (before 7.2), then the recovery
target won't have the safe commit after phase1 because the recovery
source does not send the global checkpoint in the clean_files step. And
if the recovery fails and retries, then the recovery stage won't
transition properly. If a sync_id is used in peer recovery, then the
clean_files step won't be executed to move the stage to TRANSLOG.
Relates ##7187
Closes #57708
This template was added for 7.0 for what I am guessing is a BWC issue related to deprecation
warnings. It unfortunately seems to cause failures because templates for these tests are not cleared
after the test (because these are upgrade tests).
Resolves #56363
As the data stream information is stored in `ClusterState.Metadata`, we exposed
the `Metadata` to the `AsyncWaitStep#evaluateCondition` method so that
the steps are able to identify when a managed index is part of a data stream.
If a managed index is part of a data stream, the rollover target is the data stream
name and the highest generation index is the write index (i.e. the rolled index).
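A hedged sketch of the rollover-target resolution described above; method names follow the 7.x `Metadata`/`IndexAbstraction` API and details may differ from the actual ILM step code:
```java
import org.elasticsearch.cluster.metadata.IndexAbstraction;
import org.elasticsearch.cluster.metadata.Metadata;

class RolloverTargetSketch {
    static String rolloverTarget(Metadata metadata, String managedIndex, String configuredAlias) {
        IndexAbstraction abstraction = metadata.getIndicesLookup().get(managedIndex);
        if (abstraction != null && abstraction.getParentDataStream() != null) {
            // The managed index belongs to a data stream: roll over the data stream itself.
            return abstraction.getParentDataStream().getName();
        }
        return configuredAlias;    // plain index: keep using the configured rollover alias
    }
}
```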
(cherry picked from commit 6b410dfb78f3676fce1b7401f1628c1ca6fbd45a)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
I've noticed that a lot of our tests are using deprecated static methods
from the Hamcrest matchers. While this is not a big deal in any
objective sense, it seems like a small good thing to reduce compilation
warnings and be ready for a new release of the matcher library if we
need to upgrade. I've also switched a few other methods in tests that
have drop-in replacements.
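A hedged example of the kind of swap involved (Hamcrest 2.x deprecates combined statics such as `isIn`/`isOneOf` in favor of composing `is(...)` with the plain matcher):
```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.in;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.oneOf;

import java.util.Arrays;

class HamcrestSwapSketch {
    void example() {
        // before: assertThat("b", isIn(Arrays.asList("a", "b")));   // deprecated static
        assertThat("b", is(in(Arrays.asList("a", "b"))));
        // before: assertThat(2, isOneOf(1, 2, 3));                  // deprecated static
        assertThat(2, is(oneOf(1, 2, 3)));
    }
}
```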
This is a simple naming change PR, to fix the fact that "metadata" is a
single English word, and for too long we have not followed general
naming conventions for it. We are also not consistent about it, for
example, METADATA instead of META_DATA if we were trying to be
consistent with MetaData (although METADATA is correct when considered
in the context of "metadata"). This was a simple find and replace across
the code base, only taking a few minutes to fix this naming issue
forever.
If an index was created in version 6 and contains a date field with a Joda-style pattern, it should still be possible to search that index and insert documents into it.
For indices created in 6, date patterns that start with 8 should be considered java style.
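Illustrative only, per the convention above: a mapping date format that starts with "8" on a 6.x-created index is read as a java-time pattern, while other patterns on such indices keep their Joda interpretation.
```java
class DateFormatPrefixSketch {
    static boolean isJavaTimePattern(String format) {
        return format.startsWith("8");   // e.g. "8yyyy-MM-dd" is java-time; "YYYY-MM-dd" stays Joda
    }
}
```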
Today, the replica allocator uses peer recovery retention leases to
select the best-matched copies when allocating replicas of indices with
soft-deletes. We can employ this mechanism for indices without
soft-deletes because the retaining sequence number of a PRRL is the
persisted global checkpoint (plus one) of that copy. If the primary and
replica have the same retaining sequence number, then we should be able
to perform a noop recovery. The reason is that we must be retaining
translog up to the local checkpoint of the safe commit, which is at most
the global checkpoint of either copy. The only limitation is that we
might not cancel ongoing file-based recoveries with PRRLs for noop
recoveries. We can't make the translog retention policy comply with
PRRLs. We also have this problem with soft-deletes if a PRRL is about to
expire.
Relates #45136
Relates #46959
We need to make sure that the global checkpoints and peer recovery
retention leases were advanced to the max_seq_no and synced; otherwise,
we can risk expiring some peer recovery retention leases because of the
file-based recovery threshold.
Relates #49448
Fixes the muted test "testAutoExpandIndicesDuringRollingUpgrade". We can't wait in the test for
the index to be green, as we have put a filter exclusion into place that prevents all shards from
being allocated after a node rejoins. Instead we check whether the correct auto-expansion has
taken place.
Closes #50426