// OpenSearch/docs/build.gradle


import static org.elasticsearch.gradle.testclusters.TestDistribution.DEFAULT
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
apply plugin: 'elasticsearch.docs-test'
/* List of files that have snippets that will not work until platinum tests can occur ... */
buildRestTests.expectedUnconvertedCandidates = [
'reference/ml/anomaly-detection/transforms.asciidoc',
'reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc',
'reference/ml/anomaly-detection/apis/get-bucket.asciidoc',
'reference/ml/anomaly-detection/apis/get-category.asciidoc',
'reference/ml/anomaly-detection/apis/get-influencer.asciidoc',
'reference/ml/anomaly-detection/apis/get-job-stats.asciidoc',
'reference/ml/anomaly-detection/apis/get-overall-buckets.asciidoc',
'reference/ml/anomaly-detection/apis/get-record.asciidoc',
'reference/ml/anomaly-detection/apis/get-snapshot.asciidoc',
'reference/ml/anomaly-detection/apis/post-data.asciidoc',
'reference/ml/anomaly-detection/apis/revert-snapshot.asciidoc',
'reference/ml/anomaly-detection/apis/update-snapshot.asciidoc',
]
testClusters.integTest {
if (singleNode().testDistribution == DEFAULT) {
setting 'xpack.license.self_generated.type', 'trial'
}
// enable regexes in painless so our tests don't complain about example snippets that use them
setting 'script.painless.regex.enabled', 'true'
setting 'path.repo', "${buildDir}/cluster/shared/repo"
Closure configFile = {
extraConfigFile it, file("src/test/cluster/config/$it")
}
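// For example, `configFile 'analysis/synonym.txt'` expands to:
// extraConfigFile 'analysis/synonym.txt', file('src/test/cluster/config/analysis/synonym.txt')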
configFile 'analysis/example_word_list.txt'
configFile 'analysis/hyphenation_patterns.xml'
configFile 'analysis/synonym.txt'
configFile 'analysis/stemmer_override.txt'
configFile 'userdict_ja.txt'
configFile 'userdict_ko.txt'
configFile 'KeywordTokenizer.rbbi'
extraConfigFile 'hunspell/en_US/en_US.aff', project(":server").file('src/test/resources/indices/analyze/conf_dir/hunspell/en_US/en_US.aff')
extraConfigFile 'hunspell/en_US/en_US.dic', project(":server").file('src/test/resources/indices/analyze/conf_dir/hunspell/en_US/en_US.dic')
// Whitelist reindexing from the local node so we can test it.
setting 'reindex.remote.whitelist', '127.0.0.1:*'
}
// build the cluster with all plugins
project.rootProject.subprojects.findAll { it.parent.path == ':plugins' }.each { subproj ->
/* Skip repositories. We just aren't going to be able to test them so it
* doesn't make sense to waste time installing them. */
if (subproj.path.startsWith(':plugins:repository-')) {
return
}
// FIXME
subproj.afterEvaluate { // need to wait until the project has been configured
testClusters.integTest {
plugin file(subproj.bundlePlugin.archiveFile)
}
tasks.integTest.dependsOn subproj.bundlePlugin
}
}
buildRestTests.docs = fileTree(projectDir) {
// No snippets in here!
exclude 'build.gradle'
// That is where the snippets go, not where they come from!
exclude 'build'
// Just syntax examples
exclude 'README.asciidoc'
// Broken code snippet tests
exclude 'reference/graph/explore.asciidoc'
}
listSnippets.docs = buildRestTests.docs
Closure setupTwitter = { String name, int count ->
buildRestTests.setups[name] = '''
- do:
indices.create:
index: twitter
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
user:
type: keyword
doc_values: true
date:
type: date
likes:
type: long
- do:
bulk:
index: twitter
refresh: true
body: |'''
for (int i = 0; i < count; i++) {
String user, text
if (i == 0) {
user = 'kimchy'
text = 'trying out Elasticsearch'
} else {
user = 'test'
text = "some message with the number $i"
}
buildRestTests.setups[name] += """
{"index":{"_id": "$i"}}
{"user": "$user", "message": "$text", "date": "2009-11-15T14:12:12", "likes": $i}"""
}
}
setupTwitter('twitter', 5)
setupTwitter('big_twitter', 120)
setupTwitter('huge_twitter', 1200)
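// For example, setupTwitter('twitter', 5) appends five documents to the bulk body, starting with:
// {"index":{"_id": "0"}}
// {"user": "kimchy", "message": "trying out Elasticsearch", "date": "2009-11-15T14:12:12", "likes": 0}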
buildRestTests.setups['host'] = '''
# Fetch the http host. We use the host of the master because we know there will always be a master.
- do:
cluster.state: {}
- set: { master_node: master }
- do:
nodes.info:
metric: [ http, transport ]
- set: {nodes.$master.http.publish_address: host}
- set: {nodes.$master.transport.publish_address: transport_host}
'''
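// Snippets that use this setup can reference the stashed values; the 'remote_cluster'
// setup further down, for instance, substitutes $transport_host into the
// cluster.remote.*.seeds setting.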
buildRestTests.setups['node'] = '''
# Fetch the node name. We use the name of the master because we know there will always be a master.
- do:
cluster.state: {}
- is_true: master_node
- set: { master_node: node_name }
'''
// Used by scripted metric docs
buildRestTests.setups['ledger'] = '''
- do:
indices.create:
index: ledger
body:
settings:
number_of_shards: 2
number_of_replicas: 1
mappings:
properties:
type:
type: keyword
amount:
type: double
- do:
bulk:
index: ledger
refresh: true
body: |
{"index":{}}
{"date": "2015/01/01 00:00:00", "amount": 200, "type": "sale", "description": "something"}
{"index":{}}
{"date": "2015/01/01 00:00:00", "amount": 10, "type": "expense", "decription": "another thing"}
{"index":{}}
{"date": "2015/01/01 00:00:00", "amount": 150, "type": "sale", "description": "blah"}
{"index":{}}
{"date": "2015/01/01 00:00:00", "amount": 50, "type": "expense", "description": "cost of blah"}
{"index":{}}
{"date": "2015/01/01 00:00:00", "amount": 50, "type": "expense", "description": "advertisement"}'''
// Used by aggregation docs
buildRestTests.setups['sales'] = '''
- do:
indices.create:
index: sales
body:
settings:
number_of_shards: 2
number_of_replicas: 1
mappings:
properties:
type:
type: keyword
- do:
bulk:
index: sales
refresh: true
body: |
{"index":{}}
{"date": "2015/01/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "hat"}
{"index":{}}
{"date": "2015/01/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "t-shirt"}
{"index":{}}
{"date": "2015/01/01 00:00:00", "price": 150, "promoted": true, "rating": 5, "type": "bag"}
{"index":{}}
{"date": "2015/02/01 00:00:00", "price": 50, "promoted": false, "rating": 1, "type": "hat"}
{"index":{}}
{"date": "2015/02/01 00:00:00", "price": 10, "promoted": true, "rating": 4, "type": "t-shirt"}
{"index":{}}
{"date": "2015/03/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "hat"}
{"index":{}}
{"date": "2015/03/01 00:00:00", "price": 175, "promoted": false, "rating": 2, "type": "t-shirt"}'''
// Dummy bank account data used by getting-started.asciidoc
buildRestTests.setups['bank'] = '''
- do:
indices.create:
index: bank
body:
settings:
number_of_shards: 5
number_of_routing_shards: 5
- do:
bulk:
index: bank
refresh: true
body: |
#bank_data#
'''
/* Load the actual accounts only if we're going to use them. This complicates
* dependency checking but that is a small price to pay for not building a
* 400kb string every time we start the build. */
File accountsFile = new File("$projectDir/src/test/resources/accounts.json")
buildRestTests.inputs.file(accountsFile)
buildRestTests.doFirst {
String accounts = accountsFile.getText('UTF-8')
// Indent like a yaml test needs
accounts = accounts.replaceAll('(?m)^', ' ')
buildRestTests.setups['bank'] =
buildRestTests.setups['bank'].replace('#bank_data#', accounts)
}
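// The replaceAll('(?m)^', ...) above prefixes every line of accounts.json with the
// indentation required to sit inside the YAML 'body: |' literal block.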
// Used by index boost doc
buildRestTests.setups['index_boost'] = '''
- do:
indices.create:
index: index1
- do:
indices.create:
index: index2
- do:
indices.put_alias:
index: index1
name: alias1
'''
// Used by sampler and diversified-sampler aggregation docs
buildRestTests.setups['stackoverflow'] = '''
- do:
indices.create:
index: stackoverflow
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
author:
type: keyword
tags:
type: keyword
- do:
bulk:
index: stackoverflow
refresh: true
body: |'''
// Make Kibana strongly connected to elasticsearch and logstash
// Make Kibana rarer (and therefore higher-ranking) than JavaScript
// Make JavaScript strongly connected to jquery and angular
// Make Cabana strongly connected to elasticsearch but only as a result of a single author
for (int i = 0; i < 150; i++) {
buildRestTests.setups['stackoverflow'] += """
{"index":{}}
{"author": "very_relevant_$i", "tags": ["elasticsearch", "kibana"]}"""
}
for (int i = 0; i < 50; i++) {
buildRestTests.setups['stackoverflow'] += """
{"index":{}}
{"author": "very_relevant_$i", "tags": ["logstash", "kibana"]}"""
}
for (int i = 0; i < 200; i++) {
buildRestTests.setups['stackoverflow'] += """
{"index":{}}
{"author": "partially_relevant_$i", "tags": ["javascript", "jquery"]}"""
}
for (int i = 0; i < 200; i++) {
buildRestTests.setups['stackoverflow'] += """
{"index":{}}
{"author": "partially_relevant_$i", "tags": ["javascript", "angular"]}"""
}
for (int i = 0; i < 50; i++) {
buildRestTests.setups['stackoverflow'] += """
{"index":{}}
{"author": "noisy author", "tags": ["elasticsearch", "cabana"]}"""
}
buildRestTests.setups['stackoverflow'] += """
"""
// Used by significant_text aggregation docs
buildRestTests.setups['news'] = '''
- do:
indices.create:
index: news
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
source:
type: keyword
content:
type: text
- do:
bulk:
index: news
refresh: true
body: |'''
// Make h5n1 strongly connected to bird flu
for (int i = 0; i < 100; i++) {
buildRestTests.setups['news'] += """
{"index":{}}
{"source": "very_relevant_$i", "content": "bird flu h5n1"}"""
}
for (int i = 0; i < 100; i++) {
buildRestTests.setups['news'] += """
{"index":{}}
{"source": "filler_$i", "content": "bird dupFiller "}"""
}
for (int i = 0; i < 100; i++) {
buildRestTests.setups['news'] += """
{"index":{}}
{"source": "filler_$i", "content": "flu dupFiller "}"""
}
for (int i = 0; i < 20; i++) {
buildRestTests.setups['news'] += """
{"index":{}}
{"source": "partially_relevant_$i", "content": "elasticsearch dupFiller dupFiller dupFiller dupFiller pozmantier"}"""
}
for (int i = 0; i < 10; i++) {
buildRestTests.setups['news'] += """
{"index":{}}
{"source": "partially_relevant_$i", "content": "elasticsearch logstash kibana"}"""
}
buildRestTests.setups['news'] += """
"""
// Used by some aggregations
buildRestTests.setups['exams'] = '''
- do:
indices.create:
index: exams
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
grade:
type: byte
- do:
bulk:
index: exams
refresh: true
body: |
{"index":{}}
{"grade": 100, "weight": 2}
{"index":{}}
{"grade": 50, "weight": 3}'''
buildRestTests.setups['stored_example_script'] = '''
# Simple script to load a field. Not really a good example, but a simple one.
- do:
put_script:
id: "my_script"
body: { "script": { "lang": "painless", "source": "doc[params.field].value" } }
- match: { acknowledged: true }
'''
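// Snippets can then invoke the stored script by id, e.g. (illustrative sketch):
// "script": { "id": "my_script", "params": { "field": "likes" } }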
buildRestTests.setups['stored_scripted_metric_script'] = '''
- do:
put_script:
id: "my_init_script"
body: { "script": { "lang": "painless", "source": "state.transactions = []" } }
- match: { acknowledged: true }
- do:
put_script:
id: "my_map_script"
body: { "script": { "lang": "painless", "source": "state.transactions.add(doc.type.value == 'sale' ? doc.amount.value : -1 * doc.amount.value)" } }
- match: { acknowledged: true }
- do:
put_script:
id: "my_combine_script"
body: { "script": { "lang": "painless", "source": "double profit = 0;for (t in state.transactions) { profit += t; } return profit" } }
- match: { acknowledged: true }
- do:
put_script:
id: "my_reduce_script"
body: { "script": { "lang": "painless", "source": "double profit = 0;for (a in states) { profit += a; } return profit" } }
- match: { acknowledged: true }
'''
// Used by analyze api
buildRestTests.setups['analyze_sample'] = '''
- do:
indices.create:
index: analyze_sample
body:
settings:
number_of_shards: 1
number_of_replicas: 0
analysis:
normalizer:
my_normalizer:
type: custom
filter: [lowercase]
mappings:
properties:
obj1.field1:
type: text'''
// Used by percentile/percentile-rank aggregations
buildRestTests.setups['latency'] = '''
- do:
indices.create:
index: latency
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
load_time:
type: long
- do:
bulk:
index: latency
refresh: true
body: |'''
for (int i = 0; i < 100; i++) {
def value = i
if (i % 10 != 0) { // inflate 90% of the values tenfold to skew the latency distribution
value = i * 10
}
buildRestTests.setups['latency'] += """
{"index":{}}
{"load_time": "$value"}"""
}
// Used by iprange agg
buildRestTests.setups['iprange'] = '''
- do:
indices.create:
index: ip_addresses
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
ip:
type: ip
- do:
bulk:
index: ip_addresses
refresh: true
body: |'''
for (int i = 0; i < 255; i++) {
buildRestTests.setups['iprange'] += """
{"index":{}}
{"ip": "10.0.0.$i"}"""
}
for (int i = 0; i < 5; i++) {
buildRestTests.setups['iprange'] += """
{"index":{}}
{"ip": "9.0.0.$i"}"""
buildRestTests.setups['iprange'] += """
{"index":{}}
{"ip": "11.0.0.$i"}"""
buildRestTests.setups['iprange'] += """
{"index":{}}
{"ip": "12.0.0.$i"}"""
}
// Used by SQL because it looks SQL-ish
buildRestTests.setups['library'] = '''
- do:
indices.create:
include_type_name: true
index: library
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
book:
properties:
name:
type: text
fields:
keyword:
type: keyword
author:
type: text
fields:
keyword:
type: keyword
release_date:
type: date
page_count:
type: short
- do:
bulk:
index: library
type: book
refresh: true
body: |
{"index":{"_id": "Leviathan Wakes"}}
{"name": "Leviathan Wakes", "author": "James S.A. Corey", "release_date": "2011-06-02", "page_count": 561}
{"index":{"_id": "Hyperion"}}
{"name": "Hyperion", "author": "Dan Simmons", "release_date": "1989-05-26", "page_count": 482}
{"index":{"_id": "Dune"}}
{"name": "Dune", "author": "Frank Herbert", "release_date": "1965-06-01", "page_count": 604}
{"index":{"_id": "Dune Messiah"}}
{"name": "Dune Messiah", "author": "Frank Herbert", "release_date": "1969-10-15", "page_count": 331}
{"index":{"_id": "Children of Dune"}}
{"name": "Children of Dune", "author": "Frank Herbert", "release_date": "1976-04-21", "page_count": 408}
{"index":{"_id": "God Emperor of Dune"}}
{"name": "God Emperor of Dune", "author": "Frank Herbert", "release_date": "1981-05-28", "page_count": 454}
{"index":{"_id": "Consider Phlebas"}}
{"name": "Consider Phlebas", "author": "Iain M. Banks", "release_date": "1987-04-23", "page_count": 471}
{"index":{"_id": "Pandora's Star"}}
{"name": "Pandora's Star", "author": "Peter F. Hamilton", "release_date": "2004-03-02", "page_count": 768}
{"index":{"_id": "Revelation Space"}}
{"name": "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585}
{"index":{"_id": "A Fire Upon the Deep"}}
{"name": "A Fire Upon the Deep", "author": "Vernor Vinge", "release_date": "1992-06-01", "page_count": 613}
{"index":{"_id": "Ender's Game"}}
{"name": "Ender's Game", "author": "Orson Scott Card", "release_date": "1985-06-01", "page_count": 324}
{"index":{"_id": "1984"}}
{"name": "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328}
{"index":{"_id": "Fahrenheit 451"}}
{"name": "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227}
{"index":{"_id": "Brave New World"}}
{"name": "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268}
{"index":{"_id": "Foundation"}}
{"name": "Foundation", "author": "Isaac Asimov", "release_date": "1951-06-01", "page_count": 224}
{"index":{"_id": "The Giver"}}
{"name": "The Giver", "author": "Lois Lowry", "release_date": "1993-04-26", "page_count": 208}
{"index":{"_id": "Slaughterhouse-Five"}}
{"name": "Slaughterhouse-Five", "author": "Kurt Vonnegut", "release_date": "1969-06-01", "page_count": 275}
{"index":{"_id": "The Hitchhiker's Guide to the Galaxy"}}
{"name": "The Hitchhiker's Guide to the Galaxy", "author": "Douglas Adams", "release_date": "1979-10-12", "page_count": 180}
{"index":{"_id": "Snow Crash"}}
{"name": "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470}
{"index":{"_id": "Neuromancer"}}
{"name": "Neuromancer", "author": "William Gibson", "release_date": "1984-07-01", "page_count": 271}
{"index":{"_id": "The Handmaid's Tale"}}
{"name": "The Handmaid's Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311}
{"index":{"_id": "Starship Troopers"}}
{"name": "Starship Troopers", "author": "Robert A. Heinlein", "release_date": "1959-12-01", "page_count": 335}
{"index":{"_id": "The Left Hand of Darkness"}}
{"name": "The Left Hand of Darkness", "author": "Ursula K. Le Guin", "release_date": "1969-06-01", "page_count": 304}
{"index":{"_id": "The Moon is a Harsh Mistress"}}
{"name": "The Moon is a Harsh Mistress", "author": "Robert A. Heinlein", "release_date": "1966-04-01", "page_count": 288}
'''
buildRestTests.setups['sensor_rollup_job'] = '''
- do:
indices.create:
index: sensor-1
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
timestamp:
type: date
temperature:
type: long
voltage:
type: float
node:
type: keyword
- do:
raw:
method: PUT
path: _rollup/job/sensor
body: >
{
"index_pattern": "sensor-*",
"rollup_index": "sensor_rollup",
"cron": "*/30 * * * * ?",
"page_size" :1000,
"groups" : {
"date_histogram": {
"field": "timestamp",
"fixed_interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["node"]
}
},
"metrics": [
{
"field": "temperature",
"metrics": ["min", "max", "sum"]
},
{
"field": "voltage",
"metrics": ["avg"]
}
]
}
'''
buildRestTests.setups['sensor_started_rollup_job'] = '''
- do:
indices.create:
index: sensor-1
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
timestamp:
type: date
temperature:
type: long
voltage:
type: float
node:
type: keyword
- do:
bulk:
index: sensor-1
refresh: true
body: |
{"index":{}}
{"timestamp": 1516729294000, "temperature": 200, "voltage": 5.2, "node": "a"}
{"index":{}}
{"timestamp": 1516642894000, "temperature": 201, "voltage": 5.8, "node": "b"}
{"index":{}}
{"timestamp": 1516556494000, "temperature": 202, "voltage": 5.1, "node": "a"}
{"index":{}}
{"timestamp": 1516470094000, "temperature": 198, "voltage": 5.6, "node": "b"}
{"index":{}}
{"timestamp": 1516383694000, "temperature": 200, "voltage": 4.2, "node": "c"}
{"index":{}}
{"timestamp": 1516297294000, "temperature": 202, "voltage": 4.0, "node": "c"}
- do:
raw:
method: PUT
path: _rollup/job/sensor
body: >
{
"index_pattern": "sensor-*",
"rollup_index": "sensor_rollup",
"cron": "* * * * * ?",
"page_size" :1000,
"groups" : {
"date_histogram": {
"field": "timestamp",
"fixed_interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["node"]
}
},
"metrics": [
{
"field": "temperature",
"metrics": ["min", "max", "sum"]
},
{
"field": "voltage",
"metrics": ["avg"]
}
]
}
- do:
raw:
method: POST
path: _rollup/job/sensor/_start
'''
buildRestTests.setups['sensor_index'] = '''
- do:
indices.create:
index: sensor-1
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
timestamp:
type: date
temperature:
type: long
voltage:
type: float
node:
type: keyword
load:
type: double
net_in:
type: long
net_out:
type: long
hostname:
type: keyword
datacenter:
type: keyword
'''
buildRestTests.setups['sensor_prefab_data'] = '''
- do:
indices.create:
index: sensor-1
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
timestamp:
type: date
temperature:
type: long
voltage:
type: float
node:
type: keyword
- do:
indices.create:
index: sensor_rollup
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
node.terms.value:
type: keyword
temperature.sum.value:
type: double
temperature.max.value:
type: double
temperature.min.value:
type: double
timestamp.date_histogram.time_zone:
type: keyword
timestamp.date_histogram.interval:
type: keyword
timestamp.date_histogram.timestamp:
type: date
timestamp.date_histogram._count:
type: long
voltage.avg.value:
type: double
voltage.avg._count:
type: long
_rollup.id:
type: keyword
_rollup.version:
type: long
_meta:
_rollup:
sensor:
cron: "* * * * * ?"
rollup_index: "sensor_rollup"
index_pattern: "sensor-*"
timeout: "20s"
page_size: 1000
groups:
date_histogram:
delay: "7d"
field: "timestamp"
fixed_interval: "60m"
time_zone: "UTC"
terms:
fields:
- "node"
id: sensor
metrics:
- field: "temperature"
metrics:
- min
- max
- sum
- field: "voltage"
metrics:
- avg
- do:
bulk:
index: sensor_rollup
refresh: true
body: |
{"index":{}}
{"node.terms.value":"b","temperature.sum.value":201.0,"temperature.max.value":201.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":201.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.800000190734863,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516640400000,"voltage.avg._count":1.0,"_rollup.id":"sensor"}
{"index":{}}
{"node.terms.value":"c","temperature.sum.value":200.0,"temperature.max.value":200.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":200.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":4.199999809265137,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516381200000,"voltage.avg._count":1.0,"_rollup.id":"sensor"}
{"index":{}}
{"node.terms.value":"a","temperature.sum.value":202.0,"temperature.max.value":202.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":202.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.099999904632568,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516554000000,"voltage.avg._count":1.0,"_rollup.id":"sensor"}
{"index":{}}
{"node.terms.value":"a","temperature.sum.value":200.0,"temperature.max.value":200.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":200.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.199999809265137,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516726800000,"voltage.avg._count":1.0,"_rollup.id":"sensor"}
{"index":{}}
{"node.terms.value":"b","temperature.sum.value":198.0,"temperature.max.value":198.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":198.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":5.599999904632568,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516467600000,"voltage.avg._count":1.0,"_rollup.id":"sensor"}
{"index":{}}
{"node.terms.value":"c","temperature.sum.value":202.0,"temperature.max.value":202.0,"timestamp.date_histogram.time_zone":"UTC","temperature.min.value":202.0,"timestamp.date_histogram._count":1,"timestamp.date_histogram.interval":"1h","_rollup.computed":["temperature.sum","temperature.min","voltage.avg","temperature.max","node.terms","timestamp.date_histogram"],"voltage.avg.value":4.0,"node.terms._count":1,"_rollup.version":1,"timestamp.date_histogram.timestamp":1516294800000,"voltage.avg._count":1.0,"_rollup.id":"sensor"}
'''
buildRestTests.setups['sample_job'] = '''
- do:
ml.put_job:
job_id: "sample_job"
body: >
{
"description" : "Very basic job",
"analysis_config" : {
"bucket_span":"10m",
"detectors" :[
{
"function": "count"
}
]},
"data_description" : {
"time_field":"timestamp",
"time_format": "epoch_ms"
}
}
'''
buildRestTests.setups['farequote_index'] = '''
- do:
indices.create:
index: farequote
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
metric:
properties:
time:
type: date
responsetime:
type: float
airline:
type: keyword
doc_count:
type: integer
'''
buildRestTests.setups['farequote_data'] = buildRestTests.setups['farequote_index'] + '''
- do:
bulk:
index: farequote
type: metric
refresh: true
body: |
{"index": {"_id":"1"}}
{"airline":"JZA","responsetime":990.4628,"time":"2016-02-07T00:00:00+0000", "doc_count": 5}
{"index": {"_id":"2"}}
{"airline":"JBU","responsetime":877.5927,"time":"2016-02-07T00:00:00+0000", "doc_count": 23}
{"index": {"_id":"3"}}
{"airline":"KLM","responsetime":1355.4812,"time":"2016-02-07T00:00:00+0000", "doc_count": 42}
'''
buildRestTests.setups['farequote_job'] = buildRestTests.setups['farequote_data'] + '''
- do:
ml.put_job:
job_id: "farequote"
body: >
{
"analysis_config": {
"bucket_span": "60m",
"detectors": [{
"function": "mean",
"field_name": "responsetime",
"by_field_name": "airline"
}],
"summary_count_field_name": "doc_count"
},
"data_description": {
"time_field": "time"
}
}
'''
buildRestTests.setups['farequote_datafeed'] = buildRestTests.setups['farequote_job'] + '''
- do:
ml.put_datafeed:
datafeed_id: "datafeed-farequote"
body: >
{
"job_id":"farequote",
"indexes":"farequote"
}
'''
buildRestTests.setups['server_metrics_index'] = '''
- do:
indices.create:
index: server-metrics
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
metric:
properties:
timestamp:
type: date
total:
type: long
'''
buildRestTests.setups['server_metrics_data'] = buildRestTests.setups['server_metrics_index'] + '''
- do:
bulk:
index: server-metrics
type: metric
refresh: true
body: |
{"index": {"_id":"1177"}}
{"timestamp":"2017-03-23T13:00:00","total":40476}
{"index": {"_id":"1178"}}
{"timestamp":"2017-03-23T13:00:00","total":15287}
{"index": {"_id":"1179"}}
{"timestamp":"2017-03-23T13:00:00","total":-776}
{"index": {"_id":"1180"}}
{"timestamp":"2017-03-23T13:00:00","total":11366}
{"index": {"_id":"1181"}}
{"timestamp":"2017-03-23T13:00:00","total":3606}
{"index": {"_id":"1182"}}
{"timestamp":"2017-03-23T13:00:00","total":19006}
{"index": {"_id":"1183"}}
{"timestamp":"2017-03-23T13:00:00","total":38613}
{"index": {"_id":"1184"}}
{"timestamp":"2017-03-23T13:00:00","total":19516}
{"index": {"_id":"1185"}}
{"timestamp":"2017-03-23T13:00:00","total":-258}
{"index": {"_id":"1186"}}
{"timestamp":"2017-03-23T13:00:00","total":9551}
{"index": {"_id":"1187"}}
{"timestamp":"2017-03-23T13:00:00","total":11217}
{"index": {"_id":"1188"}}
{"timestamp":"2017-03-23T13:00:00","total":22557}
{"index": {"_id":"1189"}}
{"timestamp":"2017-03-23T13:00:00","total":40508}
{"index": {"_id":"1190"}}
{"timestamp":"2017-03-23T13:00:00","total":11887}
{"index": {"_id":"1191"}}
{"timestamp":"2017-03-23T13:00:00","total":31659}
'''
buildRestTests.setups['server_metrics_job'] = buildRestTests.setups['server_metrics_data'] + '''
- do:
ml.put_job:
job_id: "total-requests"
body: >
{
"description" : "Total sum of requests",
"analysis_config" : {
"bucket_span":"10m",
"detectors" :[
{
"detector_description": "Sum of total",
"function": "sum",
"field_name": "total"
}
]},
"data_description" : {
"time_field":"timestamp",
"time_format": "epoch_ms"
}
}
'''
buildRestTests.setups['server_metrics_datafeed'] = buildRestTests.setups['server_metrics_job'] + '''
- do:
ml.put_datafeed:
datafeed_id: "datafeed-total-requests"
body: >
{
"job_id":"total-requests",
"indexes":"server-metrics"
}
'''
buildRestTests.setups['server_metrics_openjob'] = buildRestTests.setups['server_metrics_datafeed'] + '''
- do:
ml.open_job:
job_id: "total-requests"
'''
buildRestTests.setups['server_metrics_startdf'] = buildRestTests.setups['server_metrics_openjob'] + '''
- do:
ml.start_datafeed:
datafeed_id: "datafeed-total-requests"
'''
buildRestTests.setups['calendar_outages'] = '''
- do:
ml.put_calendar:
calendar_id: "planned-outages"
'''
buildRestTests.setups['calendar_outages_addevent'] = buildRestTests.setups['calendar_outages'] + '''
- do:
ml.post_calendar_events:
calendar_id: "planned-outages"
body: >
{ "description": "event 1", "start_time": "2017-12-01T00:00:00Z", "end_time": "2017-12-02T00:00:00Z", "calendar_id": "planned-outages" }
'''
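// NOTE: 'calendar_outages_addevent' is redefined below on top of
// 'calendar_outages_addjob'; that later definition is the one the tests see.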
buildRestTests.setups['calendar_outages_openjob'] = buildRestTests.setups['server_metrics_openjob'] + '''
- do:
ml.put_calendar:
calendar_id: "planned-outages"
'''
buildRestTests.setups['calendar_outages_addjob'] = buildRestTests.setups['server_metrics_openjob'] + '''
- do:
ml.put_calendar:
calendar_id: "planned-outages"
body: >
{
"job_ids": ["total-requests"]
}
'''
buildRestTests.setups['calendar_outages_addevent'] = buildRestTests.setups['calendar_outages_addjob'] + '''
- do:
ml.post_calendar_events:
calendar_id: "planned-outages"
body: >
{ "events" : [
{ "description": "event 1", "start_time": "1513641600000", "end_time": "1513728000000"},
{ "description": "event 2", "start_time": "1513814400000", "end_time": "1513900800000"},
{ "description": "event 3", "start_time": "1514160000000", "end_time": "1514246400000"}
]}
'''
// used by median absolute deviation aggregation
buildRestTests.setups['reviews'] = '''
- do:
indices.create:
index: reviews
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
product:
type: keyword
rating:
type: long
- do:
bulk:
index: reviews
refresh: true
body: |
{"index": {"_id": "1"}}
{"product": "widget-foo", "rating": 1}
{"index": {"_id": "2"}}
{"product": "widget-foo", "rating": 5}
'''
buildRestTests.setups['remote_cluster'] = buildRestTests.setups['host'] + '''
- do:
cluster.put_settings:
body:
persistent:
cluster.remote.remote_cluster.seeds: $transport_host
'''
buildRestTests.setups['remote_cluster_and_leader_index'] = buildRestTests.setups['remote_cluster'] + '''
- do:
indices.create:
index: leader_index
body:
settings:
index.number_of_replicas: 0
index.number_of_shards: 1
index.soft_deletes.enabled: true
'''
buildRestTests.setups['seats'] = '''
- do:
indices.create:
index: seats
body:
settings:
number_of_shards: 1
number_of_replicas: 0
mappings:
properties:
theatre:
type: keyword
cost:
type: long
row:
type: long
number:
type: long
sold:
type: boolean
- do:
bulk:
index: seats
refresh: true
body: |
{"index":{"_id": "1"}}
{"theatre": "Skyline", "cost": 37, "row": 1, "number": 7, "sold": false}
{"index":{"_id": "2"}}
{"theatre": "Graye", "cost": 30, "row": 3, "number": 5, "sold": false}
{"index":{"_id": "3"}}
{"theatre": "Graye", "cost": 33, "row": 2, "number": 6, "sold": false}
{"index":{"_id": "4"}}
{"theatre": "Skyline", "cost": 20, "row": 5, "number": 2, "sold": false}'''
buildRestTests.setups['kibana_sample_data_ecommerce'] = '''
- do:
indices.create:
index: kibana_sample_data_ecommerce
body:
settings:
number_of_shards: 1
number_of_replicas: 0
'''
buildRestTests.setups['simple_kibana_continuous_pivot'] = buildRestTests.setups['kibana_sample_data_ecommerce'] + '''
- do:
raw:
method: PUT
path: _data_frame/transforms/simple-kibana-ecomm-pivot
body: >
{
"source": {
"index": "kibana_sample_data_ecommerce",
"query": {
"term": {
"geoip.continent_name": {
"value": "Asia"
}
}
}
},
"pivot": {
"group_by": {
"customer_id": {
"terms": {
"field": "customer_id"
}
}
},
"aggregations": {
"max_price": {
"max": {
"field": "taxful_total_price"
}
}
}
},
"description": "Maximum priced ecommerce data",
"dest": {
"index": "kibana_sample_data_ecommerce_transform",
"pipeline": "add_timestamp_pipeline"
},
"frequency": "5m",
"sync": {
"time": {
"field": "order_date",
"delay": "60s"
}
}
}
'''
buildRestTests.setups['setup_logdata'] = '''
- do:
indices.create:
index: logdata
body:
settings:
number_of_shards: 1
number_of_replicas: 1
mappings:
properties:
grade:
type: byte
- do:
bulk:
index: logdata
refresh: true
body: |
{"index":{}}
{"grade": 100, "weight": 2}
{"index":{}}
{"grade": 50, "weight": 3}
'''
buildRestTests.setups['logdata_job'] = buildRestTests.setups['setup_logdata'] + '''
- do:
ml.put_data_frame_analytics:
id: "loganalytics"
body: >
{
"source": {
"index": "logdata"
},
"dest": {
"index": "logdata_out"
},
"analysis": {
"outlier_detection": {}
}
}
'''
// Used by snapshot lifecycle management docs
buildRestTests.setups['setup-repository'] = '''
- do:
snapshot.create_repository:
repository: my_repository
body:
type: fs
settings:
location: buildDir/cluster/shared/repo
'''
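// The relative location resolves against the 'path.repo' directory registered in the
// testClusters block above; 'fs' repositories must live under a registered repo path.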