/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

apply plugin: 'elasticsearch.docs-test'
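
// The docs-test plugin turns the CONSOLE snippets in this project's asciidoc
// files into REST tests; the buildRestTests and listSnippets tasks used below
// come from it. As a rough sketch (not a snippet from a real page), a doc
// example such as
//
//   [source,js]
//   ----
//   GET twitter/_search
//   ----
//   // CONSOLE
//   // TEST[setup:twitter]
//
// is replayed against the integTestCluster configured below, after running the
// 'twitter' setup registered later in this file.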

integTestCluster {
  /* Enable regexes in painless so our tests don't complain about example
   * snippets that use them. */
  setting 'script.painless.regex.enabled', 'true'
  Closure configFile = {
    extraConfigFile it, "src/test/cluster/config/$it"
  }
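  // Each call below copies src/test/cluster/config/<name> into the test
  // cluster's config directory under the same relative path.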
  configFile 'analysis/example_word_list.txt'
  configFile 'analysis/hyphenation_patterns.xml'
  configFile 'analysis/synonym.txt'
  configFile 'analysis/stemmer_override.txt'
  configFile 'userdict_ja.txt'
  configFile 'userdict_ko.txt'
  configFile 'KeywordTokenizer.rbbi'
  extraConfigFile 'hunspell/en_US/en_US.aff', '../server/src/test/resources/indices/analyze/conf_dir/hunspell/en_US/en_US.aff'
  extraConfigFile 'hunspell/en_US/en_US.dic', '../server/src/test/resources/indices/analyze/conf_dir/hunspell/en_US/en_US.dic'
  // Whitelist reindexing from the local node so we can test it.
  setting 'reindex.remote.whitelist', '127.0.0.1:*'

  // TODO: remove this for 7.0, this exists to allow the doc examples in 6.x to continue using the defaults
  systemProperty 'es.scripting.use_java_time', 'false'
}

// remove when https://github.com/elastic/elasticsearch/issues/31305 is fixed
if (rootProject.ext.compilerJavaVersion.isJava11()) {
  integTestRunner {
    systemProperty 'tests.rest.blacklist', [
        'plugins/ingest-attachment/line_164',
        'plugins/ingest-attachment/line_117'
    ].join(',')
  }
}
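// tests.rest.blacklist takes a comma-separated list of test identifiers,
// hence the join(',').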

// Build the cluster with all plugins
project.rootProject.subprojects.findAll { it.parent.path == ':plugins' }.each { subproj ->
  /* Skip repositories. We just aren't going to be able to test them so it
   * doesn't make sense to waste time installing them. */
  if (subproj.path.startsWith(':plugins:repository-')) {
    return
  }
  subproj.afterEvaluate { // need to wait until the project has been configured
    integTestCluster {
      plugin subproj.path
    }
  }
}

buildRestTests.docs = fileTree(projectDir) {
  // No snippets in here!
  exclude 'build.gradle'
  // That is where the snippets go, not where they come from!
  exclude 'build'
  // Just syntax examples
  exclude 'README.asciidoc'
}
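// Only files matched by this tree are scanned for snippets; anything excluded
// above never becomes a REST test.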

listSnippets.docs = buildRestTests.docs

Closure setupTwitter = { String name, int count ->
  buildRestTests.setups[name] = '''
  - do:
      indices.create:
        index: twitter
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                user:
                  type: keyword
                  doc_values: true
                date:
                  type: date
                likes:
                  type: long
  - do:
      bulk:
        index: twitter
        type: _doc
        refresh: true
        body: |'''
  for (int i = 0; i < count; i++) {
    String user, text
    if (i == 0) {
      user = 'kimchy'
      text = 'trying out Elasticsearch'
    } else {
      user = 'test'
      text = "some message with the number $i"
    }
    buildRestTests.setups[name] += """
          {"index":{"_id": "$i"}}
          {"user": "$user", "message": "$text", "date": "2009-11-15T14:12:12", "likes": $i}"""
  }
}
setupTwitter('twitter', 5)
setupTwitter('big_twitter', 120)
setupTwitter('huge_twitter', 1200)
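// The same index and mapping is registered under three names so a doc page can
// pick a corpus size: 5, 120, or 1200 documents (document 0 is always kimchy's
// "trying out Elasticsearch" tweet).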

buildRestTests.setups['host'] = '''
  # Fetch the http host. We use the host of the master because we know there will always be a master.
  - do:
      cluster.state: {}
  - set: { master_node: master }
  - do:
      nodes.info:
        metric: [ http, transport ]
  - is_true: nodes.$master.http.publish_address
  - set: {nodes.$master.http.publish_address: host}
  - set: {nodes.$master.transport.publish_address: transport_host}
'''
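// The `- set` steps stash values for later steps and for the snippets: the
// master node id is stashed as $master, and that node's publish addresses end
// up in the $host and $transport_host stash entries.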

buildRestTests.setups['node'] = '''
  # Fetch the node name. We use the master because we know there will always be a master.
  - do:
      cluster.state: {}
  - is_true: master_node
  - set: { master_node: node_name }
'''

// Used by scripted metric docs
buildRestTests.setups['ledger'] = '''
  - do:
      indices.create:
        index: ledger
        body:
          settings:
            number_of_shards: 2
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                type:
                  type: keyword
                amount:
                  type: double
  - do:
      bulk:
        index: ledger
        type: _doc
        refresh: true
        body: |
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "amount": 200, "type": "sale", "description": "something"}
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "amount": 10, "type": "expense", "description": "another thing"}
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "amount": 150, "type": "sale", "description": "blah"}
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "amount": 50, "type": "expense", "description": "cost of blah"}
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "amount": 50, "type": "expense", "description": "advertisement"}'''

// Used by aggregation docs
buildRestTests.setups['sales'] = '''
  - do:
      indices.create:
        index: sales
        body:
          settings:
            number_of_shards: 2
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                type:
                  type: keyword
  - do:
      bulk:
        index: sales
        type: _doc
        refresh: true
        body: |
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "hat"}
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "t-shirt"}
          {"index":{}}
          {"date": "2015/01/01 00:00:00", "price": 150, "promoted": true, "rating": 5, "type": "bag"}
          {"index":{}}
          {"date": "2015/02/01 00:00:00", "price": 50, "promoted": false, "rating": 1, "type": "hat"}
          {"index":{}}
          {"date": "2015/02/01 00:00:00", "price": 10, "promoted": true, "rating": 4, "type": "t-shirt"}
          {"index":{}}
          {"date": "2015/03/01 00:00:00", "price": 200, "promoted": true, "rating": 1, "type": "hat"}
          {"index":{}}
          {"date": "2015/03/01 00:00:00", "price": 175, "promoted": false, "rating": 2, "type": "t-shirt"}'''

// Dummy bank account data used by getting-started.asciidoc
buildRestTests.setups['bank'] = '''
  - do:
      indices.create:
        index: bank
        body:
          settings:
            number_of_shards: 5
            number_of_routing_shards: 5
  - do:
      bulk:
        index: bank
        type: _doc
        refresh: true
        body: |
#bank_data#
'''

/* Load the actual accounts only if we're going to use them. This complicates
 * dependency checking but that is a small price to pay for not building a
 * 400kb string every time we start the build. */
File accountsFile = new File("$projectDir/src/test/resources/accounts.json")
buildRestTests.inputs.file(accountsFile)
buildRestTests.doFirst {
  String accounts = accountsFile.getText('UTF-8')
  // Indent like a yaml test needs
  accounts = accounts.replaceAll('(?m)^', ' ')
  buildRestTests.setups['bank'] =
    buildRestTests.setups['bank'].replace('#bank_data#', accounts)
}
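// Declaring accountsFile as a task input keeps Gradle's up-to-date checking
// correct even though the file is only read inside doFirst.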

// Used by index boost doc
buildRestTests.setups['index_boost'] = '''
  - do:
      indices.create:
        index: index1
  - do:
      indices.create:
        index: index2

  - do:
      indices.put_alias:
        index: index1
        name: alias1
'''

// Used by sampler and diversified-sampler aggregation docs
buildRestTests.setups['stackoverflow'] = '''
  - do:
      indices.create:
        index: stackoverflow
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                author:
                  type: keyword
                tags:
                  type: keyword
  - do:
      bulk:
        index: stackoverflow
        type: _doc
        refresh: true
        body: |'''

// Make Kibana strongly connected to elasticsearch and logstash
// Make Kibana rarer (and therefore higher-ranking) than Javascript
// Make Javascript strongly connected to jquery and angular
// Make Cabana strongly connected to elasticsearch but only as a result of a single author

for (int i = 0; i < 150; i++) {
  buildRestTests.setups['stackoverflow'] += """
          {"index":{}}
          {"author": "very_relevant_$i", "tags": ["elasticsearch", "kibana"]}"""
}
for (int i = 0; i < 50; i++) {
  buildRestTests.setups['stackoverflow'] += """
          {"index":{}}
          {"author": "very_relevant_$i", "tags": ["logstash", "kibana"]}"""
}
for (int i = 0; i < 200; i++) {
  buildRestTests.setups['stackoverflow'] += """
          {"index":{}}
          {"author": "partially_relevant_$i", "tags": ["javascript", "jquery"]}"""
}
for (int i = 0; i < 200; i++) {
  buildRestTests.setups['stackoverflow'] += """
          {"index":{}}
          {"author": "partially_relevant_$i", "tags": ["javascript", "angular"]}"""
}
for (int i = 0; i < 50; i++) {
  buildRestTests.setups['stackoverflow'] += """
          {"index":{}}
          {"author": "noisy author", "tags": ["elasticsearch", "cabana"]}"""
}
buildRestTests.setups['stackoverflow'] += """
"""

// Used by significant_text aggregation docs
buildRestTests.setups['news'] = '''
  - do:
      indices.create:
        index: news
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                source:
                  type: keyword
                content:
                  type: text
  - do:
      bulk:
        index: news
        type: _doc
        refresh: true
        body: |'''

// Make h5n1 strongly connected to bird flu

for (int i = 0; i < 100; i++) {
  buildRestTests.setups['news'] += """
          {"index":{}}
          {"source": "very_relevant_$i", "content": "bird flu h5n1"}"""
}
for (int i = 0; i < 100; i++) {
  buildRestTests.setups['news'] += """
          {"index":{}}
          {"source": "filler_$i", "content": "bird dupFiller "}"""
}
for (int i = 0; i < 100; i++) {
  buildRestTests.setups['news'] += """
          {"index":{}}
          {"source": "filler_$i", "content": "flu dupFiller "}"""
}
for (int i = 0; i < 20; i++) {
  buildRestTests.setups['news'] += """
          {"index":{}}
          {"source": "partially_relevant_$i", "content": "elasticsearch dupFiller dupFiller dupFiller dupFiller pozmantier"}"""
}
for (int i = 0; i < 10; i++) {
  buildRestTests.setups['news'] += """
          {"index":{}}
          {"source": "partially_relevant_$i", "content": "elasticsearch logstash kibana"}"""
}
buildRestTests.setups['news'] += """
"""
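// Roughly: the filler docs make "bird" and "flu" common on their own, so only
// the documents pairing them with h5n1 should surface as significant.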

// Used by some aggregations
buildRestTests.setups['exams'] = '''
  - do:
      indices.create:
        index: exams
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                grade:
                  type: byte
  - do:
      bulk:
        index: exams
        type: _doc
        refresh: true
        body: |
          {"index":{}}
          {"grade": 100, "weight": 2}
          {"index":{}}
          {"grade": 50, "weight": 3}'''
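// Only "grade" is mapped explicitly; the "weight" field in the bulk body
// relies on dynamic mapping.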

buildRestTests.setups['stored_example_script'] = '''
  # Simple script to load a field. Not really a good example, but a simple one.
  - do:
      put_script:
        id: "my_script"
        body: { "script": { "lang": "painless", "source": "doc[params.field].value" } }
  - match: { acknowledged: true }
'''

buildRestTests.setups['stored_scripted_metric_script'] = '''
  - do:
      put_script:
        id: "my_init_script"
        body: { "script": { "lang": "painless", "source": "params._agg.transactions = []" } }
  - match: { acknowledged: true }

  - do:
      put_script:
        id: "my_map_script"
        body: { "script": { "lang": "painless", "source": "params._agg.transactions.add(doc.type.value == 'sale' ? doc.amount.value : -1 * doc.amount.value)" } }
  - match: { acknowledged: true }

  - do:
      put_script:
        id: "my_combine_script"
        body: { "script": { "lang": "painless", "source": "double profit = 0;for (t in params._agg.transactions) { profit += t; } return profit" } }
  - match: { acknowledged: true }

  - do:
      put_script:
        id: "my_reduce_script"
        body: { "script": { "lang": "painless", "source": "double profit = 0;for (a in params._aggs) { profit += a; } return profit" } }
  - match: { acknowledged: true }
'''
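// In 6.x scripted metric aggregations, params._agg holds the per-shard state
// that the init/map/combine scripts build up, and params._aggs is the list of
// per-shard results handed to the reduce script.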

// Used by analyze api
buildRestTests.setups['analyze_sample'] = '''
  - do:
      indices.create:
        index: analyze_sample
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 0
            analysis:
              normalizer:
                my_normalizer:
                  type: custom
                  filter: [lowercase]
          mappings:
            _doc:
              properties:
                obj1.field1:
                  type: text'''

// Used by percentile/percentile-rank aggregations
buildRestTests.setups['latency'] = '''
  - do:
      indices.create:
        index: latency
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                load_time:
                  type: long
  - do:
      bulk:
        index: latency
        type: _doc
        refresh: true
        body: |'''

for (int i = 0; i < 100; i++) {
  def value = i
  if (i % 10) {
    value = i * 10
  }
  buildRestTests.setups['latency'] += """
          {"index":{}}
          {"load_time": "$value"}"""
}
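// Every index that is not a multiple of 10 is scaled by 10, producing a skewed
// distribution with a handful of small values, which suits the percentile
// examples.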

// Used by iprange agg
buildRestTests.setups['iprange'] = '''
  - do:
      indices.create:
        index: ip_addresses
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            _doc:
              properties:
                ip:
                  type: ip
  - do:
      bulk:
        index: ip_addresses
        type: _doc
        refresh: true
        body: |'''

for (int i = 0; i < 255; i++) {
  buildRestTests.setups['iprange'] += """
          {"index":{}}
          {"ip": "10.0.0.$i"}"""
}
for (int i = 0; i < 5; i++) {
  buildRestTests.setups['iprange'] += """
          {"index":{}}
          {"ip": "9.0.0.$i"}"""
  buildRestTests.setups['iprange'] += """
          {"index":{}}
          {"ip": "11.0.0.$i"}"""
  buildRestTests.setups['iprange'] += """
          {"index":{}}
          {"ip": "12.0.0.$i"}"""
}
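// In total: 255 addresses in 10.0.0.0/24 plus 5 each in 9.0.0.*, 11.0.0.*, and
// 12.0.0.*, so the ip_range examples have data both inside and outside the
// main block.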

// Used by SQL because it looks SQL-ish
buildRestTests.setups['library'] = '''
  - do:
      indices.create:
        index: library
        body:
          settings:
            number_of_shards: 1
            number_of_replicas: 1
          mappings:
            book:
              properties:
                name:
                  type: text
                  fields:
                    keyword:
                      type: keyword
                author:
                  type: text
                  fields:
                    keyword:
                      type: keyword
                release_date:
                  type: date
                page_count:
                  type: short
  - do:
      bulk:
        index: library
        type: book
        refresh: true
        body: |
          {"index":{"_id": "Leviathan Wakes"}}
          {"name": "Leviathan Wakes", "author": "James S.A. Corey", "release_date": "2011-06-02", "page_count": 561}
          {"index":{"_id": "Hyperion"}}
          {"name": "Hyperion", "author": "Dan Simmons", "release_date": "1989-05-26", "page_count": 482}
          {"index":{"_id": "Dune"}}
          {"name": "Dune", "author": "Frank Herbert", "release_date": "1965-06-01", "page_count": 604}
          {"index":{"_id": "Dune Messiah"}}
          {"name": "Dune Messiah", "author": "Frank Herbert", "release_date": "1969-10-15", "page_count": 331}
          {"index":{"_id": "Children of Dune"}}
          {"name": "Children of Dune", "author": "Frank Herbert", "release_date": "1976-04-21", "page_count": 408}
          {"index":{"_id": "God Emperor of Dune"}}
          {"name": "God Emperor of Dune", "author": "Frank Herbert", "release_date": "1981-05-28", "page_count": 454}
          {"index":{"_id": "Consider Phlebas"}}
          {"name": "Consider Phlebas", "author": "Iain M. Banks", "release_date": "1987-04-23", "page_count": 471}
          {"index":{"_id": "Pandora's Star"}}
          {"name": "Pandora's Star", "author": "Peter F. Hamilton", "release_date": "2004-03-02", "page_count": 768}
          {"index":{"_id": "Revelation Space"}}
          {"name": "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585}
          {"index":{"_id": "A Fire Upon the Deep"}}
          {"name": "A Fire Upon the Deep", "author": "Vernor Vinge", "release_date": "1992-06-01", "page_count": 613}
          {"index":{"_id": "Ender's Game"}}
          {"name": "Ender's Game", "author": "Orson Scott Card", "release_date": "1985-06-01", "page_count": 324}
          {"index":{"_id": "1984"}}
          {"name": "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328}
          {"index":{"_id": "Fahrenheit 451"}}
          {"name": "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227}
          {"index":{"_id": "Brave New World"}}
          {"name": "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268}
          {"index":{"_id": "Foundation"}}
          {"name": "Foundation", "author": "Isaac Asimov", "release_date": "1951-06-01", "page_count": 224}
          {"index":{"_id": "The Giver"}}
          {"name": "The Giver", "author": "Lois Lowry", "release_date": "1993-04-26", "page_count": 208}
          {"index":{"_id": "Slaughterhouse-Five"}}
          {"name": "Slaughterhouse-Five", "author": "Kurt Vonnegut", "release_date": "1969-06-01", "page_count": 275}
          {"index":{"_id": "The Hitchhiker's Guide to the Galaxy"}}
          {"name": "The Hitchhiker's Guide to the Galaxy", "author": "Douglas Adams", "release_date": "1979-10-12", "page_count": 180}
          {"index":{"_id": "Snow Crash"}}
          {"name": "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470}
          {"index":{"_id": "Neuromancer"}}
          {"name": "Neuromancer", "author": "William Gibson", "release_date": "1984-07-01", "page_count": 271}
          {"index":{"_id": "The Handmaid's Tale"}}
          {"name": "The Handmaid's Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311}
          {"index":{"_id": "Starship Troopers"}}
          {"name": "Starship Troopers", "author": "Robert A. Heinlein", "release_date": "1959-12-01", "page_count": 335}
          {"index":{"_id": "The Left Hand of Darkness"}}
          {"name": "The Left Hand of Darkness", "author": "Ursula K. Le Guin", "release_date": "1969-06-01", "page_count": 304}
          {"index":{"_id": "The Moon is a Harsh Mistress"}}
          {"name": "The Moon is a Harsh Mistress", "author": "Robert A. Heinlein", "release_date": "1966-04-01", "page_count": 288}

'''