Merge remote-tracking branch 'elastic/master' into zen2
commit d80b639c18
@@ -7,5 +7,6 @@
ES_RUNTIME_JAVA:
  - java8
  - java8fips
  - java10
  - java11
@@ -100,6 +100,12 @@ JDK 10 and testing on a JDK 8 runtime; to do this, set `RUNTIME_JAVA_HOME`
pointing to the Java home of a JDK 8 installation. Note that this mechanism can
be used to test against other JDKs as well; it is not limited to JDK 8.

> Note: It is also required to have `JAVA7_HOME`, `JAVA8_HOME` and
`JAVA10_HOME` available so that the tests can pass.

> Warning: do not use `sdkman` for Java installations; its JDK distributions
do not ship a proper `jrunscript`.

Elasticsearch uses the Gradle wrapper for its build. You can execute Gradle
using the wrapper via the `gradlew` script in the root of the repository.
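For example, a run of the test suite against a JDK 8 runtime could look like this (the JDK path is illustrative, not prescribed by this guide):

```sh
RUNTIME_JAVA_HOME=/usr/lib/jvm/java-8-openjdk ./gradlew check
```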
@@ -214,7 +220,7 @@ If your changes affect only the documentation, run:

```sh
./gradlew -p docs check
```

For more information about testing code examples in the documentation, see
https://github.com/elastic/elasticsearch/blob/master/docs/README.asciidoc

### Project layout
@@ -305,6 +311,39 @@ the `qa` subdirectory functions just like the top level `qa` subdirectory. The
Elasticsearch process. The `transport-client` subdirectory contains extensions
to Elasticsearch's standard transport client to work properly with x-pack.

### Gradle Build

We use Gradle to build Elasticsearch because it is flexible enough to not only
build and package Elasticsearch, but also orchestrate all of the ways that we
have to test Elasticsearch.

#### Configurations

Gradle organizes dependencies and build artifacts into "configurations" and
allows you to use these configurations arbitrarily. Here are some of the most
common configurations in our build and how we use them (see the sketch after
this list):

<dl>
<dt>`compile`</dt><dd>Code that is on the classpath at both compile and
runtime. If the [`shadow`][shadow-plugin] plugin is applied to the project then
this code is bundled into the jar produced by the project.</dd>
<dt>`runtime`</dt><dd>Code that is not on the classpath at compile time but is
on the classpath at runtime. We mostly use this configuration to make sure that
we do not accidentally compile against dependencies of our dependencies, also
known as "transitive" dependencies.</dd>
<dt>`compileOnly`</dt><dd>Code that is on the classpath at compile time but that
should not be shipped with the project because it is "provided" by the runtime
somehow. Elasticsearch plugins use this configuration to include dependencies
that are bundled with Elasticsearch's server.</dd>
<dt>`shadow`</dt><dd>Only available in projects with the shadow plugin. Code
that is on the classpath at both compile and runtime but is *not* bundled into
the jar produced by the project. If you depend on a project with the `shadow`
plugin then you need to depend on this configuration because it will bring
along all of the dependencies you need at runtime.</dd>
<dt>`testCompile`</dt><dd>Code that is on the classpath for compiling tests
that are part of this project but not production code. The canonical example
of this is `junit`.</dd>
</dl>
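As a rough sketch of how these configurations combine in practice (the module and dependency coordinates below are hypothetical, not taken from the real build):

```groovy
// build.gradle — illustrative only
apply plugin: 'java'

dependencies {
  // compiled against and available at runtime; bundled if the shadow plugin is applied
  compile 'org.example:core-library:1.0'
  // runtime only, so we cannot accidentally compile against it
  runtime 'org.example:runtime-helper:1.0'
  // available at compile time but "provided" by Elasticsearch's server at runtime
  compileOnly 'org.example:server-provided:1.0'
  // test classpath only
  testCompile 'junit:junit:4.12'
}
```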

Contributing as part of a class
-------------------------------

@@ -337,3 +376,4 @@ repeating in this section because it has come up in this context.

[eclipse]: http://www.eclipse.org/community/eclipse_newsletter/2017/june/
[intellij]: https://blog.jetbrains.com/idea/2017/07/intellij-idea-2017-2-is-here-smart-sleek-and-snappy/
[shadow-plugin]: https://github.com/johnrengelman/shadow
@@ -209,10 +209,6 @@ The distribution for each project will be created under the @build/distributions

See the "TESTING":TESTING.asciidoc file for more information about running the Elasticsearch test suite.

h3. Upgrading from Elasticsearch 1.x?
h3. Upgrading from older Elasticsearch versions

In order to ensure a smooth upgrade process from earlier versions of
Elasticsearch (1.x), it is required to perform a full cluster restart. Please
see the "setup reference":
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html
for more details on the upgrade process.
In order to ensure a smooth upgrade process from earlier versions of Elasticsearch, please see our "upgrade documentation":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.
@@ -115,6 +115,11 @@ Vagrant.configure(2) do |config|
  'opensuse-42'.tap do |box|
    config.vm.define box, define_opts do |config|
      config.vm.box = 'elastic/opensuse-42-x86_64'

      # https://github.com/elastic/elasticsearch/issues/30295
      config.vm.provider 'virtualbox' do |vbox|
        vbox.customize ['storagectl', :id, '--name', 'SATA Controller', '--hostiocache', 'on']
      end
      suse_common config, box
    end
  end
@@ -4,36 +4,39 @@ This directory contains the microbenchmark suite of Elasticsearch. It relies on

## Purpose

We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.

## Getting Started

Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the result.
Just run `gradlew -p benchmarks run` from the project root
directory. It will build all microbenchmarks, execute them and print
the result.

## Running Microbenchmarks

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).
Running via an IDE is not supported as the results are meaningless
because we have no control over the JVM running the benchmarks.

If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark` or have special requirements
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:
If you want to run a specific benchmark class like, say,
`MemoryStatsBenchmark`, you can use `--args`:

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
```

JMH supports lots of command line parameters. Add `-h` to the command above to see the available command line options.
Everything in the `'` gets sent on the command line to JMH. The leading ` `
inside the `'`s is important. Without it parameters are sometimes sent to
gradle.
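For instance, to pass extra options through `--args` (the flags below are standard JMH command line options, shown purely as an illustration):

```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -wi 5 -i 10 -prof gc'
```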

## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
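A minimal sketch of such a class (the class name and measured logic are invented for illustration):

```java
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Ends with `Benchmark` per the naming convention above.
@State(Scope.Benchmark)
public class SampleSumBenchmark {
    private final long[] values = new long[1024];

    // JMH executes every method annotated with @Benchmark.
    @Benchmark
    public long sum() {
        long sum = 0;
        for (long value : values) {
            sum += value;
        }
        return sum;
    }
}
```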

## Tips and Best Practices
@@ -42,15 +45,15 @@ To get realistic results, you should exercise care when running benchmarks. Here

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Make sure to run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset` (see the example after this list).
* Fix the CPU frequency to keep Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
`performance` CPU governor.
* Vary the problem input size with `@Param`.
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
    * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
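As an illustration, pinning an uberjar run to one core while enabling the GC profiler might look like this (the core number and benchmark name are made up, and the jar path follows the older instructions above, so it may differ in your checkout):

```
taskset -c 0 java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar MySampleBenchmark -prof gc
```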
@@ -59,4 +62,4 @@ To get realistic results, you should exercise care when running benchmarks. Here

* Blindly believe the numbers that your microbenchmark produces but verify them by measuring e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.
@@ -17,23 +17,9 @@
 * under the License.
 */

buildscript {
  repositories {
    maven {
      url 'https://plugins.gradle.org/m2/'
    }
  }
  dependencies {
    classpath 'com.github.jengelman.gradle.plugins:shadow:2.0.4'
  }
}

apply plugin: 'elasticsearch.build'

// order of this section matters, see: https://github.com/johnrengelman/shadow/issues/336
apply plugin: 'application' // have the shadow plugin provide the runShadow task
apply plugin: 'application'
mainClassName = 'org.openjdk.jmh.Main'
apply plugin: 'com.github.johnrengelman.shadow' // build an uberjar with all benchmarks

// Not published so no need to assemble
tasks.remove(assemble)
@@ -61,10 +47,8 @@ compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-u
// needs to be added separately otherwise Gradle will quote it and javac will fail
compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])

forbiddenApis {
  // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
  ignoreFailures = true
}
// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
forbiddenApisMain.enabled = false

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false
@@ -80,24 +64,3 @@ thirdPartyAudit.excludes = [
  'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
  'org.openjdk.jmh.util.Utils'
]

shadowJar {
  classifier = 'benchmarks'
}

runShadow {
  executable = new File(project.runtimeJavaHome, 'bin/java')
}

// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
  dependsOn: shadowJar,
  description: 'Generates an uberjar with the microbenchmarks and all dependencies',
  group: 'Benchmark'
)

task jmh(
  dependsOn: runShadow,
  description: 'Runs all microbenchmarks',
  group: 'Benchmark'
)
@@ -1,7 +1,7 @@
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %m%n

# Do not log at all if it is not really critical - we're in a benchmark
rootLogger.level = error

build.gradle
@@ -16,21 +16,17 @@
 * specific language governing permissions and limitations
 * under the License.
 */

import org.apache.tools.ant.taskdefs.condition.Os
import org.apache.tools.ant.filters.ReplaceTokens
import com.github.jengelman.gradle.plugins.shadow.ShadowPlugin
import org.elasticsearch.gradle.BuildPlugin
import org.elasticsearch.gradle.LoggedExec
import org.elasticsearch.gradle.Version
import org.elasticsearch.gradle.VersionCollection
import org.elasticsearch.gradle.VersionProperties
import org.gradle.plugins.ide.eclipse.model.SourceFolder

import org.gradle.api.tasks.wrapper.Wrapper
import org.gradle.api.tasks.wrapper.Wrapper.DistributionType
import org.gradle.util.GradleVersion
import org.gradle.util.DistributionLocator
import org.apache.tools.ant.taskdefs.condition.Os
import org.apache.tools.ant.filters.ReplaceTokens

import java.nio.file.Files
import java.nio.file.Path
@@ -222,7 +218,7 @@ subprojects {
  "org.elasticsearch.gradle:build-tools:${version}": ':build-tools',
  "org.elasticsearch:rest-api-spec:${version}": ':rest-api-spec',
  "org.elasticsearch:elasticsearch:${version}": ':server',
  "org.elasticsearch:elasticsearch-cli:${version}": ':libs:cli',
  "org.elasticsearch:elasticsearch-cli:${version}": ':libs:elasticsearch-cli',
  "org.elasticsearch:elasticsearch-core:${version}": ':libs:core',
  "org.elasticsearch:elasticsearch-nio:${version}": ':libs:nio',
  "org.elasticsearch:elasticsearch-x-content:${version}": ':libs:x-content',
@@ -303,18 +299,55 @@ subprojects {
  if (project.plugins.hasPlugin(BuildPlugin)) {
    String artifactsHost = VersionProperties.elasticsearch.isSnapshot() ? "https://snapshots.elastic.co" : "https://artifacts.elastic.co"
    Closure sortClosure = { a, b -> b.group <=> a.group }
    Closure depJavadocClosure = { dep ->
      if (dep.group != null && dep.group.startsWith('org.elasticsearch')) {
        Project upstreamProject = dependencyToProject(dep)
        if (upstreamProject != null) {
          project.javadoc.dependsOn "${upstreamProject.path}:javadoc"
          String artifactPath = dep.group.replaceAll('\\.', '/') + '/' + dep.name.replaceAll('\\.', '/') + '/' + dep.version
          project.javadoc.options.linksOffline artifactsHost + "/javadoc/" + artifactPath, "${upstreamProject.buildDir}/docs/javadoc/"
    Closure depJavadocClosure = { shadowed, dep ->
      if (dep.group == null || false == dep.group.startsWith('org.elasticsearch')) {
        return
      }
      Project upstreamProject = dependencyToProject(dep)
      if (upstreamProject == null) {
        return
      }
      if (shadowed) {
        /*
         * Include the source of shadowed upstream projects so we don't
         * have to publish their javadoc.
         */
        project.evaluationDependsOn(upstreamProject.path)
        project.javadoc.source += upstreamProject.javadoc.source
        /*
         * Do not add those projects to the javadoc classpath because
         * we are going to resolve them with their source instead.
         */
        project.javadoc.classpath = project.javadoc.classpath.filter { f ->
          false == upstreamProject.configurations.archives.artifacts.files.files.contains(f)
        }
        /*
         * Instead we need the upstream project's javadoc classpath so
         * we don't barf on the classes that it references.
         */
        project.javadoc.classpath += upstreamProject.javadoc.classpath
      } else {
        // Link to non-shadowed dependant projects
        project.javadoc.dependsOn "${upstreamProject.path}:javadoc"
        String artifactPath = dep.group.replaceAll('\\.', '/') + '/' + dep.name.replaceAll('\\.', '/') + '/' + dep.version
        project.javadoc.options.linksOffline artifactsHost + "/javadoc/" + artifactPath, "${upstreamProject.buildDir}/docs/javadoc/"
      }
    }
    project.configurations.compile.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure)
    project.configurations.compileOnly.dependencies.findAll().toSorted(sortClosure).each(depJavadocClosure)
    boolean hasShadow = project.plugins.hasPlugin(ShadowPlugin)
    project.configurations.compile.dependencies
      .findAll()
      .toSorted(sortClosure)
      .each({ c -> depJavadocClosure(hasShadow, c) })
    project.configurations.compileOnly.dependencies
      .findAll()
      .toSorted(sortClosure)
      .each({ c -> depJavadocClosure(hasShadow, c) })
    if (hasShadow) {
      project.configurations.shadow.dependencies
        .findAll()
        .toSorted(sortClosure)
        .each({ c -> depJavadocClosure(false, c) })
    }
  }
}
}
@@ -479,6 +512,31 @@ allprojects {
  tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)
}

allprojects {
  /*
   * IntelliJ and Eclipse don't know about the shadow plugin so when we're
   * in "IntelliJ mode" or "Eclipse mode" add "runtime" dependencies
   * everywhere we see a "shadow" dependency, which will cause them to
   * reference shadowed projects directly rather than rely on the shadowing
   * to include them. This is the correct thing for it to do because it
   * doesn't run the jar shadowing at all. This isn't needed for the project
   * itself because the IDE configuration is done by SourceSets but it
   * *is* needed for projects that depend on the project doing the shadowing.
   * Without this they won't properly depend on the shadowed project.
   */
  if (isEclipse || isIdea) {
    configurations.all { Configuration configuration ->
      dependencies.all { Dependency dep ->
        if (dep instanceof ProjectDependency) {
          if (dep.getTargetConfiguration() == 'shadow') {
            configuration.dependencies.add(project.dependencies.project(path: dep.dependencyProject.path, configuration: 'runtime'))
          }
        }
      }
    }
  }
}

// we need to add the same --debug-jvm option as
// the real RunTask has, so we can pass it through
class Run extends DefaultTask {
@@ -500,7 +558,7 @@ task run(type: Run) {
}

wrapper {
  distributionType = DistributionType.ALL
  distributionType = 'ALL'
  doLast {
    final DistributionLocator locator = new DistributionLocator()
    final GradleVersion version = GradleVersion.version(wrapper.gradleVersion)

@@ -509,6 +567,10 @@ wrapper {
    final String sha256Sum = new String(sha256Uri.toURL().bytes)
    wrapper.getPropertiesFile() << "distributionSha256Sum=${sha256Sum}\n"
    println "Added checksum to wrapper properties"
    // Update build-tools to reflect the Gradle upgrade
    // TODO: we can remove this once we have tests to make sure older versions work.
    project(':build-tools').file('src/main/resources/minimumGradleVersion').text = gradleVersion
    println "Updated minimum Gradle Version"
  }
}
@@ -537,6 +599,7 @@ subprojects { project ->
      commandLine "${->new File(rootProject.compilerJavaHome, 'bin/jar')}",
        'xf', "${-> jarTask.outputs.files.singleFile}", 'META-INF/LICENSE.txt', 'META-INF/NOTICE.txt'
      workingDir destination
      onlyIf {jarTask.enabled}
      doFirst {
        project.delete(destination)
        Files.createDirectories(destination)

@@ -545,6 +608,7 @@ subprojects { project ->

    final Task checkNotice = project.task("verify${jarTask.name.capitalize()}Notice") {
      dependsOn extract
      onlyIf {jarTask.enabled}
      doLast {
        final List<String> noticeLines = Files.readAllLines(project.noticeFile.toPath())
        final Path noticePath = extract.destination.resolve('META-INF/NOTICE.txt')

@@ -555,6 +619,7 @@ subprojects { project ->

    final Task checkLicense = project.task("verify${jarTask.name.capitalize()}License") {
      dependsOn extract
      onlyIf {jarTask.enabled}
      doLast {
        final List<String> licenseLines = Files.readAllLines(project.licenseFile.toPath())
        final Path licensePath = extract.destination.resolve('META-INF/LICENSE.txt')
@@ -582,6 +647,21 @@ gradle.projectsEvaluated {
    }
  }
}
// Having the same group and name for distinct projects causes Gradle to consider them equal when resolving
// dependencies, leading to hard to debug failures. Run a check across all projects to prevent this from happening.
// see: https://github.com/gradle/gradle/issues/847
Map coordsToProject = [:]
project.allprojects.forEach { p ->
  String coords = "${p.group}:${p.name}"
  if (false == coordsToProject.putIfAbsent(coords, p)) {
    throw new GradleException(
      "Detected that two projects: ${p.path} and ${coordsToProject[coords].path} " +
        "have the same name and group: ${coords}. " +
        "This doesn't currently work correctly in Gradle, see: " +
        "https://github.com/gradle/gradle/issues/847"
    )
  }
}
}

if (System.properties.get("build.compare") != null) {

@@ -596,7 +676,7 @@ if (System.properties.get("build.compare") != null) {
  }
}
sourceBuild {
  gradleVersion = "4.8.1" // does not default to gradle weapper of project dir, but current version
  gradleVersion = gradle.getGradleVersion()
  projectDir = referenceProject
  tasks = ["clean", "assemble"]
  arguments = ["-Dbuild.compare_friendly=true"]
@@ -25,8 +25,9 @@ plugins {

group = 'org.elasticsearch.gradle'

if (GradleVersion.current() < GradleVersion.version('3.3')) {
  throw new GradleException('Gradle 3.3+ is required to build elasticsearch')
String minimumGradleVersion = file('src/main/resources/minimumGradleVersion').text.trim()
if (GradleVersion.current() < GradleVersion.version(minimumGradleVersion)) {
  throw new GradleException("Gradle ${minimumGradleVersion}+ is required to build elasticsearch")
}

if (JavaVersion.current() < JavaVersion.VERSION_1_8) {
@@ -104,6 +105,7 @@ dependencies {
  compile 'de.thetaphi:forbiddenapis:2.5'
  compile 'org.apache.rat:apache-rat:0.11'
  compile "org.elasticsearch:jna:4.5.1"
  compile 'com.github.jengelman.gradle.plugins:shadow:2.0.4'
  testCompile "junit:junit:${props.getProperty('junit')}"
}
@@ -181,4 +183,12 @@ if (project != rootProject) {
    testClass = 'org.elasticsearch.gradle.test.GradleUnitTestCase'
    integTestClass = 'org.elasticsearch.gradle.test.GradleIntegrationTestCase'
  }

  /*
   * We already configure publication and we don't need or want this one that
   * comes from the java-gradle-plugin.
   */
  afterEvaluate {
    generatePomFileForPluginMavenPublication.enabled = false
  }
}
@@ -74,7 +74,7 @@ class RandomizedTestingPlugin implements Plugin<Project> {
    // since we can't be sure if the task was ever realized, we remove both the provider and the task
    TaskProvider<Test> oldTestProvider
    try {
      oldTestProvider = tasks.getByNameLater(Test, 'test')
      oldTestProvider = tasks.named('test')
    } catch (UnknownTaskException unused) {
      // no test task, ok, user will use testing task on their own
      return
@@ -19,6 +19,8 @@
package org.elasticsearch.gradle

import com.carrotsearch.gradle.junit4.RandomizedTestingTask
import com.github.jengelman.gradle.plugins.shadow.ShadowPlugin
import org.apache.commons.io.IOUtils
import org.apache.tools.ant.taskdefs.condition.Os
import org.eclipse.jgit.lib.Constants
import org.eclipse.jgit.lib.RepositoryBuilder

@@ -36,12 +38,14 @@ import org.gradle.api.artifacts.ModuleDependency
import org.gradle.api.artifacts.ModuleVersionIdentifier
import org.gradle.api.artifacts.ProjectDependency
import org.gradle.api.artifacts.ResolvedArtifact
import org.gradle.api.artifacts.SelfResolvingDependency
import org.gradle.api.artifacts.dsl.RepositoryHandler
import org.gradle.api.execution.TaskExecutionGraph
import org.gradle.api.plugins.JavaPlugin
import org.gradle.api.publish.maven.MavenPublication
import org.gradle.api.publish.maven.plugins.MavenPublishPlugin
import org.gradle.api.publish.maven.tasks.GenerateMavenPom
import org.gradle.api.tasks.SourceSet
import org.gradle.api.tasks.bundling.Jar
import org.gradle.api.tasks.compile.GroovyCompile
import org.gradle.api.tasks.compile.JavaCompile

@@ -50,6 +54,7 @@ import org.gradle.internal.jvm.Jvm
import org.gradle.process.ExecResult
import org.gradle.util.GradleVersion

import java.nio.charset.StandardCharsets
import java.time.ZoneOffset
import java.time.ZonedDateTime
/**
@@ -64,6 +69,14 @@ class BuildPlugin implements Plugin<Project> {
          + 'elasticearch.standalone-rest-test, and elasticsearch.build '
          + 'are mutually exclusive')
    }
    final String minimumGradleVersion
    InputStream is = getClass().getResourceAsStream("/minimumGradleVersion")
    try { minimumGradleVersion = IOUtils.toString(is, StandardCharsets.UTF_8.toString()) } finally { is.close() }
    if (GradleVersion.current() < GradleVersion.version(minimumGradleVersion.trim())) {
      throw new GradleException(
        "Gradle ${minimumGradleVersion}+ is required to use elasticsearch.build plugin"
      )
    }
    project.pluginManager.apply('java')
    project.pluginManager.apply('carrotsearch.randomized-testing')
    // these plugins add lots of info to our jars
@@ -125,6 +138,9 @@ class BuildPlugin implements Plugin<Project> {
      runtimeJavaVersionEnum = JavaVersion.toVersion(findJavaSpecificationVersion(project, runtimeJavaHome))
    }

    String inFipsJvmScript = 'print(java.security.Security.getProviders()[0].name.toLowerCase().contains("fips"));'
    boolean inFipsJvm = Boolean.parseBoolean(runJavascript(project, runtimeJavaHome, inFipsJvmScript))

    // Build debugging info
    println '======================================='
    println 'Elasticsearch Build Hamster says Hello!'
@@ -144,14 +160,6 @@ class BuildPlugin implements Plugin<Project> {
    }
    println "  Random Testing Seed : ${project.testSeed}"

    // enforce Gradle version
    final GradleVersion currentGradleVersion = GradleVersion.current();

    final GradleVersion minGradle = GradleVersion.version('4.3')
    if (currentGradleVersion < minGradle) {
      throw new GradleException("${minGradle} or above is required to build Elasticsearch")
    }

    // enforce Java version
    if (compilerJavaVersionEnum < minimumCompilerVersion) {
      final String message =
@@ -196,6 +204,7 @@ class BuildPlugin implements Plugin<Project> {
      project.rootProject.ext.buildChecksDone = true
      project.rootProject.ext.minimumCompilerVersion = minimumCompilerVersion
      project.rootProject.ext.minimumRuntimeVersion = minimumRuntimeVersion
      project.rootProject.ext.inFipsJvm = inFipsJvm
    }

    project.targetCompatibility = project.rootProject.ext.minimumRuntimeVersion

@@ -207,6 +216,7 @@ class BuildPlugin implements Plugin<Project> {
    project.ext.compilerJavaVersion = project.rootProject.ext.compilerJavaVersion
    project.ext.runtimeJavaVersion = project.rootProject.ext.runtimeJavaVersion
    project.ext.javaVersions = project.rootProject.ext.javaVersions
    project.ext.inFipsJvm = project.rootProject.ext.inFipsJvm
  }

  private static String findCompilerJavaHome() {
@@ -216,7 +226,11 @@ class BuildPlugin implements Plugin<Project> {
      // IntelliJ does not set JAVA_HOME, so we use the JDK that Gradle was run with
      return Jvm.current().javaHome
    } else {
      throw new GradleException("JAVA_HOME must be set to build Elasticsearch")
      throw new GradleException(
        "JAVA_HOME must be set to build Elasticsearch. " +
          "Note that if the variable was just set you might have to run `./gradlew --stop` for " +
          "it to be picked up. See https://github.com/elastic/elasticsearch/issues/31399 for details."
      )
    }
  }
  return javaHome
@@ -376,6 +390,9 @@ class BuildPlugin implements Plugin<Project> {
    project.configurations.compile.dependencies.all(disableTransitiveDeps)
    project.configurations.testCompile.dependencies.all(disableTransitiveDeps)
    project.configurations.compileOnly.dependencies.all(disableTransitiveDeps)
    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      project.configurations.shadow.dependencies.all(disableTransitiveDeps)
    }
  }

  /** Adds repositories used by ES dependencies */
@@ -498,7 +515,41 @@ class BuildPlugin implements Plugin<Project> {
        }
      }
    }
    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      project.publishing {
        publications {
          nebula(MavenPublication) {
            artifact project.tasks.shadowJar
            artifactId = project.archivesBaseName
            /*
             * Configure the pom to include the "shadow" as compile dependencies
             * because that is how we're using them but remove all other dependencies
             * because they've been shaded into the jar.
             */
            pom.withXml { XmlProvider xml ->
              Node root = xml.asNode()
              root.remove(root.dependencies)
              Node dependenciesNode = root.appendNode('dependencies')
              project.configurations.shadow.allDependencies.each {
                if (false == it instanceof SelfResolvingDependency) {
                  Node dependencyNode = dependenciesNode.appendNode('dependency')
                  dependencyNode.appendNode('groupId', it.group)
                  dependencyNode.appendNode('artifactId', it.name)
                  dependencyNode.appendNode('version', it.version)
                  dependencyNode.appendNode('scope', 'compile')
                }
              }
              // Be tidy and remove the element if it is empty
              if (dependenciesNode.children.empty) {
                root.remove(dependenciesNode)
              }
            }
          }
        }
      }
    }
  }

  }

  /** Adds compiler settings to the project */
@@ -660,6 +711,28 @@ class BuildPlugin implements Plugin<Project> {
        }
      }
    }
    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      /*
       * When we use the shadow plugin we entirely replace the
       * normal jar with the shadow jar so we no longer want to run
       * the jar task.
       */
      project.tasks.jar.enabled = false
      project.tasks.shadowJar {
        /*
         * Replace the default "shadow" classifier with null
         * which will leave the classifier off of the file name.
         */
        classifier = null
        /*
         * Not all cases need service files merged but it is
         * better to be safe
         */
        mergeServiceFiles()
      }
      // Make sure we assemble the shadow jar
      project.tasks.assemble.dependsOn project.tasks.shadowJar
    }
  }

  /** Returns a closure of common configuration shared by unit and integration tests. */
@@ -691,7 +764,6 @@ class BuildPlugin implements Plugin<Project> {
      systemProperty 'tests.task', path
      systemProperty 'tests.security.manager', 'true'
      systemProperty 'jna.nosys', 'true'
      systemProperty 'es.scripting.exception_for_missing_value', 'true'
      // TODO: remove setting logging level via system property
      systemProperty 'tests.logger.level', 'WARN'
      for (Map.Entry<String, String> property : System.properties.entrySet()) {
@@ -705,7 +777,11 @@ class BuildPlugin implements Plugin<Project> {
        systemProperty property.getKey(), property.getValue()
      }
    }

    // Set the system keystore/truststore password if we're running tests in a FIPS-140 JVM
    if (project.inFipsJvm) {
      systemProperty 'javax.net.ssl.trustStorePassword', 'password'
      systemProperty 'javax.net.ssl.keyStorePassword', 'password'
    }
    boolean assertionsEnabled = Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))
    enableSystemAssertions assertionsEnabled
    enableAssertions assertionsEnabled
@@ -744,6 +820,18 @@ class BuildPlugin implements Plugin<Project> {
    }

    exclude '**/*$*.class'

    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      /*
       * If we make a shaded jar we test against it.
       */
      classpath -= project.tasks.compileJava.outputs.files
      classpath -= project.configurations.compile
      classpath -= project.configurations.runtime
      classpath += project.configurations.shadow
      classpath += project.tasks.shadowJar.outputs.files
      dependsOn project.tasks.shadowJar
    }
  }
}
@@ -764,9 +852,28 @@ class BuildPlugin implements Plugin<Project> {
      additionalTest.configure(commonTestConfig(project))
      additionalTest.configure(config)
      additionalTest.dependsOn(project.tasks.testClasses)
      test.dependsOn(additionalTest)
      project.check.dependsOn(additionalTest)
    });
    return test

    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      /*
       * We need somewhere to configure dependencies that we don't wish
       * to shade into the jar. The shadow plugin creates a "shadow"
       * configuration which is *almost* exactly that. It is never
       * bundled into the shaded jar but is used for main source
       * compilation. Unfortunately, by default it is not used for
       * *test* source compilation and isn't used in tests at all. This
       * change makes it available for test compilation.
       *
       * Note that this isn't going to work properly with qa projects
       * but they have no business applying the shadow plugin in the
       * first place.
       */
      SourceSet testSourceSet = project.sourceSets.findByName('test')
      if (testSourceSet != null) {
        testSourceSet.compileClasspath += project.configurations.shadow
      }
    }
  }

  private static configurePrecommit(Project project) {
@@ -777,11 +884,20 @@ class BuildPlugin implements Plugin<Project> {
    project.dependencyLicenses.dependencies = project.configurations.runtime.fileCollection {
      it.group.startsWith('org.elasticsearch') == false
    } - project.configurations.compileOnly
    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      project.dependencyLicenses.dependencies += project.configurations.shadow.fileCollection {
        it.group.startsWith('org.elasticsearch') == false
      }
    }
  }

  private static configureDependenciesInfo(Project project) {
    Task deps = project.tasks.create("dependenciesInfo", DependenciesInfoTask.class)
    deps.runtimeConfiguration = project.configurations.runtime
    project.plugins.withType(ShadowPlugin).whenPluginAdded {
      deps.runtimeConfiguration = project.configurations.create('infoDeps')
      deps.runtimeConfiguration.extendsFrom(project.configurations.runtime, project.configurations.shadow)
    }
    deps.compileOnlyConfiguration = project.configurations.compileOnly
    project.afterEvaluate {
      deps.mappings = project.dependencyLicenses.mappings
@@ -18,11 +18,13 @@
 */
package org.elasticsearch.gradle.plugin

import com.github.jengelman.gradle.plugins.shadow.ShadowPlugin
import nebula.plugin.info.scm.ScmInfoPlugin
import org.elasticsearch.gradle.BuildPlugin
import org.elasticsearch.gradle.NoticeTask
import org.elasticsearch.gradle.test.RestIntegTestTask
import org.elasticsearch.gradle.test.RunTask
import org.gradle.api.InvalidUserDataException
import org.gradle.api.JavaVersion
import org.gradle.api.Project
import org.gradle.api.Task
@@ -61,10 +63,10 @@ public class PluginBuildPlugin extends BuildPlugin {
        // and generate a different pom for the zip
        addClientJarPomGeneration(project)
        addClientJarTask(project)
      } else {
        // no client plugin, so use the pom file from nebula, without jar, for the zip
        project.ext.set("nebulaPublish.maven.jar", false)
      }
      // while the jar isn't normally published, we still at least build a pom of deps
      // in case it is published, for instance when other plugins extend this plugin
      configureJarPom(project)

      project.integTestCluster.dependsOn(project.bundlePlugin)
      project.tasks.run.dependsOn(project.bundlePlugin)

@@ -80,7 +82,6 @@ public class PluginBuildPlugin extends BuildPlugin {
      }

      if (isModule == false || isXPackModule) {
        addZipPomGeneration(project)
        addNoticeGeneration(project)
      }
@@ -140,8 +141,13 @@ public class PluginBuildPlugin extends BuildPlugin {
        include(buildProperties.descriptorOutput.name)
      }
      from pluginMetadata // metadata (eg custom security policy)
      from project.jar // this plugin's jar
      from project.configurations.runtime - project.configurations.compileOnly // the dep jars
      /*
       * If the plugin is using the shadow plugin then we need to bundle
       * "shadow" things rather than the default jar and dependencies so
       * we don't hit jar hell.
       */
      from { project.plugins.hasPlugin(ShadowPlugin) ? project.shadowJar : project.jar }
      from { project.plugins.hasPlugin(ShadowPlugin) ? project.configurations.shadow : project.configurations.runtime - project.configurations.compileOnly }
      // extra files for the plugin to go into the zip
      from('src/main/packaging') // TODO: move all config/bin/_size/etc into packaging
      from('src/main') {
@@ -225,36 +231,15 @@ public class PluginBuildPlugin extends BuildPlugin {
    }
  }

  /** Adds a task to generate a pom file for the zip distribution. */
  public static void addZipPomGeneration(Project project) {
  /** Configure the pom for the main jar of this plugin */
  protected static void configureJarPom(Project project) {
    project.plugins.apply(ScmInfoPlugin.class)
    project.plugins.apply(MavenPublishPlugin.class)

    project.publishing {
      publications {
        zip(MavenPublication) {
          artifact project.bundlePlugin
        }
        /* HUGE HACK: the underlying maven publication library refuses to deploy any attached artifacts
         * when the packaging type is set to 'pom'. But Sonatype's OSS repositories require source files
         * for artifacts that are of type 'zip'. We already publish the source and javadoc for Elasticsearch
         * under the various other subprojects. So here we create another publication using the same
         * name that has the "real" pom, and rely on the fact that gradle will execute the publish tasks
         * in alphabetical order. This lets us publish the zip file and even though the pom says the
         * type is 'pom' instead of 'zip'. We cannot setup a dependency between the tasks because the
         * publishing tasks are created *extremely* late in the configuration phase, so that we cannot get
         * ahold of the actual task. Furthermore, this entire hack only exists so we can make publishing to
         * maven local work, since we publish to maven central externally. */
        zipReal(MavenPublication) {
          artifactId = project.pluginProperties.extension.name
          pom.withXml { XmlProvider xml ->
            Node root = xml.asNode()
            root.appendNode('name', project.pluginProperties.extension.name)
            root.appendNode('description', project.pluginProperties.extension.description)
            root.appendNode('url', urlFromOrigin(project.scminfo.origin))
            Node scmNode = root.appendNode('scm')
            scmNode.appendNode('url', project.scminfo.origin)
          }
        nebula(MavenPublication) {
          artifactId project.pluginProperties.extension.name
        }
      }
    }
@@ -137,7 +137,12 @@ class ClusterConfiguration {
    this.project = project
  }

  Map<String, String> systemProperties = new HashMap<>()
  // **Note** for systemProperties, settings, keystoreFiles etc:
  // a value could be a GString that is evaluated to just a String;
  // there are cases where the value depends on a task that has not been executed yet at configuration time
  Map<String, Object> systemProperties = new HashMap<>()

  Map<String, Object> environmentVariables = new HashMap<>()

  Map<String, Object> settings = new HashMap<>()
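A hedged illustration of why these maps accept `Object` values (the task and property names below are invented): a lazily evaluated GString can be stored at configuration time and resolved only when the cluster actually starts, after the task it depends on has run.

```groovy
// hypothetical build script usage
task prepareData {
  ext.outputDir = file("${buildDir}/data")
  doLast { outputDir.mkdirs() }
}

integTestCluster {
  // the ${-> ...} GString defers evaluation until the value is actually used
  systemProperty 'es.example.data.path', "${-> prepareData.outputDir.absolutePath}"
}
```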
@@ -157,10 +162,15 @@ class ClusterConfiguration {
  List<Object> dependencies = new ArrayList<>()

  @Input
  void systemProperty(String property, String value) {
  void systemProperty(String property, Object value) {
    systemProperties.put(property, value)
  }

  @Input
  void environment(String variable, Object value) {
    environmentVariables.put(variable, value)
  }

  @Input
  void setting(String name, Object value) {
    settings.put(name, value)
@@ -331,6 +331,12 @@ class ClusterFormationTasks {
      }
      // increase script compilation limit since tests can rapid-fire script compilations
      esConfig['script.max_compilations_rate'] = '2048/1m'
      // Temporarily disable the real memory usage circuit breaker. It depends on real memory usage which we have no full control
      // over and the REST client will not retry on circuit breaking exceptions yet (see #31986 for details). Once the REST client
      // can retry on circuit breaking exceptions, we can revert again to the default configuration.
      if (node.nodeVersion.major >= 7) {
        esConfig['indices.breaker.total.use_real_memory'] = false
      }
      esConfig.putAll(node.config.settings)

      Task writeConfig = project.tasks.create(name: name, type: DefaultTask, dependsOn: setup)
@@ -603,7 +609,6 @@ class ClusterFormationTasks {

  /** Adds a task to start an elasticsearch node with the given configuration */
  static Task configureStartTask(String name, Project project, Task setup, NodeInfo node) {

    // this closure is converted into ant nodes by groovy's AntBuilder
    Closure antRunner = { AntBuilder ant ->
      ant.exec(executable: node.executable, spawn: node.config.daemonize, dir: node.cwd, taskname: 'elasticsearch') {

@@ -624,13 +629,6 @@ class ClusterFormationTasks {
      node.writeWrapperScript()
    }

    // we must add debug options inside the closure so the config is read at execution time, as
    // gradle task options are not processed until the end of the configuration phase
    if (node.config.debug) {
      println 'Running elasticsearch in debug mode, suspending until connected on port 8000'
      node.env['ES_JAVA_OPTS'] = '-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000'
    }

    node.getCommandString().eachLine { line -> logger.info(line) }

    if (logger.isInfoEnabled() || node.config.daemonize == false) {
@@ -648,6 +646,27 @@ class ClusterFormationTasks {
    }
    start.doLast(elasticsearchRunner)
    start.doFirst {
      // Configure ES JAVA OPTS - adds system properties, assertion flags, remote debug etc
      List<String> esJavaOpts = [node.env.get('ES_JAVA_OPTS', '')]
      String collectedSystemProperties = node.config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
      esJavaOpts.add(collectedSystemProperties)
      esJavaOpts.add(node.config.jvmArgs)
      if (Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))) {
        // put the enable assertions options before other options to allow
        // flexibility to disable assertions for specific packages or classes
        // in the cluster-specific options
        esJavaOpts.add("-ea")
        esJavaOpts.add("-esa")
      }
      // we must add debug options inside the closure so the config is read at execution time, as
      // gradle task options are not processed until the end of the configuration phase
      if (node.config.debug) {
        println 'Running elasticsearch in debug mode, suspending until connected on port 8000'
        esJavaOpts.add('-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000')
      }
      node.env['ES_JAVA_OPTS'] = esJavaOpts.join(" ")

      //
      project.logger.info("Starting node in ${node.clusterName} distribution: ${node.config.distribution}")
    }
    return start
@@ -180,15 +180,8 @@ class NodeInfo {
    }

    args.addAll("-E", "node.portsfile=true")
    String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
    String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs
    if (Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))) {
      // put the enable assertions options before other options to allow
      // flexibility to disable assertions for specific packages or classes
      // in the cluster-specific options
      esJavaOpts = String.join(" ", "-ea", "-esa", esJavaOpts)
    }
    env = ['ES_JAVA_OPTS': esJavaOpts]
    env = [:]
    env.putAll(config.environmentVariables)
    for (Map.Entry<String, String> property : System.properties.entrySet()) {
      if (property.key.startsWith('tests.es.')) {
        args.add("-E")
@@ -24,7 +24,6 @@ import org.elasticsearch.gradle.VersionProperties
import org.gradle.api.DefaultTask
import org.gradle.api.Project
import org.gradle.api.Task
import org.gradle.api.Transformer
import org.gradle.api.execution.TaskExecutionAdapter
import org.gradle.api.internal.tasks.options.Option
import org.gradle.api.provider.Property

@@ -217,7 +216,7 @@ public class RestIntegTestTask extends DefaultTask {
   * @param project The project to add the copy task to
   * @param includePackagedTests true if the packaged tests should be copied, false otherwise
   */
  private static Task createCopyRestSpecTask(Project project, Provider<Boolean> includePackagedTests) {
  static Task createCopyRestSpecTask(Project project, Provider<Boolean> includePackagedTests) {
    project.configurations {
      restSpec
    }
@@ -16,6 +16,8 @@ import org.gradle.api.tasks.SourceSetContainer;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.URL;
import java.util.Objects;

/**

@@ -30,16 +32,25 @@ public class NamingConventionsTask extends LoggedExec {
        final Project project = getProject();

        SourceSetContainer sourceSets = getJavaSourceSets();
        final FileCollection classpath = project.files(
                // This works because the class only depends on one class from junit that will be available from the
                // tests compile classpath. It's the most straight forward way of telling Java where to find the main
                // class.
                NamingConventionsCheck.class.getProtectionDomain().getCodeSource().getLocation().getPath(),
                // the tests to be loaded
                checkForTestsInMain ? sourceSets.getByName("main").getRuntimeClasspath() : project.files(),
                sourceSets.getByName("test").getCompileClasspath(),
                sourceSets.getByName("test").getOutput()
        );
        final FileCollection classpath;
        try {
            URL location = NamingConventionsCheck.class.getProtectionDomain().getCodeSource().getLocation();
            if (location.getProtocol().equals("file") == false) {
                throw new GradleException("Unexpected location for NamingConventionCheck class: " + location);
            }
            classpath = project.files(
                    // This works because the class only depends on one class from junit that will be available from the
                    // tests compile classpath. It's the most straight forward way of telling Java where to find the main
                    // class.
                    location.toURI().getPath(),
                    // the tests to be loaded
                    checkForTestsInMain ? sourceSets.getByName("main").getRuntimeClasspath() : project.files(),
                    sourceSets.getByName("test").getCompileClasspath(),
                    sourceSets.getByName("test").getOutput()
            );
        } catch (URISyntaxException e) {
            throw new AssertionError(e);
        }
        dependsOn(project.getTasks().matching(it -> "testCompileClasspath".equals(it.getName())));
        getInputs().files(classpath);

@@ -111,10 +122,6 @@ public class NamingConventionsTask extends LoggedExec {
        this.successMarker = successMarker;
    }

    public boolean getSkipIntegTestInDisguise() {
        return skipIntegTestInDisguise;
    }

    public boolean isSkipIntegTestInDisguise() {
        return skipIntegTestInDisguise;
    }
@@ -0,0 +1 @@
4.9
@@ -1,6 +1,11 @@
package org.elasticsearch.gradle.test;

import org.gradle.testkit.runner.GradleRunner;

import java.io.File;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public abstract class GradleIntegrationTestCase extends GradleUnitTestCase {

@@ -13,4 +18,47 @@ public abstract class GradleIntegrationTestCase extends GradleUnitTestCase {
        return new File(root, name);
    }

    protected GradleRunner getGradleRunner(String sampleProject) {
        return GradleRunner.create()
            .withProjectDir(getProjectDir(sampleProject))
            .withPluginClasspath();
    }

    protected File getBuildDir(String name) {
        return new File(getProjectDir(name), "build");
    }

    protected void assertOutputContains(String output, String... lines) {
        for (String line : lines) {
            assertOutputContains(output, line);
        }
        List<Integer> index = Stream.of(lines).map(line -> output.indexOf(line)).collect(Collectors.toList());
        if (index.equals(index.stream().sorted().collect(Collectors.toList())) == false) {
            fail("Expected the following lines to appear in this order:\n" +
                Stream.of(lines).map(line -> " - `" + line + "`").collect(Collectors.joining("\n")) +
                "\nBut they did not. Output is:\n\n```" + output + "\n```\n"
            );
        }
    }

    protected void assertOutputContains(String output, String line) {
        assertTrue(
            "Expected the following line in output:\n\n" + line + "\n\nOutput is:\n" + output,
            output.contains(line)
        );
    }

    protected void assertOutputDoesNotContain(String output, String line) {
        assertFalse(
            "Expected the following line not to be in output:\n\n" + line + "\n\nOutput is:\n" + output,
            output.contains(line)
        );
    }

    protected void assertOutputDoesNotContain(String output, String... lines) {
        for (String line : lines) {
            assertOutputDoesNotContain(output, line);
        }
    }

}
@@ -1,5 +1,5 @@
elasticsearch = 7.0.0-alpha1
lucene = 7.4.0
lucene = 7.5.0-snapshot-608f0277b0

# optional dependencies
spatial4j = 0.7
@@ -2,10 +2,18 @@

1. Build `client-benchmark-noop-api-plugin` with `gradle :client:client-benchmark-noop-api-plugin:assemble`
2. Install it on the target host with `bin/elasticsearch-plugin install file:///full/path/to/client-benchmark-noop-api-plugin.zip`
3. Start Elasticsearch on the target host (ideally *not* on the same machine)
4. Build an uberjar with `gradle :client:benchmark:shadowJar` and execute it.
3. Start Elasticsearch on the target host (ideally *not* on the machine
that runs the benchmarks)
4. Run the benchmark with

```
./gradlew -p client/benchmark run --args ' params go here'
```

Repeat all steps above for the other benchmark candidate.
Everything in the `'` gets sent on the command line to JMH. The leading ` `
inside the `'`s is important. Without it parameters are sometimes sent to
gradle.

See below for some example invocations.

### Example benchmark
@@ -13,32 +21,35 @@ In general, you should define a few GC-related settings `-Xms8192M -Xmx8192M -XX

#### Bulk indexing

Download benchmark data from http://benchmarks.elastic.co/corpora/geonames/documents.json.bz2 and decompress them.
Download benchmark data from http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames and decompress them.

Example command line parameters:
Example invocation:

```
rest bulk 192.168.2.2 ./documents.json geonames type 8647880 5000
wget http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames/documents-2.json.bz2
bzip2 -d documents-2.json.bz2
mv documents-2.json client/benchmark/build
gradlew -p client/benchmark run --args ' rest bulk localhost build/documents-2.json geonames type 8647880 5000'
```

The parameters are in order:
The parameters are all in the `'`s and are in order:

* Client type: Use either "rest" or "transport"
* Benchmark type: Use either "bulk" or "search"
* Benchmark target host IP (the host where Elasticsearch is running)
* full path to the file that should be bulk indexed
* name of the index
* name of the (sole) type in the index
* number of documents in the file
* bulk size

#### Bulk indexing
#### Search

Example command line parameters:
Example invocation:

```
rest search 192.168.2.2 geonames "{ \"query\": { \"match_phrase\": { \"name\": \"Sankt Georgen\" } } }\"" 500,1000,1100,1200
gradlew -p client/benchmark run --args ' rest search localhost geonames {"query":{"match_phrase":{"name":"Sankt Georgen"}}} 500,1000,1100,1200'
```

The parameters are in order:
@ -49,5 +60,3 @@ The parameters are in order:
|
|||
* name of the index
|
||||
* a search request body (remember to escape double quotes). The `TransportClientBenchmark` uses `QueryBuilders.wrapperQuery()` internally which automatically adds a root key `query`, so it must not be present in the command line parameter.
|
||||
* A comma-separated list of target throughput rates
|
||||
|

@ -17,22 +17,7 @@
 * under the License.
 */

buildscript {
    repositories {
        maven {
            url 'https://plugins.gradle.org/m2/'
        }
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:2.0.4'
    }
}

apply plugin: 'elasticsearch.build'
// build an uberjar with all benchmarks
apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'

group = 'org.elasticsearch.client'

@ -44,7 +29,6 @@ build.dependsOn.remove('assemble')
archivesBaseName = 'client-benchmarks'
mainClassName = 'org.elasticsearch.client.benchmark.BenchmarkMain'

// never try to invoke tests on the benchmark project - there aren't any
test.enabled = false

@ -1,7 +1,7 @@
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %m%n

rootLogger.level = info
rootLogger.appenderRef.console.ref = console

@ -18,19 +18,8 @@
 */

import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.XmlProvider
import org.gradle.api.publish.maven.MavenPublication

buildscript {
    repositories {
        maven {
            url 'https://plugins.gradle.org/m2/'
        }
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:2.0.4'
    }
}
import org.elasticsearch.gradle.test.RestIntegTestTask
import org.gradle.api.internal.provider.Providers

apply plugin: 'elasticsearch.build'
apply plugin: 'elasticsearch.rest-test'

@ -41,48 +30,9 @@ apply plugin: 'com.github.johnrengelman.shadow'
group = 'org.elasticsearch.client'
archivesBaseName = 'elasticsearch-rest-high-level-client'

publishing {
    publications {
        nebula(MavenPublication) {
            artifact shadowJar
            artifactId = archivesBaseName
            /*
             * Configure the pom to include the "shadow" as compile dependencies
             * because that is how we're using them but remove all other dependencies
             * because they've been shaded into the jar.
             */
            pom.withXml { XmlProvider xml ->
                Node root = xml.asNode()
                root.remove(root.dependencies)
                Node dependenciesNode = root.appendNode('dependencies')
                project.configurations.shadow.allDependencies.each {
                    if (false == it instanceof SelfResolvingDependency) {
                        Node dependencyNode = dependenciesNode.appendNode('dependency')
                        dependencyNode.appendNode('groupId', it.group)
                        dependencyNode.appendNode('artifactId', it.name)
                        dependencyNode.appendNode('version', it.version)
                        dependencyNode.appendNode('scope', 'compile')
                    }
                }
            }
        }
    }
}

/*
 * We need somewhere to configure dependencies that we don't wish to shade
 * into the high level REST client. The shadow plugin creates a "shadow"
 * configuration which is *almost* exactly that. It is never bundled into
 * the shaded jar but is used for main source compilation. Unfortunately,
 * by default it is not used for *test* source compilation and isn't used
 * in tests at all. This change makes it available for test compilation.
 * A change below makes it available for testing.
 */
sourceSets {
    test {
        compileClasspath += configurations.shadow
    }
}
// we need to copy the yaml spec so we can check naming (see RestHighLevelClientTests#testApiNamingConventions)
Task copyRestSpec = RestIntegTestTask.createCopyRestSpecTask(project, Providers.FALSE)
test.dependsOn(copyRestSpec)

dependencies {
    /*

@ -102,6 +52,8 @@ dependencies {
    testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}"
    testCompile "junit:junit:${versions.junit}"
    testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}"
    // this is needed to make RestHighLevelClientTests#testApiNamingConventions work from IDEs
    testCompile "org.elasticsearch:rest-api-spec:${version}"
}

dependencyLicenses {

@ -119,47 +71,6 @@ forbiddenApisMain {
    signaturesURLs += [file('src/main/resources/forbidden/rest-high-level-signatures.txt').toURI().toURL()]
}

shadowJar {
    classifier = null
    mergeServiceFiles()
}

// We don't need the normal jar; we use the shadow jar instead
jar.enabled = false
assemble.dependsOn shadowJar

javadoc {
    /*
     * Bundle all of the javadoc from all of the shaded projects into this one
     * so we don't *have* to publish javadoc for all of the "client" jars.
     */
    configurations.compile.dependencies.all { Dependency dep ->
        Project p = dependencyToProject(dep)
        if (p != null) {
            evaluationDependsOn(p.path)
            source += p.sourceSets.main.allJava
        }
    }
}

/*
 * Use the jar for testing so we have tests of the bundled jar.
 * Use the "shadow" configuration for testing because we need things
 * in it.
 */
test {
    classpath -= compileJava.outputs.files
    classpath -= configurations.compile
    classpath -= configurations.runtime
    classpath += configurations.shadow
    classpath += shadowJar.outputs.files
    dependsOn shadowJar
}
integTestRunner {
    classpath -= compileJava.outputs.files
    classpath -= configurations.compile
    classpath -= configurations.runtime
    classpath += configurations.shadow
    classpath += shadowJar.outputs.files
    dependsOn shadowJar
integTestCluster {
    setting 'xpack.license.self_generated.type', 'trial'
}

@ -174,7 +174,7 @@ public final class IndicesClient {
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public GetMappingsResponse getMappings(GetMappingsRequest getMappingsRequest, RequestOptions options) throws IOException {
    public GetMappingsResponse getMapping(GetMappingsRequest getMappingsRequest, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(getMappingsRequest, RequestConverters::getMappings, options,
            GetMappingsResponse::fromXContent, emptySet());
    }

@ -187,8 +187,8 @@ public final class IndicesClient {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void getMappingsAsync(GetMappingsRequest getMappingsRequest, RequestOptions options,
                                 ActionListener<GetMappingsResponse> listener) {
    public void getMappingAsync(GetMappingsRequest getMappingsRequest, RequestOptions options,
                                ActionListener<GetMappingsResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(getMappingsRequest, RequestConverters::getMappings, options,
            GetMappingsResponse::fromXContent, listener, emptySet());
    }

@ -474,8 +474,23 @@ public final class IndicesClient {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     * @deprecated use {@link #forcemerge(ForceMergeRequest, RequestOptions)} instead
     */
    @Deprecated
    public ForceMergeResponse forceMerge(ForceMergeRequest forceMergeRequest, RequestOptions options) throws IOException {
        return forcemerge(forceMergeRequest, options);
    }

    /**
     * Force merge one or more indices using the Force Merge API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html">
     * Force Merge API on elastic.co</a>
     * @param forceMergeRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public ForceMergeResponse forcemerge(ForceMergeRequest forceMergeRequest, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(forceMergeRequest, RequestConverters::forceMerge, options,
            ForceMergeResponse::fromXContent, emptySet());
    }

@ -487,8 +502,22 @@ public final class IndicesClient {
     * @param forceMergeRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     * @deprecated use {@link #forcemergeAsync(ForceMergeRequest, RequestOptions, ActionListener)} instead
     */
    @Deprecated
    public void forceMergeAsync(ForceMergeRequest forceMergeRequest, RequestOptions options, ActionListener<ForceMergeResponse> listener) {
        forcemergeAsync(forceMergeRequest, options, listener);
    }

    /**
     * Asynchronously force merge one or more indices using the Force Merge API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html">
     * Force Merge API on elastic.co</a>
     * @param forceMergeRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void forcemergeAsync(ForceMergeRequest forceMergeRequest, RequestOptions options, ActionListener<ForceMergeResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(forceMergeRequest, RequestConverters::forceMerge, options,
            ForceMergeResponse::fromXContent, listener, emptySet());
    }
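
The rename keeps a deprecated alias so call sites can migrate at their own pace. A minimal migration sketch, assuming an existing `RestHighLevelClient` named `client` and a hypothetical index name:

```java
import org.elasticsearch.action.admin.indices.forcemerge.ForceMergeRequest;
import org.elasticsearch.action.admin.indices.forcemerge.ForceMergeResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;

class ForceMergeMigration {
    static ForceMergeResponse forceMergeIndex(RestHighLevelClient client) throws IOException {
        ForceMergeRequest request = new ForceMergeRequest("my-index"); // hypothetical index name

        // Before: deprecated camel-case spelling, still works but is marked @Deprecated.
        // ForceMergeResponse response = client.indices().forceMerge(request, RequestOptions.DEFAULT);

        // After: the new spelling matches the `_forcemerge` REST endpoint.
        return client.indices().forcemerge(request, RequestOptions.DEFAULT);
    }
}
```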

@ -139,7 +139,7 @@ public final class IngestClient {
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public SimulatePipelineResponse simulatePipeline(SimulatePipelineRequest request, RequestOptions options) throws IOException {
    public SimulatePipelineResponse simulate(SimulatePipelineRequest request, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, RequestConverters::simulatePipeline, options,
            SimulatePipelineResponse::fromXContent, emptySet());
    }

@ -154,9 +154,9 @@ public final class IngestClient {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void simulatePipelineAsync(SimulatePipelineRequest request,
                                      RequestOptions options,
                                      ActionListener<SimulatePipelineResponse> listener) {
    public void simulateAsync(SimulatePipelineRequest request,
                              RequestOptions options,
                              ActionListener<SimulatePipelineResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, RequestConverters::simulatePipeline, options,
            SimulatePipelineResponse::fromXContent, listener, emptySet());
    }

@ -0,0 +1,66 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.protocol.xpack.license.PutLicenseRequest;
import org.elasticsearch.protocol.xpack.license.PutLicenseResponse;

import java.io.IOException;

import static java.util.Collections.emptySet;

/**
 * A wrapper for the {@link RestHighLevelClient} that provides methods for
 * accessing the Elastic License-related methods
 * <p>
 * See the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/licensing-apis.html">
 * X-Pack Licensing APIs on elastic.co</a> for more information.
 */
public class LicenseClient {

    private final RestHighLevelClient restHighLevelClient;

    LicenseClient(RestHighLevelClient restHighLevelClient) {
        this.restHighLevelClient = restHighLevelClient;
    }

    /**
     * Updates the license for the cluster.
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public PutLicenseResponse putLicense(PutLicenseRequest request, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, RequestConverters::putLicense, options,
            PutLicenseResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously updates the license for the cluster.
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void putLicenseAsync(PutLicenseRequest request, RequestOptions options, ActionListener<PutLicenseResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, RequestConverters::putLicense, options,
            PutLicenseResponse::fromXContent, listener, emptySet());
    }

}
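
Putting this together, updating a license might look like the sketch below. The `xpack()` accessor and the setter names on `PutLicenseRequest` are assumptions inferred from the converter further down (`getLicenseDefinition()` / `isAcknowledge()`), not confirmed by this change:

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.protocol.xpack.license.PutLicenseRequest;
import org.elasticsearch.protocol.xpack.license.PutLicenseResponse;

import java.io.IOException;

class LicenseUpdateSketch {
    // licenseJson is a hypothetical license body obtained from Elastic.
    static PutLicenseResponse updateLicense(RestHighLevelClient client, String licenseJson) throws IOException {
        PutLicenseRequest request = new PutLicenseRequest();
        request.setLicenseDefinition(licenseJson); // assumed setter paired with getLicenseDefinition()
        request.setAcknowledge(true);              // assumed setter paired with isAcknowledge()
        return client.xpack().license().putLicense(request, RequestOptions.DEFAULT);
    }
}
```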

@ -40,6 +40,7 @@ import org.elasticsearch.action.admin.cluster.settings.ClusterGetSettingsRequest
import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest;
import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;

@ -106,6 +107,10 @@ import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.VersionType;
import org.elasticsearch.index.rankeval.RankEvalRequest;
import org.elasticsearch.protocol.xpack.XPackInfoRequest;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
import org.elasticsearch.protocol.xpack.XPackUsageRequest;
import org.elasticsearch.protocol.xpack.license.PutLicenseRequest;
import org.elasticsearch.rest.action.search.RestSearchAction;
import org.elasticsearch.script.mustache.MultiSearchTemplateRequest;
import org.elasticsearch.script.mustache.SearchTemplateRequest;

@ -978,6 +983,20 @@ final class RequestConverters {
        return request;
    }

    static Request restoreSnapshot(RestoreSnapshotRequest restoreSnapshotRequest) throws IOException {
        String endpoint = new EndpointBuilder().addPathPartAsIs("_snapshot")
            .addPathPart(restoreSnapshotRequest.repository())
            .addPathPart(restoreSnapshotRequest.snapshot())
            .addPathPartAsIs("_restore")
            .build();
        Request request = new Request(HttpPost.METHOD_NAME, endpoint);
        Params parameters = new Params(request);
        parameters.withMasterTimeout(restoreSnapshotRequest.masterNodeTimeout());
        parameters.withWaitForCompletion(restoreSnapshotRequest.waitForCompletion());
        request.setEntity(createEntity(restoreSnapshotRequest, REQUEST_BODY_CONTENT_TYPE));
        return request;
    }

    static Request deleteSnapshot(DeleteSnapshotRequest deleteSnapshotRequest) {
        String endpoint = new EndpointBuilder().addPathPartAsIs("_snapshot")
            .addPathPart(deleteSnapshotRequest.repository())

@ -1096,6 +1115,56 @@ final class RequestConverters {
        return request;
    }

    static Request xPackWatcherPutWatch(PutWatchRequest putWatchRequest) {
        String endpoint = new EndpointBuilder()
            .addPathPartAsIs("_xpack")
            .addPathPartAsIs("watcher")
            .addPathPartAsIs("watch")
            .addPathPart(putWatchRequest.getId())
            .build();

        Request request = new Request(HttpPut.METHOD_NAME, endpoint);
        Params params = new Params(request).withVersion(putWatchRequest.getVersion());
        if (putWatchRequest.isActive() == false) {
            params.putParam("active", "false");
        }
        ContentType contentType = createContentType(putWatchRequest.xContentType());
        BytesReference source = putWatchRequest.getSource();
        request.setEntity(new ByteArrayEntity(source.toBytesRef().bytes, 0, source.length(), contentType));
        return request;
    }

    static Request xPackWatcherDeleteWatch(DeleteWatchRequest deleteWatchRequest) {
        String endpoint = new EndpointBuilder()
            .addPathPartAsIs("_xpack")
            .addPathPartAsIs("watcher")
            .addPathPartAsIs("watch")
            .addPathPart(deleteWatchRequest.getId())
            .build();

        Request request = new Request(HttpDelete.METHOD_NAME, endpoint);
        return request;
    }

    static Request xpackUsage(XPackUsageRequest usageRequest) {
        Request request = new Request(HttpGet.METHOD_NAME, "/_xpack/usage");
        Params parameters = new Params(request);
        parameters.withMasterTimeout(usageRequest.masterNodeTimeout());
        return request;
    }

    static Request putLicense(PutLicenseRequest putLicenseRequest) {
        Request request = new Request(HttpPut.METHOD_NAME, "/_xpack/license");
        Params parameters = new Params(request);
        parameters.withTimeout(putLicenseRequest.timeout());
        parameters.withMasterTimeout(putLicenseRequest.masterNodeTimeout());
        if (putLicenseRequest.isAcknowledge()) {
            parameters.putParam("acknowledge", "true");
        }
        request.setJsonEntity(putLicenseRequest.getLicenseDefinition());
        return request;
    }

    private static HttpEntity createEntity(ToXContent toXContent, XContentType xContentType) throws IOException {
        BytesRef source = XContentHelper.toXContent(toXContent, xContentType, false).toBytesRef();
        return new ByteArrayEntity(source.bytes, source.offset, source.length, createContentType(xContentType));

@ -85,8 +85,10 @@ import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBu
import org.elasticsearch.search.aggregations.bucket.geogrid.ParsedGeoHashGrid;
import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.global.ParsedGlobal;
import org.elasticsearch.search.aggregations.bucket.histogram.AutoDateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.ParsedAutoDateHistogram;
import org.elasticsearch.search.aggregations.bucket.histogram.ParsedDateHistogram;
import org.elasticsearch.search.aggregations.bucket.histogram.ParsedHistogram;
import org.elasticsearch.search.aggregations.bucket.missing.MissingAggregationBuilder;

@ -382,8 +384,23 @@ public class RestHighLevelClient implements Closeable {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     * @deprecated use {@link #mget(MultiGetRequest, RequestOptions)} instead
     */
    @Deprecated
    public final MultiGetResponse multiGet(MultiGetRequest multiGetRequest, RequestOptions options) throws IOException {
        return mget(multiGetRequest, options);
    }

    /**
     * Retrieves multiple documents by id using the Multi Get API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-multi-get.html">Multi Get API on elastic.co</a>
     * @param multiGetRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public final MultiGetResponse mget(MultiGetRequest multiGetRequest, RequestOptions options) throws IOException {
        return performRequestAndParseEntity(multiGetRequest, RequestConverters::multiGet, options, MultiGetResponse::fromXContent,
            singleton(404));
    }

@ -394,8 +411,21 @@ public class RestHighLevelClient implements Closeable {
     * @param multiGetRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     * @deprecated use {@link #mgetAsync(MultiGetRequest, RequestOptions, ActionListener)} instead
     */
    @Deprecated
    public final void multiGetAsync(MultiGetRequest multiGetRequest, RequestOptions options, ActionListener<MultiGetResponse> listener) {
        mgetAsync(multiGetRequest, options, listener);
    }

    /**
     * Asynchronously retrieves multiple documents by id using the Multi Get API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-multi-get.html">Multi Get API on elastic.co</a>
     * @param multiGetRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public final void mgetAsync(MultiGetRequest multiGetRequest, RequestOptions options, ActionListener<MultiGetResponse> listener) {
        performRequestAsyncAndParseEntity(multiGetRequest, RequestConverters::multiGet, options, MultiGetResponse::fromXContent, listener,
            singleton(404));
    }
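
For illustration, the renamed methods slot into calling code like the sketch below (the request setup mirrors the tests further down; `client` is an existing `RestHighLevelClient`):

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class MgetSketch {
    static void fetchBoth(RestHighLevelClient client) {
        MultiGetRequest request = new MultiGetRequest();
        request.add("index", "type", "id1");
        request.add("index", "type", "id2");

        // Renamed from multiGetAsync(); the synchronous twin is now mget().
        client.mgetAsync(request, RequestOptions.DEFAULT, new ActionListener<MultiGetResponse>() {
            @Override
            public void onResponse(MultiGetResponse response) {
                // inspect response.getResponses() here
            }

            @Override
            public void onFailure(Exception e) {
                // handle the failure here
            }
        });
    }
}
```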

@ -529,8 +559,23 @@ public class RestHighLevelClient implements Closeable {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     * @deprecated use {@link #msearch(MultiSearchRequest, RequestOptions)} instead
     */
    @Deprecated
    public final MultiSearchResponse multiSearch(MultiSearchRequest multiSearchRequest, RequestOptions options) throws IOException {
        return msearch(multiSearchRequest, options);
    }

    /**
     * Executes a multi search using the msearch API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html">Multi search API on
     * elastic.co</a>
     * @param multiSearchRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public final MultiSearchResponse msearch(MultiSearchRequest multiSearchRequest, RequestOptions options) throws IOException {
        return performRequestAndParseEntity(multiSearchRequest, RequestConverters::multiSearch, options, MultiSearchResponse::fromXContext,
            emptySet());
    }

@ -542,9 +587,24 @@ public class RestHighLevelClient implements Closeable {
     * @param searchRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     * @deprecated use {@link #msearchAsync(MultiSearchRequest, RequestOptions, ActionListener)} instead
     */
    @Deprecated
    public final void multiSearchAsync(MultiSearchRequest searchRequest, RequestOptions options,
                                       ActionListener<MultiSearchResponse> listener) {
                                       ActionListener<MultiSearchResponse> listener) {
        msearchAsync(searchRequest, options, listener);
    }

    /**
     * Asynchronously executes a multi search using the msearch API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html">Multi search API on
     * elastic.co</a>
     * @param searchRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public final void msearchAsync(MultiSearchRequest searchRequest, RequestOptions options,
                                   ActionListener<MultiSearchResponse> listener) {
        performRequestAsyncAndParseEntity(searchRequest, RequestConverters::multiSearch, options, MultiSearchResponse::fromXContext,
            listener, emptySet());
    }

@ -557,8 +617,23 @@ public class RestHighLevelClient implements Closeable {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     * @deprecated use {@link #scroll(SearchScrollRequest, RequestOptions)} instead
     */
    @Deprecated
    public final SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, RequestOptions options) throws IOException {
        return scroll(searchScrollRequest, options);
    }

    /**
     * Executes a search using the Search Scroll API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html">Search Scroll
     * API on elastic.co</a>
     * @param searchScrollRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public final SearchResponse scroll(SearchScrollRequest searchScrollRequest, RequestOptions options) throws IOException {
        return performRequestAndParseEntity(searchScrollRequest, RequestConverters::searchScroll, options, SearchResponse::fromXContent,
            emptySet());
    }

@ -570,9 +645,24 @@ public class RestHighLevelClient implements Closeable {
     * @param searchScrollRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     * @deprecated use {@link #scrollAsync(SearchScrollRequest, RequestOptions, ActionListener)} instead
     */
    @Deprecated
    public final void searchScrollAsync(SearchScrollRequest searchScrollRequest, RequestOptions options,
                                        ActionListener<SearchResponse> listener) {
                                        ActionListener<SearchResponse> listener) {
        scrollAsync(searchScrollRequest, options, listener);
    }

    /**
     * Asynchronously executes a search using the Search Scroll API.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html">Search Scroll
     * API on elastic.co</a>
     * @param searchScrollRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public final void scrollAsync(SearchScrollRequest searchScrollRequest, RequestOptions options,
                                  ActionListener<SearchResponse> listener) {
        performRequestAsyncAndParseEntity(searchScrollRequest, RequestConverters::searchScroll, options, SearchResponse::fromXContent,
            listener, emptySet());
    }
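
A typical scroll loop with the renamed methods might look like the following sketch (`client` is an existing `RestHighLevelClient`, and `searchRequest` is assumed to have been created with a scroll keep-alive already set):

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;

import java.io.IOException;

class ScrollSketch {
    static void drain(RestHighLevelClient client, SearchRequest searchRequest) throws IOException {
        SearchResponse response = client.search(searchRequest, RequestOptions.DEFAULT);
        String scrollId = response.getScrollId();

        while (response.getHits().getHits().length > 0) {
            SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
            scrollRequest.scroll(TimeValue.timeValueMinutes(1L)); // keep the search context alive
            response = client.scroll(scrollRequest, RequestOptions.DEFAULT); // formerly searchScroll()
            scrollId = response.getScrollId();
        }
        // A real caller should also clear the scroll context when done.
    }
}
```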

@ -689,8 +779,8 @@ public class RestHighLevelClient implements Closeable {
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-search-template.html">Multi Search Template API
     * on elastic.co</a>.
     */
    public final MultiSearchTemplateResponse multiSearchTemplate(MultiSearchTemplateRequest multiSearchTemplateRequest,
                                                                 RequestOptions options) throws IOException {
    public final MultiSearchTemplateResponse msearchTemplate(MultiSearchTemplateRequest multiSearchTemplateRequest,
                                                             RequestOptions options) throws IOException {
        return performRequestAndParseEntity(multiSearchTemplateRequest, RequestConverters::multiSearchTemplate,
            options, MultiSearchTemplateResponse::fromXContext, emptySet());
    }

@ -701,9 +791,9 @@ public class RestHighLevelClient implements Closeable {
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-search-template.html">Multi Search Template API
     * on elastic.co</a>.
     */
    public final void multiSearchTemplateAsync(MultiSearchTemplateRequest multiSearchTemplateRequest,
                                               RequestOptions options,
                                               ActionListener<MultiSearchTemplateResponse> listener) {
    public final void msearchTemplateAsync(MultiSearchTemplateRequest multiSearchTemplateRequest,
                                           RequestOptions options,
                                           ActionListener<MultiSearchTemplateResponse> listener) {
        performRequestAsyncAndParseEntity(multiSearchTemplateRequest, RequestConverters::multiSearchTemplate,
            options, MultiSearchTemplateResponse::fromXContext, listener, emptySet());
    }

@ -1004,6 +1094,7 @@ public class RestHighLevelClient implements Closeable {
        map.put(GeoCentroidAggregationBuilder.NAME, (p, c) -> ParsedGeoCentroid.fromXContent(p, (String) c));
        map.put(HistogramAggregationBuilder.NAME, (p, c) -> ParsedHistogram.fromXContent(p, (String) c));
        map.put(DateHistogramAggregationBuilder.NAME, (p, c) -> ParsedDateHistogram.fromXContent(p, (String) c));
        map.put(AutoDateHistogramAggregationBuilder.NAME, (p, c) -> ParsedAutoDateHistogram.fromXContent(p, (String) c));
        map.put(StringTerms.NAME, (p, c) -> ParsedStringTerms.fromXContent(p, (String) c));
        map.put(LongTerms.NAME, (p, c) -> ParsedLongTerms.fromXContent(p, (String) c));
        map.put(DoubleTerms.NAME, (p, c) -> ParsedDoubleTerms.fromXContent(p, (String) c));

@ -30,6 +30,8 @@ import org.elasticsearch.action.admin.cluster.repositories.verify.VerifyReposito
import org.elasticsearch.action.admin.cluster.repositories.verify.VerifyRepositoryResponse;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest;
import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;

@ -63,7 +65,7 @@ public final class SnapshotClient {
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public GetRepositoriesResponse getRepositories(GetRepositoriesRequest getRepositoriesRequest, RequestOptions options)
    public GetRepositoriesResponse getRepository(GetRepositoriesRequest getRepositoriesRequest, RequestOptions options)
            throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(getRepositoriesRequest, RequestConverters::getRepositories, options,
            GetRepositoriesResponse::fromXContent, emptySet());

@ -78,8 +80,8 @@ public final class SnapshotClient {
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void getRepositoriesAsync(GetRepositoriesRequest getRepositoriesRequest, RequestOptions options,
                                     ActionListener<GetRepositoriesResponse> listener) {
    public void getRepositoryAsync(GetRepositoriesRequest getRepositoriesRequest, RequestOptions options,
                                   ActionListener<GetRepositoriesResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(getRepositoriesRequest, RequestConverters::getRepositories, options,
            GetRepositoriesResponse::fromXContent, listener, emptySet());
    }

@ -176,7 +178,7 @@ public final class SnapshotClient {
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html"> Snapshot and Restore
     * API on elastic.co</a>
     */
    public CreateSnapshotResponse createSnapshot(CreateSnapshotRequest createSnapshotRequest, RequestOptions options)
    public CreateSnapshotResponse create(CreateSnapshotRequest createSnapshotRequest, RequestOptions options)
            throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(createSnapshotRequest, RequestConverters::createSnapshot, options,
            CreateSnapshotResponse::fromXContent, emptySet());

@ -188,7 +190,7 @@ public final class SnapshotClient {
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html"> Snapshot and Restore
     * API on elastic.co</a>
     */
    public void createSnapshotAsync(CreateSnapshotRequest createSnapshotRequest, RequestOptions options,
    public void createAsync(CreateSnapshotRequest createSnapshotRequest, RequestOptions options,
                            ActionListener<CreateSnapshotResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(createSnapshotRequest, RequestConverters::createSnapshot, options,
            CreateSnapshotResponse::fromXContent, listener, emptySet());

@ -252,6 +254,36 @@ public final class SnapshotClient {
            SnapshotsStatusResponse::fromXContent, listener, emptySet());
    }

    /**
     * Restores a snapshot.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html"> Snapshot and Restore
     * API on elastic.co</a>
     *
     * @param restoreSnapshotRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public RestoreSnapshotResponse restore(RestoreSnapshotRequest restoreSnapshotRequest, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(restoreSnapshotRequest, RequestConverters::restoreSnapshot, options,
            RestoreSnapshotResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously restores a snapshot.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html"> Snapshot and Restore
     * API on elastic.co</a>
     *
     * @param restoreSnapshotRequest the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void restoreAsync(RestoreSnapshotRequest restoreSnapshotRequest, RequestOptions options,
                             ActionListener<RestoreSnapshotResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(restoreSnapshotRequest, RequestConverters::restoreSnapshot, options,
            RestoreSnapshotResponse::fromXContent, listener, emptySet());
    }

    /**
     * Deletes a snapshot.
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html"> Snapshot and Restore

@ -0,0 +1,94 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.client;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchResponse;
import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.PutWatchResponse;

import java.io.IOException;

import static java.util.Collections.emptySet;
import static java.util.Collections.singleton;

public final class WatcherClient {

    private final RestHighLevelClient restHighLevelClient;

    WatcherClient(RestHighLevelClient restHighLevelClient) {
        this.restHighLevelClient = restHighLevelClient;
    }

    /**
     * Put a watch into the cluster
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/watcher-api-put-watch.html">
     * the docs</a> for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public PutWatchResponse putWatch(PutWatchRequest request, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, RequestConverters::xPackWatcherPutWatch, options,
            PutWatchResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously put a watch into the cluster
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/watcher-api-put-watch.html">
     * the docs</a> for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void putWatchAsync(PutWatchRequest request, RequestOptions options,
                              ActionListener<PutWatchResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, RequestConverters::xPackWatcherPutWatch, options,
            PutWatchResponse::fromXContent, listener, emptySet());
    }

    /**
     * Deletes a watch from the cluster
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/watcher-api-delete-watch.html">
     * the docs</a> for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public DeleteWatchResponse deleteWatch(DeleteWatchRequest request, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, RequestConverters::xPackWatcherDeleteWatch, options,
            DeleteWatchResponse::fromXContent, singleton(404));
    }

    /**
     * Asynchronously deletes a watch from the cluster
     * See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/watcher-api-delete-watch.html">
     * the docs</a> for more.
     * @param request the request
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void deleteWatchAsync(DeleteWatchRequest request, RequestOptions options, ActionListener<DeleteWatchResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, RequestConverters::xPackWatcherDeleteWatch, options,
            DeleteWatchResponse::fromXContent, listener, singleton(404));
    }
}
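
Wired into `XPackClient` below, the watcher methods might be used like this sketch. Only the `putWatch`/`deleteWatch` entry points are established by this change; the `xpack()` accessor, the `PutWatchRequest`/`DeleteWatchRequest` constructor shapes, and the watch body are assumptions:

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchResponse;
import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.PutWatchResponse;

import java.io.IOException;

class WatcherSketch {
    static void roundTrip(RestHighLevelClient client) throws IOException {
        // Hypothetical watch body; a real watch also defines input, condition and actions.
        BytesArray source = new BytesArray("{\"trigger\":{\"schedule\":{\"interval\":\"10s\"}}}");

        // Assumed constructor shape (id, source, content type).
        PutWatchRequest put = new PutWatchRequest("my_watch_id", source, XContentType.JSON);
        PutWatchResponse putResponse = client.xpack().watcher().putWatch(put, RequestOptions.DEFAULT);

        // deleteWatch() tolerates 404s, per the singleton(404) above.
        DeleteWatchResponse deleted =
            client.xpack().watcher().deleteWatch(new DeleteWatchRequest("my_watch_id"), RequestOptions.DEFAULT);
    }
}
```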

@ -22,6 +22,8 @@ package org.elasticsearch.client;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.protocol.xpack.XPackInfoRequest;
import org.elasticsearch.protocol.xpack.XPackInfoResponse;
import org.elasticsearch.protocol.xpack.XPackUsageRequest;
import org.elasticsearch.protocol.xpack.XPackUsageResponse;

import java.io.IOException;

@ -37,10 +39,19 @@ import static java.util.Collections.emptySet;
 * X-Pack APIs on elastic.co</a> for more information.
 */
public final class XPackClient {

    private final RestHighLevelClient restHighLevelClient;
    private final WatcherClient watcherClient;
    private final LicenseClient licenseClient;

    XPackClient(RestHighLevelClient restHighLevelClient) {
        this.restHighLevelClient = restHighLevelClient;
        this.watcherClient = new WatcherClient(restHighLevelClient);
        this.licenseClient = new LicenseClient(restHighLevelClient);
    }

    public WatcherClient watcher() {
        return watcherClient;
    }

    /**

@ -70,4 +81,36 @@ public final class XPackClient {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, RequestConverters::xPackInfo, options,
            XPackInfoResponse::fromXContent, listener, emptySet());
    }

    /**
     * Fetch usage information about X-Pack features from the cluster.
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @return the response
     * @throws IOException in case there is a problem sending the request or parsing back the response
     */
    public XPackUsageResponse usage(XPackUsageRequest request, RequestOptions options) throws IOException {
        return restHighLevelClient.performRequestAndParseEntity(request, RequestConverters::xpackUsage, options,
            XPackUsageResponse::fromXContent, emptySet());
    }

    /**
     * Asynchronously fetch usage information about X-Pack features from the cluster.
     * @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
     * @param listener the listener to be notified upon request completion
     */
    public void usageAsync(XPackUsageRequest request, RequestOptions options, ActionListener<XPackUsageResponse> listener) {
        restHighLevelClient.performRequestAsyncAndParseEntity(request, RequestConverters::xpackUsage, options,
            XPackUsageResponse::fromXContent, listener, emptySet());
    }

    /**
     * A wrapper for the {@link RestHighLevelClient} that provides methods for
     * accessing the Elastic Licensing APIs.
     * <p>
     * See the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/licensing-apis.html">
     * X-Pack APIs on elastic.co</a> for more information.
     */
    public LicenseClient license() {
        return licenseClient;
    }
}
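
A sketch of the new entry points in use; only `usage()`, `watcher()`, and `license()` are established by this change, while the `xpack()` accessor on `RestHighLevelClient` is an assumption:

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.protocol.xpack.XPackUsageRequest;
import org.elasticsearch.protocol.xpack.XPackUsageResponse;

import java.io.IOException;

class XPackSketch {
    static XPackUsageResponse featureUsage(RestHighLevelClient client) throws IOException {
        // watcher() and license() hang off the same sub-client:
        //   client.xpack().watcher() / client.xpack().license()
        return client.xpack().usage(new XPackUsageRequest(), RequestOptions.DEFAULT);
    }
}
```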

@ -79,7 +79,7 @@ public class BulkProcessorIT extends ESRestHighLevelClientTestCase {
            assertThat(listener.afterCounts.get(), equalTo(1));
            assertThat(listener.bulkFailures.size(), equalTo(0));
            assertResponseItems(listener.bulkItems, numDocs);
            assertMultiGetResponse(highLevelClient().multiGet(multiGetRequest, RequestOptions.DEFAULT), numDocs);
            assertMultiGetResponse(highLevelClient().mget(multiGetRequest, RequestOptions.DEFAULT), numDocs);
        }
    }

@ -105,7 +105,7 @@ public class BulkProcessorIT extends ESRestHighLevelClientTestCase {
            assertThat(listener.afterCounts.get(), equalTo(1));
            assertThat(listener.bulkFailures.size(), equalTo(0));
            assertResponseItems(listener.bulkItems, numDocs);
            assertMultiGetResponse(highLevelClient().multiGet(multiGetRequest, RequestOptions.DEFAULT), numDocs);
            assertMultiGetResponse(highLevelClient().mget(multiGetRequest, RequestOptions.DEFAULT), numDocs);
        }
    }

@ -157,7 +157,7 @@ public class BulkProcessorIT extends ESRestHighLevelClientTestCase {
            assertThat(ids.add(bulkItemResponse.getId()), equalTo(true));
        }

        assertMultiGetResponse(highLevelClient().multiGet(multiGetRequest, RequestOptions.DEFAULT), numDocs);
        assertMultiGetResponse(highLevelClient().mget(multiGetRequest, RequestOptions.DEFAULT), numDocs);
    }

    public void testBulkProcessorWaitOnClose() throws Exception {

@ -188,7 +188,7 @@ public class BulkProcessorIT extends ESRestHighLevelClientTestCase {
        }
        assertThat(listener.bulkFailures.size(), equalTo(0));
        assertResponseItems(listener.bulkItems, numDocs);
        assertMultiGetResponse(highLevelClient().multiGet(multiGetRequest, RequestOptions.DEFAULT), numDocs);
        assertMultiGetResponse(highLevelClient().mget(multiGetRequest, RequestOptions.DEFAULT), numDocs);
    }

    public void testBulkProcessorConcurrentRequestsReadOnlyIndex() throws Exception {

@ -265,7 +265,7 @@ public class BulkProcessorIT extends ESRestHighLevelClientTestCase {
            }
        }

        assertMultiGetResponse(highLevelClient().multiGet(multiGetRequest, RequestOptions.DEFAULT), testDocs);
        assertMultiGetResponse(highLevelClient().mget(multiGetRequest, RequestOptions.DEFAULT), testDocs);
    }

    private static MultiGetRequest indexDocs(BulkProcessor processor, int numDocs) throws Exception {

@ -129,7 +129,7 @@ public class BulkProcessorRetryIT extends ESRestHighLevelClientTestCase {
        }

        highLevelClient().indices().refresh(new RefreshRequest(), RequestOptions.DEFAULT);
        int multiGetResponsesCount = highLevelClient().multiGet(multiGetRequest, RequestOptions.DEFAULT).getResponses().length;
        int multiGetResponsesCount = highLevelClient().mget(multiGetRequest, RequestOptions.DEFAULT).getResponses().length;

        if (rejectedExecutionExpected) {
            assertThat(multiGetResponsesCount, lessThanOrEqualTo(numberOfAsyncOps));

@ -253,7 +253,7 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
        MultiGetRequest multiGetRequest = new MultiGetRequest();
        multiGetRequest.add("index", "type", "id1");
        multiGetRequest.add("index", "type", "id2");
        MultiGetResponse response = execute(multiGetRequest, highLevelClient()::multiGet, highLevelClient()::multiGetAsync);
        MultiGetResponse response = execute(multiGetRequest, highLevelClient()::mget, highLevelClient()::mgetAsync);
        assertEquals(2, response.getResponses().length);

        assertTrue(response.getResponses()[0].isFailed());

@ -285,7 +285,7 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
        MultiGetRequest multiGetRequest = new MultiGetRequest();
        multiGetRequest.add("index", "type", "id1");
        multiGetRequest.add("index", "type", "id2");
        MultiGetResponse response = execute(multiGetRequest, highLevelClient()::multiGet, highLevelClient()::multiGetAsync);
        MultiGetResponse response = execute(multiGetRequest, highLevelClient()::mget, highLevelClient()::mgetAsync);
        assertEquals(2, response.getResponses().length);

        assertFalse(response.getResponses()[0].isFailed());

@ -121,7 +121,7 @@ public class CustomRestHighLevelClientTests extends ESTestCase {
     * so that they can be used by subclasses to implement custom logic.
     */
    @SuppressForbidden(reason = "We're forced to use Class#getDeclaredMethods() here because this test checks protected methods")
    public void testMethodsVisibility() throws ClassNotFoundException {
    public void testMethodsVisibility() {
        final String[] methodNames = new String[]{"parseEntity",
                                                  "parseResponseException",
                                                  "performRequest",

@ -443,7 +443,7 @@ public class IndicesClientIT extends ESRestHighLevelClientTestCase {
            .types("_doc");

        GetMappingsResponse getMappingsResponse =
            execute(request, highLevelClient().indices()::getMappings, highLevelClient().indices()::getMappingsAsync);
            execute(request, highLevelClient().indices()::getMapping, highLevelClient().indices()::getMappingAsync);

        Map<String, Object> mappings = getMappingsResponse.getMappings().get(indexName).get("_doc").sourceAsMap();
        Map<String, String> type = new HashMap<>();

@ -796,7 +796,7 @@ public class IndicesClientIT extends ESRestHighLevelClientTestCase {
        createIndex(index, settings);
        ForceMergeRequest forceMergeRequest = new ForceMergeRequest(index);
        ForceMergeResponse forceMergeResponse =
            execute(forceMergeRequest, highLevelClient().indices()::forceMerge, highLevelClient().indices()::forceMergeAsync);
            execute(forceMergeRequest, highLevelClient().indices()::forcemerge, highLevelClient().indices()::forcemergeAsync);
        assertThat(forceMergeResponse.getTotalShards(), equalTo(1));
        assertThat(forceMergeResponse.getSuccessfulShards(), equalTo(1));
        assertThat(forceMergeResponse.getFailedShards(), equalTo(0));

@ -807,7 +807,7 @@ public class IndicesClientIT extends ESRestHighLevelClientTestCase {
        assertFalse(indexExists(nonExistentIndex));
        ForceMergeRequest forceMergeRequest = new ForceMergeRequest(nonExistentIndex);
        ElasticsearchException exception = expectThrows(ElasticsearchException.class,
            () -> execute(forceMergeRequest, highLevelClient().indices()::forceMerge, highLevelClient().indices()::forceMergeAsync));
            () -> execute(forceMergeRequest, highLevelClient().indices()::forcemerge, highLevelClient().indices()::forcemergeAsync));
        assertEquals(RestStatus.NOT_FOUND, exception.status());
    }
}

@ -135,7 +135,7 @@ public class IngestClientIT extends ESRestHighLevelClientTestCase {
        );
        request.setVerbose(isVerbose);
        SimulatePipelineResponse response =
            execute(request, highLevelClient().ingest()::simulatePipeline, highLevelClient().ingest()::simulatePipelineAsync);
            execute(request, highLevelClient().ingest()::simulate, highLevelClient().ingest()::simulateAsync);
        List<SimulateDocumentResult> results = response.getResults();
        assertEquals(1, results.size());
        if (isVerbose) {

@ -66,13 +66,13 @@ public class PingAndInfoIT extends ESRestHighLevelClientTestCase {

        assertEquals(mainResponse.getBuild().shortHash(), info.getBuildInfo().getHash());

        assertEquals("basic", info.getLicenseInfo().getType());
        assertEquals("basic", info.getLicenseInfo().getMode());
        assertEquals("trial", info.getLicenseInfo().getType());
        assertEquals("trial", info.getLicenseInfo().getMode());
        assertEquals(LicenseStatus.ACTIVE, info.getLicenseInfo().getStatus());

        FeatureSet graph = info.getFeatureSetsInfo().getFeatureSets().get("graph");
        assertNotNull(graph.description());
        assertFalse(graph.available());
        assertTrue(graph.available());
        assertTrue(graph.enabled());
        assertNull(graph.nativeCodeInfo());
        FeatureSet monitoring = info.getFeatureSetsInfo().getFeatureSets().get("monitoring");

@ -82,7 +82,7 @@ public class PingAndInfoIT extends ESRestHighLevelClientTestCase {
        assertNull(monitoring.nativeCodeInfo());
        FeatureSet ml = info.getFeatureSetsInfo().getFeatureSets().get("ml");
        assertNotNull(ml.description());
        assertFalse(ml.available());
        assertTrue(ml.available());
        assertTrue(ml.enabled());
        assertEquals(mainResponse.getVersion().toString(),
            ml.nativeCodeInfo().get("version").toString().replace("-SNAPSHOT", ""));
@@ -22,7 +22,11 @@ package org.elasticsearch.client;
 import org.elasticsearch.action.search.SearchRequest;
 import org.elasticsearch.action.support.IndicesOptions;
 import org.elasticsearch.index.query.MatchAllQueryBuilder;
+import org.elasticsearch.index.rankeval.DiscountedCumulativeGain;
 import org.elasticsearch.index.rankeval.EvalQueryQuality;
+import org.elasticsearch.index.rankeval.EvaluationMetric;
+import org.elasticsearch.index.rankeval.ExpectedReciprocalRank;
+import org.elasticsearch.index.rankeval.MeanReciprocalRank;
 import org.elasticsearch.index.rankeval.PrecisionAtK;
 import org.elasticsearch.index.rankeval.RankEvalRequest;
 import org.elasticsearch.index.rankeval.RankEvalResponse;
@@ -35,12 +39,14 @@ import org.junit.Before;

 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
+import java.util.function.Supplier;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;

-import static org.elasticsearch.index.rankeval.EvaluationMetric.filterUnknownDocuments;
+import static org.elasticsearch.index.rankeval.EvaluationMetric.filterUnratedDocuments;

 public class RankEvalIT extends ESRestHighLevelClientTestCase {

@@ -64,15 +70,7 @@ public class RankEvalIT extends ESRestHighLevelClientTestCase {
     * calculation where all unlabeled documents are treated as not relevant.
     */
    public void testRankEvalRequest() throws IOException {
-        SearchSourceBuilder testQuery = new SearchSourceBuilder();
-        testQuery.query(new MatchAllQueryBuilder());
-        List<RatedDocument> amsterdamRatedDocs = createRelevant("index" , "amsterdam1", "amsterdam2", "amsterdam3", "amsterdam4");
-        amsterdamRatedDocs.addAll(createRelevant("index2", "amsterdam0"));
-        RatedRequest amsterdamRequest = new RatedRequest("amsterdam_query", amsterdamRatedDocs, testQuery);
-        RatedRequest berlinRequest = new RatedRequest("berlin_query", createRelevant("index", "berlin"), testQuery);
-        List<RatedRequest> specifications = new ArrayList<>();
-        specifications.add(amsterdamRequest);
-        specifications.add(berlinRequest);
+        List<RatedRequest> specifications = createTestEvaluationSpec();
        PrecisionAtK metric = new PrecisionAtK(1, false, 10);
        RankEvalSpec spec = new RankEvalSpec(specifications, metric);

@@ -80,11 +78,11 @@ public class RankEvalIT extends ESRestHighLevelClientTestCase {
        RankEvalResponse response = execute(rankEvalRequest, highLevelClient()::rankEval, highLevelClient()::rankEvalAsync);
        // the expected Prec@ for the first query is 5/7 and the expected Prec@ for the second is 1/7, divided by 2 to get the average
        double expectedPrecision = (1.0 / 7.0 + 5.0 / 7.0) / 2.0;
-        assertEquals(expectedPrecision, response.getEvaluationResult(), Double.MIN_VALUE);
+        assertEquals(expectedPrecision, response.getMetricScore(), Double.MIN_VALUE);
        Map<String, EvalQueryQuality> partialResults = response.getPartialResults();
        assertEquals(2, partialResults.size());
        EvalQueryQuality amsterdamQueryQuality = partialResults.get("amsterdam_query");
-        assertEquals(2, filterUnknownDocuments(amsterdamQueryQuality.getHitsAndRatings()).size());
+        assertEquals(2, filterUnratedDocuments(amsterdamQueryQuality.getHitsAndRatings()).size());
        List<RatedSearchHit> hitsAndRatings = amsterdamQueryQuality.getHitsAndRatings();
        assertEquals(7, hitsAndRatings.size());
        for (RatedSearchHit hit : hitsAndRatings) {
@@ -96,7 +94,7 @@ public class RankEvalIT extends ESRestHighLevelClientTestCase {
            }
        }
        EvalQueryQuality berlinQueryQuality = partialResults.get("berlin_query");
-        assertEquals(6, filterUnknownDocuments(berlinQueryQuality.getHitsAndRatings()).size());
+        assertEquals(6, filterUnratedDocuments(berlinQueryQuality.getHitsAndRatings()).size());
        hitsAndRatings = berlinQueryQuality.getHitsAndRatings();
        assertEquals(7, hitsAndRatings.size());
        for (RatedSearchHit hit : hitsAndRatings) {
@@ -114,6 +112,38 @@ public class RankEvalIT extends ESRestHighLevelClientTestCase {
        response = execute(rankEvalRequest, highLevelClient()::rankEval, highLevelClient()::rankEvalAsync);
    }

+    private static List<RatedRequest> createTestEvaluationSpec() {
+        SearchSourceBuilder testQuery = new SearchSourceBuilder();
+        testQuery.query(new MatchAllQueryBuilder());
+        List<RatedDocument> amsterdamRatedDocs = createRelevant("index" , "amsterdam1", "amsterdam2", "amsterdam3", "amsterdam4");
+        amsterdamRatedDocs.addAll(createRelevant("index2", "amsterdam0"));
+        RatedRequest amsterdamRequest = new RatedRequest("amsterdam_query", amsterdamRatedDocs, testQuery);
+        RatedRequest berlinRequest = new RatedRequest("berlin_query", createRelevant("index", "berlin"), testQuery);
+        List<RatedRequest> specifications = new ArrayList<>();
+        specifications.add(amsterdamRequest);
+        specifications.add(berlinRequest);
+        return specifications;
+    }
+
+    /**
+     * Test case checks that the default metrics are registered and usable
+     */
+    public void testMetrics() throws IOException {
+        List<RatedRequest> specifications = createTestEvaluationSpec();
+        List<Supplier<EvaluationMetric>> metrics = Arrays.asList(PrecisionAtK::new, MeanReciprocalRank::new, DiscountedCumulativeGain::new,
+            () -> new ExpectedReciprocalRank(1));
+        double expectedScores[] = new double[] {0.4285714285714286, 0.75, 1.6408962261063627, 0.4407738095238095};
+        int i = 0;
+        for (Supplier<EvaluationMetric> metricSupplier : metrics) {
+            RankEvalSpec spec = new RankEvalSpec(specifications, metricSupplier.get());
+
+            RankEvalRequest rankEvalRequest = new RankEvalRequest(spec, new String[] { "index", "index2" });
+            RankEvalResponse response = execute(rankEvalRequest, highLevelClient()::rankEval, highLevelClient()::rankEvalAsync);
+            assertEquals(expectedScores[i], response.getMetricScore(), Double.MIN_VALUE);
+            i++;
+        }
+    }
+
    private static List<RatedDocument> createRelevant(String indexName, String... docs) {
        return Stream.of(docs).map(s -> new RatedDocument(indexName, s, 1)).collect(Collectors.toList());
    }

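The refactoring above extracts `createTestEvaluationSpec()` so the new `testMetrics` can reuse it, while `getEvaluationResult()`/`filterUnknownDocuments()` become `getMetricScore()`/`filterUnratedDocuments()`. A compact sketch of issuing a ranking evaluation with the renamed accessors; index and document names are placeholders:

```java
import java.util.Arrays;
import java.util.List;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.rankeval.PrecisionAtK;
import org.elasticsearch.index.rankeval.RankEvalRequest;
import org.elasticsearch.index.rankeval.RankEvalResponse;
import org.elasticsearch.index.rankeval.RankEvalSpec;
import org.elasticsearch.index.rankeval.RatedDocument;
import org.elasticsearch.index.rankeval.RatedRequest;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// One rated query: document "doc1" in "index" is considered relevant (rating 1)
SearchSourceBuilder query = new SearchSourceBuilder().query(new MatchAllQueryBuilder());
List<RatedDocument> ratedDocs = Arrays.asList(new RatedDocument("index", "doc1", 1));
RatedRequest ratedRequest = new RatedRequest("my_query", ratedDocs, query);

RankEvalSpec spec = new RankEvalSpec(Arrays.asList(ratedRequest), new PrecisionAtK());
RankEvalRequest request = new RankEvalRequest(spec, new String[] { "index" });
RankEvalResponse response = client.rankEval(request, RequestOptions.DEFAULT);
double score = response.getMetricScore(); // renamed from getEvaluationResult()
```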
@@ -41,9 +41,10 @@ import org.elasticsearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
 import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
 import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;
 import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
+import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest;
 import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest;
 import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest;
-import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest;
 import org.elasticsearch.action.admin.indices.alias.Alias;
 import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
 import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;
@@ -125,6 +126,8 @@ import org.elasticsearch.index.rankeval.RankEvalSpec;
 import org.elasticsearch.index.rankeval.RatedRequest;
 import org.elasticsearch.index.rankeval.RestRankEvalAction;
 import org.elasticsearch.protocol.xpack.XPackInfoRequest;
+import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
+import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
 import org.elasticsearch.repositories.fs.FsRepository;
 import org.elasticsearch.rest.action.search.RestSearchAction;
 import org.elasticsearch.script.ScriptType;
@@ -145,6 +148,7 @@ import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.test.RandomObjects;
 import org.hamcrest.CoreMatchers;

+import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.nio.charset.StandardCharsets;
@@ -2196,6 +2200,31 @@ public class RequestConvertersTests extends ESTestCase {
        assertThat(request.getEntity(), is(nullValue()));
    }

+    public void testRestoreSnapshot() throws IOException {
+        Map<String, String> expectedParams = new HashMap<>();
+        String repository = randomIndicesNames(1, 1)[0];
+        String snapshot = "snapshot-" + randomAlphaOfLengthBetween(2, 5).toLowerCase(Locale.ROOT);
+        String endpoint = String.format(Locale.ROOT, "/_snapshot/%s/%s/_restore", repository, snapshot);
+
+        RestoreSnapshotRequest restoreSnapshotRequest = new RestoreSnapshotRequest(repository, snapshot);
+        setRandomMasterTimeout(restoreSnapshotRequest, expectedParams);
+        if (randomBoolean()) {
+            restoreSnapshotRequest.waitForCompletion(true);
+            expectedParams.put("wait_for_completion", "true");
+        }
+        if (randomBoolean()) {
+            String timeout = randomTimeValue();
+            restoreSnapshotRequest.masterNodeTimeout(timeout);
+            expectedParams.put("master_timeout", timeout);
+        }
+
+        Request request = RequestConverters.restoreSnapshot(restoreSnapshotRequest);
+        assertThat(endpoint, equalTo(request.getEndpoint()));
+        assertThat(HttpPost.METHOD_NAME, equalTo(request.getMethod()));
+        assertThat(expectedParams, equalTo(request.getParameters()));
+        assertToXContentBody(restoreSnapshotRequest, request.getEntity());
+    }
+
    public void testDeleteSnapshot() {
        Map<String, String> expectedParams = new HashMap<>();
        String repository = randomIndicesNames(1, 1)[0];
@@ -2523,6 +2552,46 @@ public class RequestConvertersTests extends ESTestCase {
        assertEquals(expectedParams, request.getParameters());
    }

+    public void testXPackPutWatch() throws Exception {
+        PutWatchRequest putWatchRequest = new PutWatchRequest();
+        String watchId = randomAlphaOfLength(10);
+        putWatchRequest.setId(watchId);
+        String body = randomAlphaOfLength(20);
+        putWatchRequest.setSource(new BytesArray(body), XContentType.JSON);
+
+        Map<String, String> expectedParams = new HashMap<>();
+        if (randomBoolean()) {
+            putWatchRequest.setActive(false);
+            expectedParams.put("active", "false");
+        }
+
+        if (randomBoolean()) {
+            long version = randomLongBetween(10, 100);
+            putWatchRequest.setVersion(version);
+            expectedParams.put("version", String.valueOf(version));
+        }
+
+        Request request = RequestConverters.xPackWatcherPutWatch(putWatchRequest);
+        assertEquals(HttpPut.METHOD_NAME, request.getMethod());
+        assertEquals("/_xpack/watcher/watch/" + watchId, request.getEndpoint());
+        assertEquals(expectedParams, request.getParameters());
+        assertThat(request.getEntity().getContentType().getValue(), is(XContentType.JSON.mediaTypeWithoutParameters()));
+        ByteArrayOutputStream bos = new ByteArrayOutputStream();
+        request.getEntity().writeTo(bos);
+        assertThat(bos.toString("UTF-8"), is(body));
+    }
+
+    public void testXPackDeleteWatch() {
+        DeleteWatchRequest deleteWatchRequest = new DeleteWatchRequest();
+        String watchId = randomAlphaOfLength(10);
+        deleteWatchRequest.setId(watchId);
+
+        Request request = RequestConverters.xPackWatcherDeleteWatch(deleteWatchRequest);
+        assertEquals(HttpDelete.METHOD_NAME, request.getMethod());
+        assertEquals("/_xpack/watcher/watch/" + watchId, request.getEndpoint());
+        assertThat(request.getEntity(), nullValue());
+    }
+
    /**
     * Randomize the {@link FetchSourceContext} request parameters.
     */

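The converter tests above pin down the wire format: restore-snapshot becomes a `POST` to `/_snapshot/{repository}/{snapshot}/_restore`, and the watcher converters target `/_xpack/watcher/watch/{id}`. The same restore endpoint can also be exercised directly through the low-level client; a sketch assuming a `RestClient` named `lowLevelClient` and placeholder repository/snapshot names:

```java
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;

Request request = new Request("POST", "/_snapshot/my_repository/my_snapshot/_restore");
request.addParameter("wait_for_completion", "true"); // same parameter the converter emits
Response response = lowLevelClient.performRequest(request);
```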
@@ -21,7 +21,6 @@ package org.elasticsearch.client;

 import com.fasterxml.jackson.core.JsonParseException;
-
 import org.apache.http.Header;
 import org.apache.http.HttpEntity;
 import org.apache.http.HttpHost;
 import org.apache.http.HttpResponse;
@@ -53,6 +52,7 @@ import org.elasticsearch.action.search.ShardSearchFailure;
 import org.elasticsearch.cluster.ClusterName;
 import org.elasticsearch.common.CheckedFunction;
 import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.collect.Tuple;
 import org.elasticsearch.common.xcontent.NamedXContentRegistry;
 import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.XContentBuilder;
@@ -61,6 +61,7 @@ import org.elasticsearch.common.xcontent.cbor.CborXContent;
 import org.elasticsearch.common.xcontent.smile.SmileXContent;
 import org.elasticsearch.index.rankeval.DiscountedCumulativeGain;
 import org.elasticsearch.index.rankeval.EvaluationMetric;
+import org.elasticsearch.index.rankeval.ExpectedReciprocalRank;
 import org.elasticsearch.index.rankeval.MeanReciprocalRank;
 import org.elasticsearch.index.rankeval.MetricDetail;
 import org.elasticsearch.index.rankeval.PrecisionAtK;
@@ -73,20 +74,30 @@ import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder;
 import org.elasticsearch.search.suggest.Suggest;
 import org.elasticsearch.test.ESTestCase;
 import org.elasticsearch.test.InternalAggregationTestCase;
+import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestApi;
+import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestSpec;
 import org.junit.Before;

 import java.io.IOException;
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
 import java.net.SocketTimeoutException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicReference;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;

-import static org.elasticsearch.client.RestClientTestUtil.randomHeaders;
 import static org.elasticsearch.common.xcontent.XContentHelper.toXContent;
+import static org.hamcrest.CoreMatchers.endsWith;
 import static org.hamcrest.CoreMatchers.equalTo;
+import static org.hamcrest.CoreMatchers.instanceOf;
 import static org.mockito.Matchers.any;
 import static org.mockito.Mockito.mock;
@@ -137,7 +148,6 @@ public class RestHighLevelClientTests extends ESTestCase {
    }

    public void testInfo() throws IOException {
-        Header[] headers = randomHeaders(random(), "Header");
        MainResponse testInfo = new MainResponse("nodeName", Version.CURRENT, new ClusterName("clusterName"), "clusterUuid",
            Build.CURRENT);
        mockResponse(testInfo);
@@ -150,7 +160,7 @@ public class RestHighLevelClientTests extends ESTestCase {
            null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 0, 100, ShardSearchFailure.EMPTY_ARRAY,
            SearchResponse.Clusters.EMPTY);
        mockResponse(mockSearchResponse);
-        SearchResponse searchResponse = restHighLevelClient.searchScroll(
+        SearchResponse searchResponse = restHighLevelClient.scroll(
            new SearchScrollRequest(randomAlphaOfLengthBetween(5, 10)), RequestOptions.DEFAULT);
        assertEquals(mockSearchResponse.getScrollId(), searchResponse.getScrollId());
        assertEquals(0, searchResponse.getHits().totalHits);
@@ -608,7 +618,7 @@ public class RestHighLevelClientTests extends ESTestCase {

    public void testProvidedNamedXContents() {
        List<NamedXContentRegistry.Entry> namedXContents = RestHighLevelClient.getProvidedNamedXContents();
-        assertEquals(8, namedXContents.size());
+        assertEquals(10, namedXContents.size());
        Map<Class<?>, Integer> categories = new HashMap<>();
        List<String> names = new ArrayList<>();
        for (NamedXContentRegistry.Entry namedXContent : namedXContents) {
@@ -622,14 +632,165 @@ public class RestHighLevelClientTests extends ESTestCase {
        assertEquals(Integer.valueOf(2), categories.get(Aggregation.class));
        assertTrue(names.contains(ChildrenAggregationBuilder.NAME));
        assertTrue(names.contains(MatrixStatsAggregationBuilder.NAME));
-        assertEquals(Integer.valueOf(3), categories.get(EvaluationMetric.class));
+        assertEquals(Integer.valueOf(4), categories.get(EvaluationMetric.class));
        assertTrue(names.contains(PrecisionAtK.NAME));
        assertTrue(names.contains(DiscountedCumulativeGain.NAME));
        assertTrue(names.contains(MeanReciprocalRank.NAME));
-        assertEquals(Integer.valueOf(3), categories.get(MetricDetail.class));
+        assertTrue(names.contains(ExpectedReciprocalRank.NAME));
+        assertEquals(Integer.valueOf(4), categories.get(MetricDetail.class));
        assertTrue(names.contains(PrecisionAtK.NAME));
        assertTrue(names.contains(MeanReciprocalRank.NAME));
        assertTrue(names.contains(DiscountedCumulativeGain.NAME));
+        assertTrue(names.contains(ExpectedReciprocalRank.NAME));
    }

+    public void testApiNamingConventions() throws Exception {
+        //this list should be empty once the high-level client is feature complete
+        String[] notYetSupportedApi = new String[]{
+            "cluster.remote_info",
+            "count",
+            "create",
+            "delete_by_query",
+            "exists_source",
+            "get_source",
+            "indices.delete_alias",
+            "indices.delete_template",
+            "indices.exists_template",
+            "indices.exists_type",
+            "indices.get_upgrade",
+            "indices.put_alias",
+            "mtermvectors",
+            "put_script",
+            "reindex",
+            "reindex_rethrottle",
+            "render_search_template",
+            "scripts_painless_execute",
+            "tasks.get",
+            "termvectors",
+            "update_by_query"
+        };
+        //These API are not required for high-level client feature completeness
+        String[] notRequiredApi = new String[] {
+            "cluster.allocation_explain",
+            "cluster.pending_tasks",
+            "cluster.reroute",
+            "cluster.state",
+            "cluster.stats",
+            "indices.shard_stores",
+            "indices.upgrade",
+            "indices.recovery",
+            "indices.segments",
+            "indices.stats",
+            "ingest.processor_grok",
+            "nodes.info",
+            "nodes.stats",
+            "nodes.hot_threads",
+            "nodes.usage",
+            "search_shards",
+        };
+        Set<String> deprecatedMethods = new HashSet<>();
+        deprecatedMethods.add("indices.force_merge");
+        deprecatedMethods.add("multi_get");
+        deprecatedMethods.add("multi_search");
+        deprecatedMethods.add("search_scroll");
+
+        ClientYamlSuiteRestSpec restSpec = ClientYamlSuiteRestSpec.load("/rest-api-spec/api");
+        Set<String> apiSpec = restSpec.getApis().stream().map(ClientYamlSuiteRestApi::getName).collect(Collectors.toSet());
+
+        Set<String> topLevelMethodsExclusions = new HashSet<>();
+        topLevelMethodsExclusions.add("getLowLevelClient");
+        topLevelMethodsExclusions.add("close");
+
+        Map<String, Method> methods = Arrays.stream(RestHighLevelClient.class.getMethods())
+            .filter(method -> method.getDeclaringClass().equals(RestHighLevelClient.class)
+                && topLevelMethodsExclusions.contains(method.getName()) == false)
+            .map(method -> Tuple.tuple(toSnakeCase(method.getName()), method))
+            .flatMap(tuple -> tuple.v2().getReturnType().getName().endsWith("Client")
+                ? getSubClientMethods(tuple.v1(), tuple.v2().getReturnType()) : Stream.of(tuple))
+            .collect(Collectors.toMap(Tuple::v1, Tuple::v2));
+
+        Set<String> apiNotFound = new HashSet<>();
+
+        for (Map.Entry<String, Method> entry : methods.entrySet()) {
+            Method method = entry.getValue();
+            String apiName = entry.getKey();
+
+            assertTrue("method [" + apiName + "] is not final",
+                Modifier.isFinal(method.getClass().getModifiers()) || Modifier.isFinal(method.getModifiers()));
+            assertTrue(Modifier.isPublic(method.getModifiers()));
+
+            //we convert all the method names to snake case, hence we need to look for the '_async' suffix rather than 'Async'
+            if (apiName.endsWith("_async")) {
+                assertTrue("async method [" + method.getName() + "] doesn't have corresponding sync method",
+                    methods.containsKey(apiName.substring(0, apiName.length() - 6)));
+                assertThat(method.getReturnType(), equalTo(Void.TYPE));
+                assertEquals(0, method.getExceptionTypes().length);
+                assertEquals(3, method.getParameterTypes().length);
+                assertThat(method.getParameterTypes()[0].getSimpleName(), endsWith("Request"));
+                assertThat(method.getParameterTypes()[1], equalTo(RequestOptions.class));
+                assertThat(method.getParameterTypes()[2], equalTo(ActionListener.class));
+            } else {
+                //A few methods return a boolean rather than a response object
+                if (apiName.equals("ping") || apiName.contains("exist")) {
+                    assertThat(method.getReturnType().getSimpleName(), equalTo("boolean"));
+                } else {
+                    assertThat(method.getReturnType().getSimpleName(), endsWith("Response"));
+                }
+
+                assertEquals(1, method.getExceptionTypes().length);
+                //a few methods don't accept a request object as argument
+                if (apiName.equals("ping") || apiName.equals("info")) {
+                    assertEquals(1, method.getParameterTypes().length);
+                    assertThat(method.getParameterTypes()[0], equalTo(RequestOptions.class));
+                } else {
+                    assertEquals(apiName, 2, method.getParameterTypes().length);
+                    assertThat(method.getParameterTypes()[0].getSimpleName(), endsWith("Request"));
+                    assertThat(method.getParameterTypes()[1], equalTo(RequestOptions.class));
+                }
+
+                boolean remove = apiSpec.remove(apiName);
+                if (remove == false) {
+                    if (deprecatedMethods.contains(apiName)) {
+                        assertTrue("method [" + method.getName() + "], api [" + apiName + "] should be deprecated",
+                            method.isAnnotationPresent(Deprecated.class));
+                    } else {
+                        //TODO xpack api are currently ignored, we need to load xpack yaml spec too
+                        if (apiName.startsWith("xpack.") == false) {
+                            apiNotFound.add(apiName);
+                        }
+                    }
+                }
+            }
+        }
+        assertThat("Some client method doesn't match a corresponding API defined in the REST spec: " + apiNotFound,
+            apiNotFound.size(), equalTo(0));
+
+        //we decided not to support cat API in the high-level REST client, they are supposed to be used from a low-level client
+        apiSpec.removeIf(api -> api.startsWith("cat."));
+        Stream.concat(Arrays.stream(notYetSupportedApi), Arrays.stream(notRequiredApi)).forEach(
+            api -> assertTrue(api + " API is either not defined in the spec or already supported by the high-level client",
+                apiSpec.remove(api)));
+        assertThat("Some API are not supported but they should be: " + apiSpec, apiSpec.size(), equalTo(0));
+    }
+
+    private static Stream<Tuple<String, Method>> getSubClientMethods(String namespace, Class<?> clientClass) {
+        return Arrays.stream(clientClass.getMethods()).filter(method -> method.getDeclaringClass().equals(clientClass))
+            .map(method -> Tuple.tuple(namespace + "." + toSnakeCase(method.getName()), method))
+            .flatMap(tuple -> tuple.v2().getReturnType().getName().endsWith("Client")
+                ? getSubClientMethods(tuple.v1(), tuple.v2().getReturnType()) : Stream.of(tuple));
+    }
+
+    private static String toSnakeCase(String camelCase) {
+        StringBuilder snakeCaseString = new StringBuilder();
+        for (Character aChar : camelCase.toCharArray()) {
+            if (Character.isUpperCase(aChar)) {
+                snakeCaseString.append('_');
+                snakeCaseString.append(Character.toLowerCase(aChar));
+            } else {
+                snakeCaseString.append(aChar);
+            }
+        }
+        return snakeCaseString.toString();
+    }
+
    private static class TrackingActionListener implements ActionListener<Integer> {

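`testApiNamingConventions` keys every client method by the snake-case form of its Java name and reconciles that set against the REST API spec, which is why deprecated aliases such as `search_scroll` and `multi_get` are tracked explicitly. A self-contained sketch of that mapping, reusing the same algorithm as the test's `toSnakeCase` helper:

```java
public class SnakeCaseDemo {
    // Same conversion the naming-conventions test applies to client method names
    static String toSnakeCase(String camelCase) {
        StringBuilder sb = new StringBuilder();
        for (char c : camelCase.toCharArray()) {
            if (Character.isUpperCase(c)) {
                sb.append('_').append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("multiSearchTemplateAsync")); // multi_search_template_async
        System.out.println(toSnakeCase("forcemerge"));               // forcemerge
    }
}
```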
@@ -597,7 +597,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        }

        searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)),
-            highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync);
+            highLevelClient()::scroll, highLevelClient()::scrollAsync);

        assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L));
        assertThat(searchResponse.getHits().getHits().length, equalTo(35));
@@ -606,7 +606,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        }

        searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)),
-            highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync);
+            highLevelClient()::scroll, highLevelClient()::scrollAsync);

        assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L));
        assertThat(searchResponse.getHits().getHits().length, equalTo(30));
@@ -623,7 +623,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {

        SearchScrollRequest scrollRequest = new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2));
        ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> execute(scrollRequest,
-            highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync));
+            highLevelClient()::scroll, highLevelClient()::scrollAsync));
        assertEquals(RestStatus.NOT_FOUND, exception.status());
        assertThat(exception.getRootCause(), instanceOf(ElasticsearchException.class));
        ElasticsearchException rootCause = (ElasticsearchException) exception.getRootCause();
@@ -644,7 +644,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        multiSearchRequest.add(searchRequest3);

        MultiSearchResponse multiSearchResponse =
-                execute(multiSearchRequest, highLevelClient()::multiSearch, highLevelClient()::multiSearchAsync);
+                execute(multiSearchRequest, highLevelClient()::msearch, highLevelClient()::msearchAsync);
        assertThat(multiSearchResponse.getTook().millis(), Matchers.greaterThanOrEqualTo(0L));
        assertThat(multiSearchResponse.getResponses().length, Matchers.equalTo(3));

@@ -686,7 +686,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        multiSearchRequest.add(searchRequest3);

        MultiSearchResponse multiSearchResponse =
-                execute(multiSearchRequest, highLevelClient()::multiSearch, highLevelClient()::multiSearchAsync);
+                execute(multiSearchRequest, highLevelClient()::msearch, highLevelClient()::msearchAsync);
        assertThat(multiSearchResponse.getTook().millis(), Matchers.greaterThanOrEqualTo(0L));
        assertThat(multiSearchResponse.getResponses().length, Matchers.equalTo(3));

@@ -734,7 +734,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        multiSearchRequest.add(searchRequest3);

        MultiSearchResponse multiSearchResponse =
-                execute(multiSearchRequest, highLevelClient()::multiSearch, highLevelClient()::multiSearchAsync);
+                execute(multiSearchRequest, highLevelClient()::msearch, highLevelClient()::msearchAsync);
        assertThat(multiSearchResponse.getTook().millis(), Matchers.greaterThanOrEqualTo(0L));
        assertThat(multiSearchResponse.getResponses().length, Matchers.equalTo(3));

@@ -759,7 +759,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        searchRequest1.source().highlighter(new HighlightBuilder().field("field"));
        searchRequest2.source().highlighter(new HighlightBuilder().field("field"));
        searchRequest3.source().highlighter(new HighlightBuilder().field("field"));
-        multiSearchResponse = execute(multiSearchRequest, highLevelClient()::multiSearch, highLevelClient()::multiSearchAsync);
+        multiSearchResponse = execute(multiSearchRequest, highLevelClient()::msearch, highLevelClient()::msearchAsync);
        assertThat(multiSearchResponse.getTook().millis(), Matchers.greaterThanOrEqualTo(0L));
        assertThat(multiSearchResponse.getResponses().length, Matchers.equalTo(3));

@@ -797,7 +797,7 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        multiSearchRequest.add(searchRequest2);

        MultiSearchResponse multiSearchResponse =
-                execute(multiSearchRequest, highLevelClient()::multiSearch, highLevelClient()::multiSearchAsync);
+                execute(multiSearchRequest, highLevelClient()::msearch, highLevelClient()::msearchAsync);
        assertThat(multiSearchResponse.getTook().millis(), Matchers.greaterThanOrEqualTo(0L));
        assertThat(multiSearchResponse.getResponses().length, Matchers.equalTo(2));

@@ -941,8 +941,8 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
        multiSearchTemplateRequest.add(badRequest);

        MultiSearchTemplateResponse multiSearchTemplateResponse =
-            execute(multiSearchTemplateRequest, highLevelClient()::multiSearchTemplate,
-                highLevelClient()::multiSearchTemplateAsync);
+            execute(multiSearchTemplateRequest, highLevelClient()::msearchTemplate,
+                highLevelClient()::msearchTemplateAsync);

        Item[] responses = multiSearchTemplateResponse.getResponses();

@@ -999,8 +999,8 @@ public class SearchIT extends ESRestHighLevelClientTestCase {

        // The whole HTTP request should fail if no nested search requests are valid
        ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class,
-            () -> execute(multiSearchTemplateRequest, highLevelClient()::multiSearchTemplate,
-                highLevelClient()::multiSearchTemplateAsync));
+            () -> execute(multiSearchTemplateRequest, highLevelClient()::msearchTemplate,
+                highLevelClient()::msearchTemplateAsync));

        assertEquals(RestStatus.BAD_REQUEST, exception.status());
        assertThat(exception.getMessage(), containsString("no requests added"));

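`searchScroll` and `multiSearch` follow the same convention-driven renames, becoming `scroll` and `msearch`. A sketch of the usual scroll loop with the renamed method, assuming `client` and a placeholder index:

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.unit.TimeValue;

SearchRequest searchRequest = new SearchRequest("posts"); // placeholder index
searchRequest.scroll(TimeValue.timeValueMinutes(2L));
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
String scrollId = searchResponse.getScrollId();

while (searchResponse.getHits().getHits().length > 0) {
    // process searchResponse.getHits() here, then fetch the next page
    searchResponse = client.scroll(
        new SearchScrollRequest(scrollId).scroll(TimeValue.timeValueMinutes(2L)),
        RequestOptions.DEFAULT);
    scrollId = searchResponse.getScrollId();
}
```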
@@ -28,6 +28,8 @@ import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryRequest;
 import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;
 import org.elasticsearch.action.admin.cluster.repositories.verify.VerifyRepositoryRequest;
 import org.elasticsearch.action.admin.cluster.repositories.verify.VerifyRepositoryResponse;
+import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
+import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
 import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusRequest;
 import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;
 import org.elasticsearch.common.settings.Settings;
@@ -40,12 +42,15 @@ import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.repositories.fs.FsRepository;
 import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.snapshots.RestoreInfo;

 import java.io.IOException;
+import java.util.Collections;
 import java.util.stream.Collectors;

 import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.is;

 public class SnapshotIT extends ESRestHighLevelClientTestCase {
@@ -61,8 +66,8 @@ public class SnapshotIT extends ESRestHighLevelClientTestCase {
    private CreateSnapshotResponse createTestSnapshot(CreateSnapshotRequest createSnapshotRequest) throws IOException {
        // assumes the repository already exists

-        return execute(createSnapshotRequest, highLevelClient().snapshot()::createSnapshot,
-            highLevelClient().snapshot()::createSnapshotAsync);
+        return execute(createSnapshotRequest, highLevelClient().snapshot()::create,
+            highLevelClient().snapshot()::createAsync);
    }

    public void testCreateRepository() throws IOException {
@@ -77,8 +82,8 @@ public class SnapshotIT extends ESRestHighLevelClientTestCase {

        GetRepositoriesRequest request = new GetRepositoriesRequest();
        request.repositories(new String[]{testRepository});
-        GetRepositoriesResponse response = execute(request, highLevelClient().snapshot()::getRepositories,
-            highLevelClient().snapshot()::getRepositoriesAsync);
+        GetRepositoriesResponse response = execute(request, highLevelClient().snapshot()::getRepository,
+            highLevelClient().snapshot()::getRepositoryAsync);
        assertThat(1, equalTo(response.repositories().size()));
    }

@@ -86,8 +91,8 @@ public class SnapshotIT extends ESRestHighLevelClientTestCase {
        assertTrue(createTestRepository("other", FsRepository.TYPE, "{\"location\": \".\"}").isAcknowledged());
        assertTrue(createTestRepository("test", FsRepository.TYPE, "{\"location\": \".\"}").isAcknowledged());

-        GetRepositoriesResponse response = execute(new GetRepositoriesRequest(), highLevelClient().snapshot()::getRepositories,
-            highLevelClient().snapshot()::getRepositoriesAsync);
+        GetRepositoriesResponse response = execute(new GetRepositoriesRequest(), highLevelClient().snapshot()::getRepository,
+            highLevelClient().snapshot()::getRepositoryAsync);
        assertThat(2, equalTo(response.repositories().size()));
    }

@@ -95,7 +100,7 @@ public class SnapshotIT extends ESRestHighLevelClientTestCase {
        String repository = "doesnotexist";
        GetRepositoriesRequest request = new GetRepositoriesRequest(new String[]{repository});
        ElasticsearchException exception = expectThrows(ElasticsearchException.class, () -> execute(request,
-            highLevelClient().snapshot()::getRepositories, highLevelClient().snapshot()::getRepositoriesAsync));
+            highLevelClient().snapshot()::getRepository, highLevelClient().snapshot()::getRepositoryAsync));

        assertThat(exception.status(), equalTo(RestStatus.NOT_FOUND));
        assertThat(exception.getMessage(), equalTo(
@@ -107,8 +112,8 @@ public class SnapshotIT extends ESRestHighLevelClientTestCase {
        assertTrue(createTestRepository(repository, FsRepository.TYPE, "{\"location\": \".\"}").isAcknowledged());

        GetRepositoriesRequest request = new GetRepositoriesRequest();
-        GetRepositoriesResponse response = execute(request, highLevelClient().snapshot()::getRepositories,
-            highLevelClient().snapshot()::getRepositoriesAsync);
+        GetRepositoriesResponse response = execute(request, highLevelClient().snapshot()::getRepository,
+            highLevelClient().snapshot()::getRepositoryAsync);
        assertThat(1, equalTo(response.repositories().size()));

        DeleteRepositoryRequest deleteRequest = new DeleteRepositoryRequest(repository);
@@ -205,6 +210,42 @@ public class SnapshotIT extends ESRestHighLevelClientTestCase {
        assertThat(response.getSnapshots().get(0).getIndices().containsKey(testIndex), is(true));
    }

+    public void testRestoreSnapshot() throws IOException {
+        String testRepository = "test";
+        String testSnapshot = "snapshot_1";
+        String testIndex = "test_index";
+        String restoredIndex = testIndex + "_restored";
+
+        PutRepositoryResponse putRepositoryResponse = createTestRepository(testRepository, FsRepository.TYPE, "{\"location\": \".\"}");
+        assertTrue(putRepositoryResponse.isAcknowledged());
+
+        createIndex(testIndex, Settings.EMPTY);
+        assertTrue("index [" + testIndex + "] should have been created", indexExists(testIndex));
+
+        CreateSnapshotRequest createSnapshotRequest = new CreateSnapshotRequest(testRepository, testSnapshot);
+        createSnapshotRequest.indices(testIndex);
+        createSnapshotRequest.waitForCompletion(true);
+        CreateSnapshotResponse createSnapshotResponse = createTestSnapshot(createSnapshotRequest);
+        assertEquals(RestStatus.OK, createSnapshotResponse.status());
+
+        deleteIndex(testIndex);
+        assertFalse("index [" + testIndex + "] should have been deleted", indexExists(testIndex));
+
+        RestoreSnapshotRequest request = new RestoreSnapshotRequest(testRepository, testSnapshot);
+        request.waitForCompletion(true);
+        request.renamePattern(testIndex);
+        request.renameReplacement(restoredIndex);
+
+        RestoreSnapshotResponse response = execute(request, highLevelClient().snapshot()::restore,
+            highLevelClient().snapshot()::restoreAsync);
+
+        RestoreInfo restoreInfo = response.getRestoreInfo();
+        assertThat(restoreInfo.name(), equalTo(testSnapshot));
+        assertThat(restoreInfo.indices(), equalTo(Collections.singletonList(restoredIndex)));
+        assertThat(restoreInfo.successfulShards(), greaterThan(0));
+        assertThat(restoreInfo.failedShards(), equalTo(0));
+    }
+
    public void testDeleteSnapshot() throws IOException {
        String repository = "test_repository";
        String snapshot = "test_snapshot";

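`SnapshotClient` methods drop their redundant prefixes (`createSnapshot` becomes `create`, `getRepositories` becomes `getRepository`) and gain a `restore` API, exercised by the new `testRestoreSnapshot`. A condensed sketch of the snapshot-then-restore flow with the new names, assuming an existing repository called `my_repository`:

```java
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.snapshots.RestoreInfo;

CreateSnapshotRequest createRequest = new CreateSnapshotRequest("my_repository", "snapshot_1");
createRequest.indices("my-index");
createRequest.waitForCompletion(true);
CreateSnapshotResponse created = client.snapshot().create(createRequest, RequestOptions.DEFAULT);

RestoreSnapshotRequest restoreRequest = new RestoreSnapshotRequest("my_repository", "snapshot_1");
restoreRequest.waitForCompletion(true);
restoreRequest.renamePattern("my-index");               // restore under a new name so the
restoreRequest.renameReplacement("my-index_restored");  // live index does not need closing
RestoreSnapshotResponse restored = client.snapshot().restore(restoreRequest, RequestOptions.DEFAULT);
RestoreInfo info = restored.getRestoreInfo();
```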
@@ -0,0 +1,75 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.client;
+
+import org.elasticsearch.common.bytes.BytesArray;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
+import org.elasticsearch.protocol.xpack.watcher.DeleteWatchResponse;
+import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
+import org.elasticsearch.protocol.xpack.watcher.PutWatchResponse;
+
+import static org.hamcrest.Matchers.is;
+
+public class WatcherIT extends ESRestHighLevelClientTestCase {
+
+    public void testPutWatch() throws Exception {
+        String watchId = randomAlphaOfLength(10);
+        PutWatchResponse putWatchResponse = createWatch(watchId);
+        assertThat(putWatchResponse.isCreated(), is(true));
+        assertThat(putWatchResponse.getId(), is(watchId));
+        assertThat(putWatchResponse.getVersion(), is(1L));
+    }
+
+    private PutWatchResponse createWatch(String watchId) throws Exception {
+        String json = "{ \n" +
+            "  \"trigger\": { \"schedule\": { \"interval\": \"10h\" } },\n" +
+            "  \"input\": { \"none\": {} },\n" +
+            "  \"actions\": { \"logme\": { \"logging\": { \"text\": \"{{ctx.payload}}\" } } }\n" +
+            "}";
+        BytesReference bytesReference = new BytesArray(json);
+        PutWatchRequest putWatchRequest = new PutWatchRequest(watchId, bytesReference, XContentType.JSON);
+        return highLevelClient().xpack().watcher().putWatch(putWatchRequest, RequestOptions.DEFAULT);
+    }
+
+    public void testDeleteWatch() throws Exception {
+        // delete watch that exists
+        {
+            String watchId = randomAlphaOfLength(10);
+            createWatch(watchId);
+            DeleteWatchResponse deleteWatchResponse = highLevelClient().xpack().watcher().deleteWatch(new DeleteWatchRequest(watchId),
+                RequestOptions.DEFAULT);
+            assertThat(deleteWatchResponse.getId(), is(watchId));
+            assertThat(deleteWatchResponse.getVersion(), is(2L));
+            assertThat(deleteWatchResponse.isFound(), is(true));
+        }
+
+        // delete watch that does not exist
+        {
+            String watchId = randomAlphaOfLength(10);
+            DeleteWatchResponse deleteWatchResponse = highLevelClient().xpack().watcher().deleteWatch(new DeleteWatchRequest(watchId),
+                RequestOptions.DEFAULT);
+            assertThat(deleteWatchResponse.getId(), is(watchId));
+            assertThat(deleteWatchResponse.getVersion(), is(1L));
+            assertThat(deleteWatchResponse.isFound(), is(false));
+        }
+    }
+
+}

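The new `WatcherIT` covers the first watcher APIs exposed through `client.xpack().watcher()`. A trimmed usage sketch with a placeholder watch id:

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchResponse;
import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.PutWatchResponse;

String watchJson = "{"
    + "\"trigger\": { \"schedule\": { \"interval\": \"10h\" } },"
    + "\"input\": { \"none\": {} },"
    + "\"actions\": { \"logme\": { \"logging\": { \"text\": \"{{ctx.payload}}\" } } }"
    + "}";
PutWatchRequest putRequest = new PutWatchRequest("my_watch_id", new BytesArray(watchJson), XContentType.JSON);
PutWatchResponse putResponse = client.xpack().watcher().putWatch(putRequest, RequestOptions.DEFAULT);
boolean created = putResponse.isCreated();

DeleteWatchResponse deleteResponse = client.xpack().watcher()
    .deleteWatch(new DeleteWatchRequest("my_watch_id"), RequestOptions.DEFAULT);
boolean found = deleteResponse.isFound(); // false if the watch never existed
```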
@@ -1121,7 +1121,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::multi-get-request-top-level-extras

        // tag::multi-get-execute
-        MultiGetResponse response = client.multiGet(request, RequestOptions.DEFAULT);
+        MultiGetResponse response = client.mget(request, RequestOptions.DEFAULT);
        // end::multi-get-execute

        // tag::multi-get-response
@@ -1174,7 +1174,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
        listener = new LatchedActionListener<>(listener, latch);

        // tag::multi-get-execute-async
-        client.multiGetAsync(request, RequestOptions.DEFAULT, listener); // <1>
+        client.mgetAsync(request, RequestOptions.DEFAULT, listener); // <1>
        // end::multi-get-execute-async

        assertTrue(latch.await(30L, TimeUnit.SECONDS));
@@ -1185,7 +1185,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
            request.add(new MultiGetRequest.Item("index", "type", "example_id")
                .fetchSourceContext(FetchSourceContext.DO_NOT_FETCH_SOURCE)); // <1>
            // end::multi-get-request-no-source
-            MultiGetItemResponse item = unwrapAndAssertExample(client.multiGet(request, RequestOptions.DEFAULT));
+            MultiGetItemResponse item = unwrapAndAssertExample(client.mget(request, RequestOptions.DEFAULT));
            assertNull(item.getResponse().getSource());
        }
        {
@@ -1198,7 +1198,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
            request.add(new MultiGetRequest.Item("index", "type", "example_id")
                .fetchSourceContext(fetchSourceContext)); // <1>
            // end::multi-get-request-source-include
-            MultiGetItemResponse item = unwrapAndAssertExample(client.multiGet(request, RequestOptions.DEFAULT));
+            MultiGetItemResponse item = unwrapAndAssertExample(client.mget(request, RequestOptions.DEFAULT));
            assertThat(item.getResponse().getSource(), hasEntry("foo", "val1"));
            assertThat(item.getResponse().getSource(), hasEntry("bar", "val2"));
            assertThat(item.getResponse().getSource(), not(hasKey("baz")));
@@ -1213,7 +1213,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
            request.add(new MultiGetRequest.Item("index", "type", "example_id")
                .fetchSourceContext(fetchSourceContext)); // <1>
            // end::multi-get-request-source-exclude
-            MultiGetItemResponse item = unwrapAndAssertExample(client.multiGet(request, RequestOptions.DEFAULT));
+            MultiGetItemResponse item = unwrapAndAssertExample(client.mget(request, RequestOptions.DEFAULT));
            assertThat(item.getResponse().getSource(), not(hasKey("foo")));
            assertThat(item.getResponse().getSource(), not(hasKey("bar")));
            assertThat(item.getResponse().getSource(), hasEntry("baz", "val3"));
@@ -1223,7 +1223,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
            // tag::multi-get-request-stored
            request.add(new MultiGetRequest.Item("index", "type", "example_id")
                .storedFields("foo")); // <1>
-            MultiGetResponse response = client.multiGet(request, RequestOptions.DEFAULT);
+            MultiGetResponse response = client.mget(request, RequestOptions.DEFAULT);
            MultiGetItemResponse item = response.getResponses()[0];
            String value = item.getResponse().getField("foo").getValue(); // <2>
            // end::multi-get-request-stored
@@ -1235,7 +1235,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
            MultiGetRequest request = new MultiGetRequest();
            request.add(new MultiGetRequest.Item("index", "type", "example_id")
                .version(1000L));
-            MultiGetResponse response = client.multiGet(request, RequestOptions.DEFAULT);
+            MultiGetResponse response = client.mget(request, RequestOptions.DEFAULT);
            MultiGetItemResponse item = response.getResponses()[0];
            assertNull(item.getResponse()); // <1>
            Exception e = item.getFailure().getFailure(); // <2>

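The documentation tests switch the multi-get snippets from `multiGet` to the spec-aligned `mget`. A bare-bones sketch, with placeholder index/type/id coordinates:

```java
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.client.RequestOptions;

MultiGetRequest request = new MultiGetRequest();
request.add(new MultiGetRequest.Item("index", "type", "id1")); // placeholder coordinates
request.add(new MultiGetRequest.Item("index", "type", "id2"));
MultiGetResponse response = client.mget(request, RequestOptions.DEFAULT);
for (MultiGetItemResponse item : response.getResponses()) {
    if (item.getFailure() == null && item.getResponse().isExists()) {
        // item.getResponse().getSourceAsMap() holds the document source
    }
}
```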
@@ -622,7 +622,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::get-mapping-request-indicesOptions

        // tag::get-mapping-execute
-        GetMappingsResponse getMappingResponse = client.indices().getMappings(request, RequestOptions.DEFAULT);
+        GetMappingsResponse getMappingResponse = client.indices().getMapping(request, RequestOptions.DEFAULT);
        // end::get-mapping-execute

        // tag::get-mapping-response
@@ -704,7 +704,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase {
        });

        // tag::get-mapping-execute-async
-        client.indices().getMappingsAsync(request, RequestOptions.DEFAULT, listener); // <1>
+        client.indices().getMappingAsync(request, RequestOptions.DEFAULT, listener); // <1>
        // end::get-mapping-execute-async

        assertTrue(latch.await(30L, TimeUnit.SECONDS));
@@ -1344,7 +1344,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::force-merge-request-flush

        // tag::force-merge-execute
-        ForceMergeResponse forceMergeResponse = client.indices().forceMerge(request, RequestOptions.DEFAULT);
+        ForceMergeResponse forceMergeResponse = client.indices().forcemerge(request, RequestOptions.DEFAULT);
        // end::force-merge-execute

        // tag::force-merge-response
@@ -1369,14 +1369,14 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::force-merge-execute-listener

        // tag::force-merge-execute-async
-        client.indices().forceMergeAsync(request, RequestOptions.DEFAULT, listener); // <1>
+        client.indices().forcemergeAsync(request, RequestOptions.DEFAULT, listener); // <1>
        // end::force-merge-execute-async
    }
    {
        // tag::force-merge-notfound
        try {
            ForceMergeRequest request = new ForceMergeRequest("does_not_exist");
-            client.indices().forceMerge(request, RequestOptions.DEFAULT);
+            client.indices().forcemerge(request, RequestOptions.DEFAULT);
        } catch (ElasticsearchException exception) {
            if (exception.status() == RestStatus.NOT_FOUND) {
                // <1>

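`getMappings` is renamed `getMapping` (and `forceMerge` to `forcemerge`) in the documentation snippets as well. A minimal sketch of the renamed mapping lookup:

```java
import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsRequest;
import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;
import org.elasticsearch.client.RequestOptions;

GetMappingsRequest request = new GetMappingsRequest();
request.indices("my-index"); // placeholder index name
GetMappingsResponse response = client.indices().getMapping(request, RequestOptions.DEFAULT);
// response.getMappings() is keyed by index name, then by mapping type
```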
@@ -317,7 +317,7 @@ public class IngestClientDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::simulate-pipeline-request-verbose

        // tag::simulate-pipeline-execute
-        SimulatePipelineResponse response = client.ingest().simulatePipeline(request, RequestOptions.DEFAULT); // <1>
+        SimulatePipelineResponse response = client.ingest().simulate(request, RequestOptions.DEFAULT); // <1>
        // end::simulate-pipeline-execute

        // tag::simulate-pipeline-response
@@ -381,7 +381,7 @@ public class IngestClientDocumentationIT extends ESRestHighLevelClientTestCase {
        listener = new LatchedActionListener<>(listener, latch);

        // tag::simulate-pipeline-execute-async
-        client.ingest().simulatePipelineAsync(request, RequestOptions.DEFAULT, listener); // <1>
+        client.ingest().simulateAsync(request, RequestOptions.DEFAULT, listener); // <1>
        // end::simulate-pipeline-execute-async

        assertTrue(latch.await(30L, TimeUnit.SECONDS));

@@ -0,0 +1,106 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client.documentation;
+
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.LatchedActionListener;
+import org.elasticsearch.client.ESRestHighLevelClientTestCase;
+import org.elasticsearch.client.RequestOptions;
+import org.elasticsearch.client.RestHighLevelClient;
+import org.elasticsearch.protocol.xpack.license.LicensesStatus;
+import org.elasticsearch.protocol.xpack.license.PutLicenseRequest;
+import org.elasticsearch.protocol.xpack.license.PutLicenseResponse;
+
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+
+import static org.hamcrest.Matchers.hasSize;
+import static org.hamcrest.Matchers.not;
+import static org.hamcrest.Matchers.startsWith;
+
+/**
+ * Documentation for Licensing APIs in the high level java client.
+ * Code wrapped in {@code tag} and {@code end} tags is included in the docs.
+ */
+public class LicensingDocumentationIT extends ESRestHighLevelClientTestCase {
+
+    public void testPutLicense() throws Exception {
+        RestHighLevelClient client = highLevelClient();
+        String license = "{\"license\": {\"uid\":\"893361dc-9749-4997-93cb-802e3d7fa4a8\",\"type\":\"gold\"," +
+            "\"issue_date_in_millis\":1411948800000,\"expiry_date_in_millis\":1914278399999,\"max_nodes\":1,\"issued_to\":\"issued_to\"," +
+            "\"issuer\":\"issuer\",\"signature\":\"AAAAAgAAAA3U8+YmnvwC+CWsV/mRAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSm" +
+            "kxakxZdW5IMlhlTHNoN1N2MXMvRFk4d3JTZEx3R3RRZ0pzU3lobWJKZnQvSEFva0ppTHBkWkprZWZSQi9iNmRQNkw1SlpLN0lDalZCS095MXRGN1lIZlpYcVVTTn" +
+            "FrcTE2dzhJZmZrdFQrN3JQeGwxb0U0MXZ0dDJHSERiZTVLOHNzSDByWnpoZEphZHBEZjUrTVBxRENNSXNsWWJjZllaODdzVmEzUjNiWktNWGM5TUhQV2plaUo4Q1" +
+            "JOUml4MXNuL0pSOEhQaVB2azhmUk9QVzhFeTFoM1Q0RnJXSG53MWk2K055c28zSmRnVkF1b2JSQkFLV2VXUmVHNDZ2R3o2VE1qbVNQS2lxOHN5bUErZlNIWkZSVm" +
+            "ZIWEtaSU9wTTJENDVvT1NCYklacUYyK2FwRW9xa0t6dldMbmMzSGtQc3FWOTgzZ3ZUcXMvQkt2RUZwMFJnZzlvL2d2bDRWUzh6UG5pdENGWFRreXNKNkE9PQAAAQ" +
+            "Be8GfzDm6T537Iuuvjetb3xK5dvg0K5NQapv+rczWcQFxgCuzbF8plkgetP1aAGZP4uRESDQPMlOCsx4d0UqqAm9f7GbBQ3l93P+PogInPFeEH9NvOmaAQovmxVM" +
+            "9SE6DsDqlX4cXSO+bgWpXPTd2LmpoQc1fXd6BZ8GeuyYpVHVKp9hVU0tAYjw6HzYOE7+zuO1oJYOxElqy66AnIfkvHrvni+flym3tE7tDTgsDRaz7W3iBhaqiSnt" +
+            "EqabEkvHdPHQdSR99XGaEvnHO1paK01/35iZF6OXHsF7CCj+558GRXiVxzueOe7TsGSSt8g7YjZwV9bRCyU7oB4B/nidgI\"}}";
+        {
+            //tag::put-license-execute
+            PutLicenseRequest request = new PutLicenseRequest();
+            request.setLicenseDefinition(license); // <1>
+            request.setAcknowledge(false); // <2>
+
+            PutLicenseResponse response = client.xpack().license().putLicense(request, RequestOptions.DEFAULT);
+            //end::put-license-execute
+
+            //tag::put-license-response
+            LicensesStatus status = response.status(); // <1>
+            assertEquals(status, LicensesStatus.VALID); // <2>
+            boolean acknowledged = response.isAcknowledged(); // <3>
+            String acknowledgeHeader = response.acknowledgeHeader(); // <4>
+            Map<String, String[]> acknowledgeMessages = response.acknowledgeMessages(); // <5>
+            //end::put-license-response
+
+            assertFalse(acknowledged); // Should fail because we are trying to downgrade from platinum trial to gold
+            assertThat(acknowledgeHeader, startsWith("This license update requires acknowledgement."));
+            assertThat(acknowledgeMessages.keySet(), not(hasSize(0)));
+        }
+        {
+            PutLicenseRequest request = new PutLicenseRequest();
+            // tag::put-license-execute-listener
+            ActionListener<PutLicenseResponse> listener = new ActionListener<PutLicenseResponse>() {
+                @Override
+                public void onResponse(PutLicenseResponse indexResponse) {
+                    // <1>
+                }
+
+                @Override
+                public void onFailure(Exception e) {
+                    // <2>
+                }
+            };
+            // end::put-license-execute-listener
+
+            // Replace the empty listener by a blocking listener in test
+            final CountDownLatch latch = new CountDownLatch(1);
+            listener = new LatchedActionListener<>(listener, latch);
+
+            // tag::put-license-execute-async
+            client.xpack().license().putLicenseAsync(
+                request, RequestOptions.DEFAULT, listener); // <1>
+            // end::put-license-execute-async
+
+            assertTrue(latch.await(30L, TimeUnit.SECONDS));
+        }
+    }
+}

@ -35,12 +35,19 @@ import org.elasticsearch.protocol.xpack.XPackInfoResponse;
|
|||
import org.elasticsearch.protocol.xpack.XPackInfoResponse.BuildInfo;
|
||||
import org.elasticsearch.protocol.xpack.XPackInfoResponse.FeatureSetsInfo;
|
||||
import org.elasticsearch.protocol.xpack.XPackInfoResponse.LicenseInfo;
|
||||
import org.elasticsearch.protocol.xpack.XPackUsageRequest;
|
||||
import org.elasticsearch.protocol.xpack.XPackUsageResponse;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.time.Instant;
|
||||
import java.util.EnumSet;
|
||||
import java.util.Map;
|
||||
import java.util.concurrent.CountDownLatch;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import static org.hamcrest.Matchers.greaterThan;
|
||||
import static org.hamcrest.Matchers.is;
|
||||
|
||||
/**
|
||||
* Documentation for miscellaneous APIs in the high level java client.
|
||||
* Code wrapped in {@code tag} and {@code end} tags is included in the docs.
|
||||
|
@ -92,8 +99,7 @@ public class MiscellaneousDocumentationIT extends ESRestHighLevelClientTestCase
|
|||
//tag::x-pack-info-response
|
||||
BuildInfo build = response.getBuildInfo(); // <1>
|
||||
LicenseInfo license = response.getLicenseInfo(); // <2>
|
||||
assertEquals(XPackInfoResponse.BASIC_SELF_GENERATED_LICENSE_EXPIRATION_MILLIS,
|
||||
license.getExpiryDate()); // <3>
|
||||
assertThat(license.getExpiryDate(), is(greaterThan(Instant.now().toEpochMilli()))); // <3>
|
||||
FeatureSetsInfo features = response.getFeatureSetsInfo(); // <4>
|
||||
//end::x-pack-info-response
|
||||
|
||||
|
@ -129,6 +135,50 @@ public class MiscellaneousDocumentationIT extends ESRestHighLevelClientTestCase
        }
    }

    public void testXPackUsage() throws Exception {
        RestHighLevelClient client = highLevelClient();
        {
            //tag::x-pack-usage-execute
            XPackUsageRequest request = new XPackUsageRequest();
            XPackUsageResponse response = client.xpack().usage(request, RequestOptions.DEFAULT);
            //end::x-pack-usage-execute

            //tag::x-pack-usage-response
            Map<String, Map<String, Object>> usages = response.getUsages();
            Map<String, Object> monitoringUsage = usages.get("monitoring");
            assertThat(monitoringUsage.get("available"), is(true));
            assertThat(monitoringUsage.get("enabled"), is(true));
            assertThat(monitoringUsage.get("collection_enabled"), is(false));
            //end::x-pack-usage-response
        }
        {
            XPackUsageRequest request = new XPackUsageRequest();
            // tag::x-pack-usage-execute-listener
            ActionListener<XPackUsageResponse> listener = new ActionListener<XPackUsageResponse>() {
                @Override
                public void onResponse(XPackUsageResponse response) {
                    // <1>
                }

                @Override
                public void onFailure(Exception e) {
                    // <2>
                }
            };
            // end::x-pack-usage-execute-listener

            // Replace the empty listener by a blocking listener in test
            final CountDownLatch latch = new CountDownLatch(1);
            listener = new LatchedActionListener<>(listener, latch);

            // tag::x-pack-usage-execute-async
            client.xpack().usageAsync(request, RequestOptions.DEFAULT, listener); // <1>
            // end::x-pack-usage-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));
        }
    }

    public void testInitializationFromClientBuilder() throws IOException {
        //tag::rest-high-level-client-init
        RestHighLevelClient client = new RestHighLevelClient(
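One note for readers of the new `testXPackUsage` above: the usage API deliberately returns nested maps rather than typed objects. Below is a minimal sketch — an editor's illustration, not part of the commit, reusing the `client` and imports already present in this test class — of walking the whole response instead of a single feature set:

```java
XPackUsageRequest request = new XPackUsageRequest();
XPackUsageResponse response = client.xpack().usage(request, RequestOptions.DEFAULT);
for (Map.Entry<String, Map<String, Object>> feature : response.getUsages().entrySet()) {
    // each entry is a feature set such as "monitoring", "security", or "watcher"
    System.out.println(feature.getKey()
        + " available=" + feature.getValue().get("available")
        + " enabled=" + feature.getValue().get("enabled"));
}
```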
@ -295,7 +295,6 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
    }

    @SuppressWarnings({ "unused" })
    @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/32029")
    public void testSearchRequestAggregations() throws IOException {
        RestHighLevelClient client = highLevelClient();
        {

@ -338,8 +337,9 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
                Range range = aggregations.get("by_company"); // <1>
                // end::search-request-aggregations-get-wrongCast
            } catch (ClassCastException ex) {
                assertEquals("org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms"
                    + " cannot be cast to org.elasticsearch.search.aggregations.bucket.range.Range", ex.getMessage());
                String message = ex.getMessage();
                assertThat(message, containsString("org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms"));
                assertThat(message, containsString("org.elasticsearch.search.aggregations.bucket.range.Range"));
            }
            assertEquals(3, elasticBucket.getDocCount());
            assertEquals(30, avg, 0.0);

@ -583,7 +583,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        // tag::search-scroll2
        SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId); // <1>
        scrollRequest.scroll(TimeValue.timeValueSeconds(30));
        SearchResponse searchScrollResponse = client.searchScroll(scrollRequest, RequestOptions.DEFAULT);
        SearchResponse searchScrollResponse = client.scroll(scrollRequest, RequestOptions.DEFAULT);
        scrollId = searchScrollResponse.getScrollId(); // <2>
        hits = searchScrollResponse.getHits(); // <3>
        assertEquals(3, hits.getTotalHits());

@ -612,7 +612,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
            // end::scroll-request-arguments

            // tag::search-scroll-execute-sync
            SearchResponse searchResponse = client.searchScroll(scrollRequest, RequestOptions.DEFAULT);
            SearchResponse searchResponse = client.scroll(scrollRequest, RequestOptions.DEFAULT);
            // end::search-scroll-execute-sync

            assertEquals(0, searchResponse.getFailedShards());

@ -638,7 +638,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
            scrollListener = new LatchedActionListener<>(scrollListener, latch);

            // tag::search-scroll-execute-async
            client.searchScrollAsync(scrollRequest, RequestOptions.DEFAULT, scrollListener); // <1>
            client.scrollAsync(scrollRequest, RequestOptions.DEFAULT, scrollListener); // <1>
            // end::search-scroll-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));

@ -710,7 +710,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        while (searchHits != null && searchHits.length > 0) { // <2>
            SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId); // <3>
            scrollRequest.scroll(scroll);
            searchResponse = client.searchScroll(scrollRequest, RequestOptions.DEFAULT);
            searchResponse = client.scroll(scrollRequest, RequestOptions.DEFAULT);
            scrollId = searchResponse.getScrollId();
            searchHits = searchResponse.getHits().getHits();
            // <4>

@ -861,7 +861,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::multi-search-template-request-inline

        // tag::multi-search-template-request-sync
        MultiSearchTemplateResponse multiResponse = client.multiSearchTemplate(multiRequest, RequestOptions.DEFAULT);
        MultiSearchTemplateResponse multiResponse = client.msearchTemplate(multiRequest, RequestOptions.DEFAULT);
        // end::multi-search-template-request-sync

        // tag::multi-search-template-response

@ -916,7 +916,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {

        // tag::multi-search-template-execute
        MultiSearchTemplateResponse multiResponse = client.multiSearchTemplate(multiRequest, RequestOptions.DEFAULT);
        MultiSearchTemplateResponse multiResponse = client.msearchTemplate(multiRequest, RequestOptions.DEFAULT);
        // end::multi-search-template-execute

        assertNotNull(multiResponse);

@ -944,7 +944,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        listener = new LatchedActionListener<>(listener, latch);

        // tag::multi-search-template-execute-async
        client.multiSearchTemplateAsync(multiRequest, RequestOptions.DEFAULT, listener);
        client.msearchTemplateAsync(multiRequest, RequestOptions.DEFAULT, listener);
        // end::multi-search-template-execute-async

        assertTrue(latch.await(30L, TimeUnit.SECONDS));

@ -1136,14 +1136,14 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        // end::rank-eval-execute

        // tag::rank-eval-response
        double evaluationResult = response.getEvaluationResult(); // <1>
        double evaluationResult = response.getMetricScore(); // <1>
        assertEquals(1.0 / 3.0, evaluationResult, 0.0);
        Map<String, EvalQueryQuality> partialResults =
            response.getPartialResults();
        EvalQueryQuality evalQuality =
            partialResults.get("kimchy_query"); // <2>
        assertEquals("kimchy_query", evalQuality.getId());
        double qualityLevel = evalQuality.getQualityLevel(); // <3>
        double qualityLevel = evalQuality.metricScore(); // <3>
        assertEquals(1.0 / 3.0, qualityLevel, 0.0);
        List<RatedSearchHit> hitsAndRatings = evalQuality.getHitsAndRatings();
        RatedSearchHit ratedSearchHit = hitsAndRatings.get(2);

@ -1201,7 +1201,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        request.add(secondSearchRequest);
        // end::multi-search-request-basic
        // tag::multi-search-execute
        MultiSearchResponse response = client.multiSearch(request, RequestOptions.DEFAULT);
        MultiSearchResponse response = client.msearch(request, RequestOptions.DEFAULT);
        // end::multi-search-execute
        // tag::multi-search-response
        MultiSearchResponse.Item firstResponse = response.getResponses()[0]; // <1>

@ -1233,7 +1233,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        listener = new LatchedActionListener<>(listener, latch);

        // tag::multi-search-execute-async
        client.multiSearchAsync(request, RequestOptions.DEFAULT, listener); // <1>
        client.msearchAsync(request, RequestOptions.DEFAULT, listener); // <1>
        // end::multi-search-execute-async

        assertTrue(latch.await(30L, TimeUnit.SECONDS));

@ -1244,7 +1244,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
        request.add(new SearchRequest("posts") // <1>
            .types("doc")); // <2>
        // end::multi-search-request-index
        MultiSearchResponse response = client.multiSearch(request, RequestOptions.DEFAULT);
        MultiSearchResponse response = client.msearch(request, RequestOptions.DEFAULT);
        MultiSearchResponse.Item firstResponse = response.getResponses()[0];
        assertNull(firstResponse.getFailure());
        SearchResponse searchResponse = firstResponse.getResponse();
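All of the scroll hunks above apply the same rename: `searchScroll`/`searchScrollAsync` become `scroll`/`scrollAsync`. As a reading aid, here is an end-to-end sketch of the renamed calls used together — it assumes the usual high-level-client imports, a `client` from `highLevelClient()`, and an index named `posts`:

```java
SearchRequest searchRequest = new SearchRequest("posts");
searchRequest.scroll(TimeValue.timeValueMinutes(1L));           // open a scroll context
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
String scrollId = searchResponse.getScrollId();
SearchHit[] searchHits = searchResponse.getHits().getHits();
while (searchHits != null && searchHits.length > 0) {
    SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
    searchResponse = client.scroll(scrollRequest, RequestOptions.DEFAULT); // renamed from searchScroll
    scrollId = searchResponse.getScrollId();
    searchHits = searchResponse.getHits().getHits();
}
ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); // release the scroll context
clearScrollRequest.addScrollId(scrollId);
client.clearScroll(clearScrollRequest, RequestOptions.DEFAULT);
```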
@ -33,6 +33,8 @@ import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotReq
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotRequest;

@ -53,12 +55,15 @@ import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.repositories.fs.FsRepository;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.snapshots.RestoreInfo;
import org.elasticsearch.snapshots.SnapshotId;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotShardFailure;
import org.elasticsearch.snapshots.SnapshotState;

import java.io.IOException;
import java.util.Collections;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;

@ -221,7 +226,7 @@ public class SnapshotClientDocumentationIT extends ESRestHighLevelClientTestCase
        // end::get-repository-request-masterTimeout

        // tag::get-repository-execute
        GetRepositoriesResponse response = client.snapshot().getRepositories(request, RequestOptions.DEFAULT);
        GetRepositoriesResponse response = client.snapshot().getRepository(request, RequestOptions.DEFAULT);
        // end::get-repository-execute

        // tag::get-repository-response

@ -256,13 +261,114 @@ public class SnapshotClientDocumentationIT extends ESRestHighLevelClientTestCase
            listener = new LatchedActionListener<>(listener, latch);

            // tag::get-repository-execute-async
            client.snapshot().getRepositoriesAsync(request, RequestOptions.DEFAULT, listener); // <1>
            client.snapshot().getRepositoryAsync(request, RequestOptions.DEFAULT, listener); // <1>
            // end::get-repository-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));
        }
    }

    public void testRestoreSnapshot() throws IOException {
        RestHighLevelClient client = highLevelClient();

        createTestRepositories();
        createTestIndex();
        createTestSnapshots();

        // tag::restore-snapshot-request
        RestoreSnapshotRequest request = new RestoreSnapshotRequest(repositoryName, snapshotName);
        // end::restore-snapshot-request
        // we need to restore as a different index name

        // tag::restore-snapshot-request-masterTimeout
        request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1>
        request.masterNodeTimeout("1m"); // <2>
        // end::restore-snapshot-request-masterTimeout

        // tag::restore-snapshot-request-waitForCompletion
        request.waitForCompletion(true); // <1>
        // end::restore-snapshot-request-waitForCompletion

        // tag::restore-snapshot-request-partial
        request.partial(false); // <1>
        // end::restore-snapshot-request-partial

        // tag::restore-snapshot-request-include-global-state
        request.includeGlobalState(false); // <1>
        // end::restore-snapshot-request-include-global-state

        // tag::restore-snapshot-request-include-aliases
        request.includeAliases(false); // <1>
        // end::restore-snapshot-request-include-aliases

        // tag::restore-snapshot-request-indices
        request.indices("test_index");
        // end::restore-snapshot-request-indices

        String restoredIndexName = "restored_index";
        // tag::restore-snapshot-request-rename
        request.renamePattern("test_(.+)"); // <1>
        request.renameReplacement("restored_$1"); // <2>
        // end::restore-snapshot-request-rename

        // tag::restore-snapshot-request-index-settings
        request.indexSettings( // <1>
            Settings.builder()
                .put("index.number_of_replicas", 0)
                .build());

        request.ignoreIndexSettings("index.refresh_interval", "index.search.idle.after"); // <2>
        request.indicesOptions(new IndicesOptions( // <3>
            EnumSet.of(IndicesOptions.Option.IGNORE_UNAVAILABLE),
            EnumSet.of(IndicesOptions.WildcardStates.OPEN)));
        // end::restore-snapshot-request-index-settings

        // tag::restore-snapshot-execute
        RestoreSnapshotResponse response = client.snapshot().restore(request, RequestOptions.DEFAULT);
        // end::restore-snapshot-execute

        // tag::restore-snapshot-response
        RestoreInfo restoreInfo = response.getRestoreInfo();
        List<String> indices = restoreInfo.indices(); // <1>
        // end::restore-snapshot-response
        assertEquals(Collections.singletonList(restoredIndexName), indices);
        assertEquals(0, restoreInfo.failedShards());
        assertTrue(restoreInfo.successfulShards() > 0);
    }

    public void testRestoreSnapshotAsync() throws InterruptedException {
        RestHighLevelClient client = highLevelClient();
        {
            RestoreSnapshotRequest request = new RestoreSnapshotRequest();

            // tag::restore-snapshot-execute-listener
            ActionListener<RestoreSnapshotResponse> listener =
                new ActionListener<RestoreSnapshotResponse>() {
                    @Override
                    public void onResponse(RestoreSnapshotResponse restoreSnapshotResponse) {
                        // <1>
                    }

                    @Override
                    public void onFailure(Exception e) {
                        // <2>
                    }
                };
            // end::restore-snapshot-execute-listener

            // Replace the empty listener by a blocking listener in test
            final CountDownLatch latch = new CountDownLatch(1);
            listener = new LatchedActionListener<>(listener, latch);

            // tag::restore-snapshot-execute-async
            client.snapshot().restoreAsync(request, RequestOptions.DEFAULT, listener); // <1>
            // end::restore-snapshot-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));
        }
    }

    public void testSnapshotDeleteRepository() throws IOException {
        RestHighLevelClient client = highLevelClient();

@ -425,7 +531,7 @@ public class SnapshotClientDocumentationIT extends ESRestHighLevelClientTestCase
        // end::create-snapshot-request-waitForCompletion

        // tag::create-snapshot-execute
        CreateSnapshotResponse response = client.snapshot().createSnapshot(request, RequestOptions.DEFAULT);
        CreateSnapshotResponse response = client.snapshot().create(request, RequestOptions.DEFAULT);
        // end::create-snapshot-execute

        // tag::create-snapshot-response

@ -433,6 +539,12 @@ public class SnapshotClientDocumentationIT extends ESRestHighLevelClientTestCase
        // end::create-snapshot-response

        assertEquals(RestStatus.OK, status);

        // tag::create-snapshot-response-snapshot-info
        SnapshotInfo snapshotInfo = response.getSnapshotInfo(); // <1>
        // end::create-snapshot-response-snapshot-info

        assertNotNull(snapshotInfo);
    }

    public void testSnapshotCreateAsync() throws InterruptedException {

@ -460,7 +572,7 @@ public class SnapshotClientDocumentationIT extends ESRestHighLevelClientTestCase
            listener = new LatchedActionListener<>(listener, latch);

            // tag::create-snapshot-execute-async
            client.snapshot().createSnapshotAsync(request, RequestOptions.DEFAULT, listener); // <1>
            client.snapshot().createAsync(request, RequestOptions.DEFAULT, listener); // <1>
            // end::create-snapshot-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));
@ -0,0 +1,135 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.client.documentation;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.LatchedActionListener;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.DeleteWatchResponse;
import org.elasticsearch.protocol.xpack.watcher.PutWatchRequest;
import org.elasticsearch.protocol.xpack.watcher.PutWatchResponse;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WatcherDocumentationIT extends ESRestHighLevelClientTestCase {

    public void testWatcher() throws Exception {
        RestHighLevelClient client = highLevelClient();

        {
            //tag::x-pack-put-watch-execute
            // you can also use the WatchSourceBuilder from org.elasticsearch.plugin:x-pack-core to create a watch programmatically
            BytesReference watch = new BytesArray("{ \n" +
                " \"trigger\": { \"schedule\": { \"interval\": \"10h\" } },\n" +
                " \"input\": { \"simple\": { \"foo\" : \"bar\" } },\n" +
                " \"actions\": { \"logme\": { \"logging\": { \"text\": \"{{ctx.payload}}\" } } }\n" +
                "}");
            PutWatchRequest request = new PutWatchRequest("my_watch_id", watch, XContentType.JSON);
            request.setActive(false); // <1>
            PutWatchResponse response = client.xpack().watcher().putWatch(request, RequestOptions.DEFAULT);
            //end::x-pack-put-watch-execute

            //tag::x-pack-put-watch-response
            String watchId = response.getId(); // <1>
            boolean isCreated = response.isCreated(); // <2>
            long version = response.getVersion(); // <3>
            //end::x-pack-put-watch-response
        }

        {
            BytesReference watch = new BytesArray("{ \n" +
                " \"trigger\": { \"schedule\": { \"interval\": \"10h\" } },\n" +
                " \"input\": { \"simple\": { \"foo\" : \"bar\" } },\n" +
                " \"actions\": { \"logme\": { \"logging\": { \"text\": \"{{ctx.payload}}\" } } }\n" +
                "}");
            PutWatchRequest request = new PutWatchRequest("my_other_watch_id", watch, XContentType.JSON);
            // tag::x-pack-put-watch-execute-listener
            ActionListener<PutWatchResponse> listener = new ActionListener<PutWatchResponse>() {
                @Override
                public void onResponse(PutWatchResponse response) {
                    // <1>
                }

                @Override
                public void onFailure(Exception e) {
                    // <2>
                }
            };
            // end::x-pack-put-watch-execute-listener

            // Replace the empty listener by a blocking listener in test
            final CountDownLatch latch = new CountDownLatch(1);
            listener = new LatchedActionListener<>(listener, latch);

            // tag::x-pack-put-watch-execute-async
            client.xpack().watcher().putWatchAsync(request, RequestOptions.DEFAULT, listener); // <1>
            // end::x-pack-put-watch-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));
        }

        {
            //tag::x-pack-delete-watch-execute
            DeleteWatchRequest request = new DeleteWatchRequest("my_watch_id");
            DeleteWatchResponse response = client.xpack().watcher().deleteWatch(request, RequestOptions.DEFAULT);
            //end::x-pack-delete-watch-execute

            //tag::x-pack-delete-watch-response
            String watchId = response.getId(); // <1>
            boolean found = response.isFound(); // <2>
            long version = response.getVersion(); // <3>
            //end::x-pack-delete-watch-response
        }

        {
            DeleteWatchRequest request = new DeleteWatchRequest("my_other_watch_id");
            // tag::x-pack-delete-watch-execute-listener
            ActionListener<DeleteWatchResponse> listener = new ActionListener<DeleteWatchResponse>() {
                @Override
                public void onResponse(DeleteWatchResponse response) {
                    // <1>
                }

                @Override
                public void onFailure(Exception e) {
                    // <2>
                }
            };
            // end::x-pack-delete-watch-execute-listener

            // Replace the empty listener by a blocking listener in test
            final CountDownLatch latch = new CountDownLatch(1);
            listener = new LatchedActionListener<>(listener, latch);

            // tag::x-pack-delete-watch-execute-async
            client.xpack().watcher().deleteWatchAsync(request, RequestOptions.DEFAULT, listener); // <1>
            // end::x-pack-delete-watch-execute-async

            assertTrue(latch.await(30L, TimeUnit.SECONDS));
        }
    }
}
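The comment at the top of the new `WatcherDocumentationIT` points out that watches can also be built programmatically. Here is a sketch of the same watch body assembled with `XContentBuilder` rather than a hand-written string — an editor's illustration; it assumes `XContentFactory`/`XContentBuilder` from `org.elasticsearch.common.xcontent` and the static `BytesReference.bytes` helper:

```java
// Build { "trigger": ..., "input": ..., "actions": ... } without string escaping.
XContentBuilder builder = XContentFactory.jsonBuilder();
builder.startObject();
builder.startObject("trigger").startObject("schedule")
    .field("interval", "10h").endObject().endObject();
builder.startObject("input").startObject("simple")
    .field("foo", "bar").endObject().endObject();
builder.startObject("actions").startObject("logme").startObject("logging")
    .field("text", "{{ctx.payload}}").endObject().endObject().endObject();
builder.endObject();
BytesReference watch = BytesReference.bytes(builder);
PutWatchRequest request = new PutWatchRequest("my_watch_id", watch, XContentType.JSON);
```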
@ -59,6 +59,10 @@ forbiddenApisMain {
                    PrecommitTasks.getResource('/forbidden/http-signatures.txt')]
}

forbiddenPatterns {
  exclude '**/*.der'
}

forbiddenApisTest {
  //we are using jdk-internal instead of jdk-non-portable to allow for com.sun.net.httpserver.* usage
  bundledSignatures -= 'jdk-non-portable'
@ -30,14 +30,21 @@ import org.junit.BeforeClass;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLHandshakeException;
import javax.net.ssl.TrustManagerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.spec.PKCS8EncodedKeySpec;

import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.instanceOf;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.fail;

@ -72,9 +79,6 @@ public class RestClientBuilderIntegTests extends RestClientTestCase {
    }

    public void testBuilderUsesDefaultSSLContext() throws Exception {
        assumeFalse("Due to bug inside jdk, this test can't momentarily run with java 11. " +
            "See: https://github.com/elastic/elasticsearch/issues/31940",
            System.getProperty("java.version").contains("11"));
        final SSLContext defaultSSLContext = SSLContext.getDefault();
        try {
            try (RestClient client = buildRestClient()) {

@ -82,7 +86,7 @@ public class RestClientBuilderIntegTests extends RestClientTestCase {
                client.performRequest(new Request("GET", "/"));
                fail("connection should have been rejected due to SSL handshake");
            } catch (Exception e) {
                assertThat(e.getMessage(), containsString("General SSLEngine problem"));
                assertThat(e, instanceOf(SSLHandshakeException.class));
            }
        }

@ -103,12 +107,20 @@ public class RestClientBuilderIntegTests extends RestClientTestCase {

    private static SSLContext getSslContext() throws Exception {
        SSLContext sslContext = SSLContext.getInstance("TLS");
        try (InputStream in = RestClientBuilderIntegTests.class.getResourceAsStream("/testks.jks")) {
            KeyStore keyStore = KeyStore.getInstance("JKS");
            keyStore.load(in, "password".toCharArray());
            KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
        try (InputStream certFile = RestClientBuilderIntegTests.class.getResourceAsStream("/test.crt")) {
            // Build a keystore of default type programmatically since we can't use JKS keystores to
            // init a KeyManagerFactory in FIPS 140 JVMs.
            KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
            keyStore.load(null, "password".toCharArray());
            CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
            PKCS8EncodedKeySpec privateKeySpec = new PKCS8EncodedKeySpec(Files.readAllBytes(Paths.get(RestClientBuilderIntegTests.class
                .getResource("/test.der").toURI())));
            KeyFactory keyFactory = KeyFactory.getInstance("RSA");
            keyStore.setKeyEntry("mykey", keyFactory.generatePrivate(privateKeySpec), "password".toCharArray(),
                new Certificate[]{certFactory.generateCertificate(certFile)});
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, "password".toCharArray());
            TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509");
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(keyStore);
            sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        }
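For context on how a custom `SSLContext` such as the one returned by `getSslContext()` is consumed, this is the usual wiring through the low-level client builder — an editor's sketch, not part of the commit; `HttpHost` comes from `org.apache.http` and `HttpAsyncClientBuilder` from `org.apache.http.impl.nio.client`, and the anonymous class keeps the snippet Java 7 compatible, matching this module's target:

```java
final SSLContext sslContext = getSslContext();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "https"))
    .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
            // hand the programmatically built context to the underlying HTTP client
            return httpClientBuilder.setSSLContext(sslContext);
        }
    })
    .build();
```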
@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIEATCCAumgAwIBAgIEObhDZDANBgkqhkiG9w0BAQsFADBnMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCQ0ExFjAUBgNVBAcTDU1vdW50YWluIFZpZXcxEDAOBgNVBAoT
B2VsYXN0aWMxDTALBgNVBAsTBHRlc3QxEjAQBgNVBAMTCXRlc3Qgbm9kZTAeFw0x
NzA3MTcxNjEyNTZaFw0yNzA3MTUxNjEyNTZaMGcxCzAJBgNVBAYTAlVTMQswCQYD
VQQIEwJDQTEWMBQGA1UEBxMNTW91bnRhaW4gVmlldzEQMA4GA1UEChMHZWxhc3Rp
YzENMAsGA1UECxMEdGVzdDESMBAGA1UEAxMJdGVzdCBub2RlMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnXtuGIgAq6vWzUD34HXkYF+0u103hb8d1h35
kjeuNApkUhS6x/VbuNp7TpWmprfDgG5w9TourHvyiqcQMDEWrBunS6rmKo1jK1Wm
le3qA3F2l9VIZSNeeYQgezmzuElEPPmBjN8XBByIWKYjZcGd5u7DiquPUh9QLIev
itgB2jfi9D8ewyvaSbVAQuQwyIaDN9L74wKyMC8EuzzAWNSDjgIhhwcR5qg17msa
ItyM44/3hik+ObIGpMlLSxQu2V1U9bOaq48JjQBLHVg1vzC9VzGuNdEb8haFnhJN
UrdESdHymbtBSUvy30iB+kHq5R8wQ4pC+WxChQnbA2GskuFrMQIDAQABo4G0MIGx
MIGPBgNVHREEgYcwgYSHBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAGCCWxvY2FsaG9z
dIIVbG9jYWxob3N0LmxvY2FsZG9tYWluggpsb2NhbGhvc3Q0ghdsb2NhbGhvc3Q0
LmxvY2FsZG9tYWluNIIKbG9jYWxob3N0NoIXbG9jYWxob3N0Ni5sb2NhbGRvbWFp
bjYwHQYDVR0OBBYEFFwNcqIKfGBCBGo9faQJ3TsHmp0SMA0GCSqGSIb3DQEBCwUA
A4IBAQBvUJTRjSOf/+vtyS3OokwRilg1ZGF3psg0DWhjH2ehIRfNibU1Y8FVQo3I
VU8LjcIUK1cN85z+AsYqLXo/C4qmJPydQ1tGpQL7uIrPD4h+Xh3tY6A2DKRJRQFO
w2LjswPidGufMztpPbXxLREqvkvn80VkDnc44UPxYfHvZFqYwYyxZccA5mm+BhYu
IerjfvgX+8zMWIQZOd+jRq8EaVTmVK2Azwwhc5ImWfc0DA3pmGPdECzE4N0VVoIJ
N8PCVltXXP3F7K3LoT6CLSiJ3c/IDVNoVS4pRV6R6Y4oIKD9T/T1kAgAvOrUGRWY
ejWQ41GdUmkmxrqCaMbVCO4s72BC
-----END CERTIFICATE-----

Binary file not shown.
@ -25,6 +25,8 @@ apply plugin: 'elasticsearch.build'
targetCompatibility = JavaVersion.VERSION_1_7
sourceCompatibility = JavaVersion.VERSION_1_7

group = "${group}.client.test"

dependencies {
  compile "org.apache.httpcomponents:httpcore:${versions.httpcore}"
  compile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}"
@ -21,6 +21,7 @@ package org.elasticsearch.transport.client;

import io.netty.util.ThreadDeathWatcher;
import io.netty.util.concurrent.GlobalEventExecutor;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.network.NetworkModule;
@ -49,7 +49,7 @@ CopySpec archiveFiles(CopySpec modulesFiles, String distributionType, boolean os
  return copySpec {
    into("elasticsearch-${version}") {
      into('lib') {
        with libFiles
        with libFiles(oss)
      }
      into('config') {
        dirMode 0750
@ -19,14 +19,11 @@

package org.elasticsearch.test.rest;

import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;

import java.io.IOException;

import static java.util.Collections.emptyMap;
import static java.util.Collections.singletonMap;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.startsWith;

@ -49,26 +46,31 @@ public class CreatedLocationHeaderIT extends ESRestTestCase {
    }

    public void testUpsert() throws IOException {
        locationTestCase(client().performRequest("POST", "test/test/1/_update", emptyMap(), new StringEntity("{"
            + "\"doc\": {\"test\": \"test\"},"
            + "\"doc_as_upsert\": true}", ContentType.APPLICATION_JSON)));
        Request request = new Request("POST", "test/test/1/_update");
        request.setJsonEntity("{"
            + "\"doc\": {\"test\": \"test\"},"
            + "\"doc_as_upsert\": true}");
        locationTestCase(client().performRequest(request));
    }

    private void locationTestCase(String method, String url) throws IOException {
        locationTestCase(client().performRequest(method, url, emptyMap(),
            new StringEntity("{\"test\": \"test\"}", ContentType.APPLICATION_JSON)));
        final Request request = new Request(method, url);
        request.setJsonEntity("{\"test\": \"test\"}");
        locationTestCase(client().performRequest(request));
        // we have to delete the index otherwise the second indexing request will route to the single shard and not produce a 201
        final Response response = client().performRequest(new Request("DELETE", "test"));
        assertThat(response.getStatusLine().getStatusCode(), equalTo(200));
        locationTestCase(client().performRequest(method, url + "?routing=cat", emptyMap(),
            new StringEntity("{\"test\": \"test\"}", ContentType.APPLICATION_JSON)));
        final Request withRouting = new Request(method, url);
        withRouting.addParameter("routing", "cat");
        withRouting.setJsonEntity("{\"test\": \"test\"}");
        locationTestCase(client().performRequest(withRouting));
    }

    private void locationTestCase(Response response) throws IOException {
        assertEquals(201, response.getStatusLine().getStatusCode());
        String location = response.getHeader("Location");
        assertThat(location, startsWith("/test/test/"));
        Response getResponse = client().performRequest("GET", location);
        Response getResponse = client().performRequest(new Request("GET", location));
        assertEquals(singletonMap("test", "test"), entityAsMap(getResponse).get("_source"));
    }
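The change to `CreatedLocationHeaderIT` is representative of the migration applied throughout these REST tests: the deprecated multi-argument `performRequest` overloads are replaced by a single `Request` object. A condensed before/after sketch (an editor's illustration, reusing the endpoints from the test above):

```java
// Old style (deprecated): method, endpoint, params and entity passed separately.
//   client().performRequest("PUT", "/test/doc/1", emptyMap(),
//       new StringEntity("{\"test\": \"test\"}", ContentType.APPLICATION_JSON));
// New style: a Request object carries endpoint, parameters and body together.
Request request = new Request("PUT", "/test/doc/1");
request.addParameter("routing", "cat");        // query parameters are added explicitly
request.setJsonEntity("{\"test\": \"test\"}"); // sets the entity and the JSON content type in one call
Response response = client().performRequest(request);
```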
@ -19,13 +19,11 @@

package org.elasticsearch.test.rest;

import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.Request;

import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

@ -39,8 +37,8 @@ public class NodeRestUsageIT extends ESRestTestCase {
    @SuppressWarnings("unchecked")
    public void testWithRestUsage() throws IOException {
        // First get the current usage figures
        Response beforeResponse = client().performRequest("GET",
            randomFrom("_nodes/usage", "_nodes/usage/rest_actions", "_nodes/usage/_all"));
        String path = randomFrom("_nodes/usage", "_nodes/usage/rest_actions", "_nodes/usage/_all");
        Response beforeResponse = client().performRequest(new Request("GET", path));
        Map<String, Object> beforeResponseBodyMap = entityAsMap(beforeResponse);
        assertThat(beforeResponseBodyMap, notNullValue());
        Map<String, Object> before_nodesMap = (Map<String, Object>) beforeResponseBodyMap.get("_nodes");

@ -80,24 +78,24 @@ public class NodeRestUsageIT extends ESRestTestCase {
        }

        // Do some requests to get some rest usage stats
        client().performRequest("PUT", "/test");
        client().performRequest("POST", "/test/doc/1", Collections.emptyMap(),
            new StringEntity("{ \"foo\": \"bar\"}", ContentType.APPLICATION_JSON));
        client().performRequest("POST", "/test/doc/2", Collections.emptyMap(),
            new StringEntity("{ \"foo\": \"bar\"}", ContentType.APPLICATION_JSON));
        client().performRequest("POST", "/test/doc/3", Collections.emptyMap(),
            new StringEntity("{ \"foo\": \"bar\"}", ContentType.APPLICATION_JSON));
        client().performRequest("GET", "/test/_search");
        client().performRequest("POST", "/test/doc/4", Collections.emptyMap(),
            new StringEntity("{ \"foo\": \"bar\"}", ContentType.APPLICATION_JSON));
        client().performRequest("POST", "/test/_refresh");
        client().performRequest("GET", "/_cat/indices");
        client().performRequest("GET", "/_nodes");
        client().performRequest("GET", "/test/_search");
        client().performRequest("GET", "/_nodes/stats");
        client().performRequest("DELETE", "/test");
        client().performRequest(new Request("PUT", "/test"));
        for (int i = 0; i < 3; i++) {
            final Request index = new Request("POST", "/test/doc/1");
            index.setJsonEntity("{\"foo\": \"bar\"}");
            client().performRequest(index);
        }
        client().performRequest(new Request("GET", "/test/_search"));
        final Request index4 = new Request("POST", "/test/doc/4");
        index4.setJsonEntity("{\"foo\": \"bar\"}");
        client().performRequest(index4);
        client().performRequest(new Request("POST", "/test/_refresh"));
        client().performRequest(new Request("GET", "/_cat/indices"));
        client().performRequest(new Request("GET", "/_nodes"));
        client().performRequest(new Request("GET", "/test/_search"));
        client().performRequest(new Request("GET", "/_nodes/stats"));
        client().performRequest(new Request("DELETE", "/test"));

        Response response = client().performRequest("GET", "_nodes/usage");
        Response response = client().performRequest(new Request("GET", "_nodes/usage"));
        Map<String, Object> responseBodyMap = entityAsMap(response);
        assertThat(responseBodyMap, notNullValue());
        Map<String, Object> _nodesMap = (Map<String, Object>) responseBodyMap.get("_nodes");

@ -139,7 +137,7 @@ public class NodeRestUsageIT extends ESRestTestCase {

    public void testMetricsWithAll() throws IOException {
        ResponseException exception = expectThrows(ResponseException.class,
            () -> client().performRequest("GET", "_nodes/usage/_all,rest_actions"));
            () -> client().performRequest(new Request("GET", "_nodes/usage/_all,rest_actions")));
        assertNotNull(exception);
        assertThat(exception.getMessage(), containsString("\"type\":\"illegal_argument_exception\","
            + "\"reason\":\"request [_nodes/usage/_all,rest_actions] contains _all and individual metrics [_all,rest_actions]\""));
@ -20,6 +20,7 @@
package org.elasticsearch.test.rest;

import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.Request;

import java.io.IOException;

@ -28,56 +29,56 @@ import static org.hamcrest.CoreMatchers.containsString;
public class RequestsWithoutContentIT extends ESRestTestCase {

    public void testIndexMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "POST" : "PUT", "/idx/type/123"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "POST" : "PUT", "/idx/type/123")));
        assertResponseException(responseException, "request body is required");
    }

    public void testBulkMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "POST" : "PUT", "/_bulk"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "POST" : "PUT", "/_bulk")));
        assertResponseException(responseException, "request body is required");
    }

    public void testPutSettingsMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            "PUT", "/_settings"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request("PUT", "/_settings")));
        assertResponseException(responseException, "request body is required");
    }

    public void testPutMappingsMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "POST" : "PUT", "/test_index/test_type/_mapping"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "POST" : "PUT", "/test_index/test_type/_mapping")));
        assertResponseException(responseException, "request body is required");
    }

    public void testPutIndexTemplateMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "PUT" : "POST", "/_template/my_template"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "PUT" : "POST", "/_template/my_template")));
        assertResponseException(responseException, "request body is required");
    }

    public void testMultiSearchMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "POST" : "GET", "/_msearch"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "POST" : "GET", "/_msearch")));
        assertResponseException(responseException, "request body or source parameter is required");
    }

    public void testPutPipelineMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            "PUT", "/_ingest/pipeline/my_pipeline"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request("PUT", "/_ingest/pipeline/my_pipeline")));
        assertResponseException(responseException, "request body or source parameter is required");
    }

    public void testSimulatePipelineMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "POST" : "GET", "/_ingest/pipeline/my_pipeline/_simulate"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "POST" : "GET", "/_ingest/pipeline/my_pipeline/_simulate")));
        assertResponseException(responseException, "request body or source parameter is required");
    }

    public void testPutScriptMissingBody() throws IOException {
        ResponseException responseException = expectThrows(ResponseException.class, () -> client().performRequest(
            randomBoolean() ? "POST" : "PUT", "/_scripts/lang"));
        ResponseException responseException = expectThrows(ResponseException.class, () ->
            client().performRequest(new Request(randomBoolean() ? "POST" : "PUT", "/_scripts/lang")));
        assertResponseException(responseException, "request body is required");
    }
@ -19,26 +19,21 @@

package org.elasticsearch.test.rest;

import org.apache.http.HttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.action.ActionFuture;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.ResponseListener;
import org.elasticsearch.client.Request;
import org.junit.After;
import org.junit.Before;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

import static java.util.Collections.emptyMap;

/**
 * Tests that wait for refresh is fired if the index is closed.
 */

@ -46,13 +41,14 @@ public class WaitForRefreshAndCloseTests extends ESRestTestCase {
    @Before
    public void setupIndex() throws IOException {
        try {
            client().performRequest("DELETE", indexName());
            client().performRequest(new Request("DELETE", indexName()));
        } catch (ResponseException e) {
            // If we get an error, it should be because the index doesn't exist
            assertEquals(404, e.getResponse().getStatusLine().getStatusCode());
        }
        client().performRequest("PUT", indexName(), emptyMap(),
            new StringEntity("{\"settings\":{\"refresh_interval\":-1}}", ContentType.APPLICATION_JSON));
        Request request = new Request("PUT", indexName());
        request.setJsonEntity("{\"settings\":{\"refresh_interval\":-1}}");
        client().performRequest(request);
    }

    @After

@ -69,17 +65,20 @@ public class WaitForRefreshAndCloseTests extends ESRestTestCase {
    }

    public void testIndexAndThenClose() throws Exception {
        closeWhileListenerEngaged(start("PUT", "", new StringEntity("{\"test\":\"test\"}", ContentType.APPLICATION_JSON)));
        closeWhileListenerEngaged(start("PUT", "", "{\"test\":\"test\"}"));
    }

    public void testUpdateAndThenClose() throws Exception {
        client().performRequest("PUT", docPath(), emptyMap(), new StringEntity("{\"test\":\"test\"}", ContentType.APPLICATION_JSON));
        closeWhileListenerEngaged(start("POST", "/_update",
            new StringEntity("{\"doc\":{\"name\":\"test\"}}", ContentType.APPLICATION_JSON)));
        Request request = new Request("PUT", docPath());
        request.setJsonEntity("{\"test\":\"test\"}");
        client().performRequest(request);
        closeWhileListenerEngaged(start("POST", "/_update", "{\"doc\":{\"name\":\"test\"}}"));
    }

    public void testDeleteAndThenClose() throws Exception {
        client().performRequest("PUT", docPath(), emptyMap(), new StringEntity("{\"test\":\"test\"}", ContentType.APPLICATION_JSON));
        Request request = new Request("PUT", docPath());
        request.setJsonEntity("{\"test\":\"test\"}");
        client().performRequest(request);
        closeWhileListenerEngaged(start("DELETE", "", null));
    }

@ -88,7 +87,7 @@ public class WaitForRefreshAndCloseTests extends ESRestTestCase {
        assertBusy(() -> {
            Map<String, Object> stats;
            try {
                stats = entityAsMap(client().performRequest("GET", indexName() + "/_stats/refresh"));
                stats = entityAsMap(client().performRequest(new Request("GET", indexName() + "/_stats/refresh")));
            } catch (IOException e) {
                throw new RuntimeException(e);
            }

@ -105,18 +104,19 @@ public class WaitForRefreshAndCloseTests extends ESRestTestCase {
        });

        // Close the index. That should flush the listener.
        client().performRequest("POST", indexName() + "/_close");
        client().performRequest(new Request("POST", indexName() + "/_close"));

        // The request shouldn't fail. It certainly shouldn't hang.
        future.get();
    }

    private ActionFuture<String> start(String method, String path, HttpEntity body) {
    private ActionFuture<String> start(String method, String path, String body) {
        PlainActionFuture<String> future = new PlainActionFuture<>();
        Map<String, String> params = new HashMap<>();
        params.put("refresh", "wait_for");
        params.put("error_trace", "");
        client().performRequestAsync(method, docPath() + path, params, body, new ResponseListener() {
        Request request = new Request(method, docPath() + path);
        request.addParameter("refresh", "wait_for");
        request.addParameter("error_trace", "");
        request.setJsonEntity(body);
        client().performRequestAsync(request, new ResponseListener() {
            @Override
            public void onSuccess(Response response) {
                try {
@ -227,16 +227,24 @@ configure(subprojects.findAll { ['archives', 'packages'].contains(it.name) }) {
  /*****************************************************************************
   *                   Common files in all distributions                       *
   *****************************************************************************/
  libFiles = copySpec {
    // delay by using closures, since they have not yet been configured, so no jar task exists yet
    from { project(':server').jar }
    from { project(':server').configurations.runtime }
    from { project(':libs:plugin-classloader').jar }
    from { project(':distribution:tools:java-version-checker').jar }
    from { project(':distribution:tools:launchers').jar }
    into('tools/plugin-cli') {
      from { project(':distribution:tools:plugin-cli').jar }
      from { project(':distribution:tools:plugin-cli').configurations.runtime }
  libFiles = { oss ->
    copySpec {
      // delay by using closures, since they have not yet been configured, so no jar task exists yet
      from { project(':server').jar }
      from { project(':server').configurations.runtime }
      from { project(':libs:plugin-classloader').jar }
      from { project(':distribution:tools:java-version-checker').jar }
      from { project(':distribution:tools:launchers').jar }
      into('tools/plugin-cli') {
        from { project(':distribution:tools:plugin-cli').jar }
        from { project(':distribution:tools:plugin-cli').configurations.runtime }
      }
      if (oss == false) {
        into('tools/security-cli') {
          from { project(':x-pack:plugin:security:cli').jar }
          from { project(':x-pack:plugin:security:cli').configurations.compile }
        }
      }
    }
  }
@ -125,32 +125,22 @@ Closure commonPackageConfig(String type, boolean oss) {
          fileMode 0644
        }
        into('lib') {
          with copySpec {
            with libFiles
            // we need to specify every intermediate directory so we iterate through the parents; duplicate calls with the same part are fine
            eachFile { FileCopyDetails fcp ->
              String[] segments = fcp.relativePath.segments
              for (int i = segments.length - 2; i > 0 && segments[i] != 'lib'; --i) {
                directory('/' + segments[0..i].join('/'), 0755)
              }
              fcp.mode = 0644
            }
          }
          with libFiles(oss)
        }
        into('modules') {
          with copySpec {
            with modulesFiles(oss)
            // we need to specify every intermediate directory so we iterate through the parents; duplicate calls with the same part are fine
            eachFile { FileCopyDetails fcp ->
              String[] segments = fcp.relativePath.segments
              for (int i = segments.length - 2; i > 0 && segments[i] != 'modules'; --i) {
                directory('/' + segments[0..i].join('/'), 0755)
              }
              if (segments[-2] == 'bin') {
                fcp.mode = 0755
              } else {
                fcp.mode = 0644
              }
          with modulesFiles(oss)
        }
        // we need to specify every intermediate directory in these paths so the package managers know they are explicitly
        // intended to manage them; otherwise they may be left behind on uninstallation. duplicate calls of the same
        // directory are fine
        eachFile { FileCopyDetails fcp ->
          String[] segments = fcp.relativePath.segments
          for (int i = segments.length - 2; i > 2; --i) {
            directory('/' + segments[0..i].join('/'), 0755)
            if (segments[-2] == 'bin') {
              fcp.mode = 0755
            } else {
              fcp.mode = 0644
            }
          }
        }

@ -333,12 +323,6 @@ Closure commonRpmConfig(boolean oss) {

    // without this the rpm will have parent dirs of any files we copy in, eg /etc/elasticsearch
    addParentDirs false

    // Declare the folders so that the RPM package manager removes
    // them when upgrading or removing the package
    directory('/usr/share/elasticsearch/bin', 0755)
    directory('/usr/share/elasticsearch/lib', 0755)
    directory('/usr/share/elasticsearch/modules', 0755)
  }
}
@ -100,3 +100,6 @@ ${error.file}
# due to internationalization enhancements in JDK 9 Elasticsearch needs to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locales
9-:-Djava.locale.providers=COMPAT

# temporary workaround for C2 bug with JDK 10 on hardware with AVX-512
10-:-XX:UseAVX=2
@ -7,13 +7,13 @@ logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy

@ -38,7 +38,7 @@ appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy

@ -55,7 +55,7 @@ appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy

@ -72,7 +72,7 @@ appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%i.log.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
@ -61,6 +61,7 @@ import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.PosixPermissionsResetter;
import org.junit.After;
import org.junit.Before;
import org.junit.BeforeClass;

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;

@ -139,6 +140,11 @@ public class InstallPluginCommandTests extends ESTestCase {
        System.setProperty("java.io.tmpdir", temp.apply("tmpdir").toString());
    }

    @BeforeClass
    public static void testIfFipsMode() {
        assumeFalse("Can't run in a FIPS JVM because this depends on BouncyCastle (non-fips)", inFipsJvm());
    }

    @Override
    @Before
    public void setUp() throws Exception {
@ -1,7 +1,7 @@
:version: 7.0.0-alpha1
:major-version: 7.x
:lucene_version: 7.4.0
:lucene_version_path: 7_4_0
:lucene_version: 7.5.0
:lucene_version_path: 7_5_0
:branch: master
:jdk: 1.8.0_131
:jdk_major: 8
@ -379,9 +379,9 @@ buildRestTests.setups['exams'] = '''
        refresh: true
        body: |
          {"index":{}}
          {"grade": 100}
          {"grade": 100, "weight": 2}
          {"index":{}}
          {"grade": 50}'''
          {"grade": 50, "weight": 3}'''

buildRestTests.setups['stored_example_script'] = '''
# Simple script to load a field. Not really a good example, but a simple one.
@ -2,17 +2,22 @@

==== Put Mapping

The PUT mapping API allows you to add a new type while creating an index:
You can add mappings for a new type at index creation time:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{client-tests}/IndicesDocumentationIT.java[index-with-mapping]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> It also adds a `tweet` mapping type.
<2> Add a `tweet` type with a field called `message` that has the datatype `text`.

There are several variants of the above `addMapping` method, some taking an
`XContentBuilder` or a `Map` with the mapping definition as arguments. Make sure
to check the javadocs to pick the simplest one for your use case.
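For orientation, a minimal sketch of what such a call can look like with the
transport client; the index, type and field names are illustrative only:

["source","java"]
--------------------------------------------------
// Minimal sketch, assuming a connected transport `client`; names are illustrative
CreateIndexResponse createResponse = client.admin().indices()
        .prepareCreate("twitter")                     // create an index called "twitter"
        .addMapping("tweet", "message", "type=text")  // add a "tweet" type with a text "message" field
        .get();
--------------------------------------------------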

The PUT mapping API also allows to add a new type to an existing index:
The PUT mapping API also allows you to specify the mapping of a type after index
creation. In this case you can provide the mapping as a String similar to the
REST API syntax:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------

@ -0,0 +1,65 @@
[[java-rest-high-put-license]]
=== Update License

[[java-rest-high-put-license-execution]]
==== Execution

The license can be added or updated using the `putLicense()` method:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute]
--------------------------------------------------
<1> Set the `acknowledge` flag. The default is `false`; in that case a license
update that would reduce the available features is not applied, and acknowledge
messages are returned instead.
<2> A JSON document containing the license information.
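As a rough sketch of the shape of such a request (how the licensing client is
obtained from the high-level client, and the `licenseJson` variable, are
assumptions, not part of this documentation):

["source","java"]
--------------------------------------------------
// Hedged sketch; `licenseClient` and `licenseJson` are assumed to exist
PutLicenseRequest request = new PutLicenseRequest();
request.setLicenseDefinition(licenseJson);  // the license itself, as a JSON string
request.setAcknowledge(false);              // require explicit acknowledgement of feature changes
PutLicenseResponse response = licenseClient.putLicense(request, RequestOptions.DEFAULT);
--------------------------------------------------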

[[java-rest-high-put-license-response]]
==== Response

The returned `PutLicenseResponse` contains the `LicensesStatus`, the
`acknowledged` flag and possible acknowledge messages. The acknowledge messages
are present if you previously had a license with more features than the one you
are trying to install and you did not set the `acknowledge` flag to `true`. In this
case you need to display the messages to the end user and, if they agree, resubmit
the license with the `acknowledge` flag set to `true`. Note that the request will
still return a 200 response code even if it requires an acknowledgement, so it is
necessary to check the `acknowledged` flag.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-response]
--------------------------------------------------
<1> The status of the license
<2> Make sure that the license is valid.
<3> Check the acknowledge flag. It should be `true` if the license is acknowledged.
<4> Otherwise we can see the acknowledge messages in `acknowledgeHeader()`
<5> and check component-specific messages in `acknowledgeMessages()`.
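A hedged sketch of inspecting the response, based on the accessors named in the
callouts above:

["source","java"]
--------------------------------------------------
LicensesStatus status = response.status();               // e.g. VALID, INVALID or EXPIRED
if (status == LicensesStatus.VALID && response.isAcknowledged() == false) {
    // the license change needs acknowledgement: show the messages to the user,
    // then resubmit the request with the acknowledge flag set to true
    String header = response.acknowledgeHeader();
    Map<String, String[]> messages = response.acknowledgeMessages();
}
--------------------------------------------------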

[[java-rest-high-put-license-async]]
==== Asynchronous Execution

This request can be executed asynchronously:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute-async]
--------------------------------------------------
<1> The `PutLicenseRequest` to execute and the `ActionListener` to use when
the execution completes

The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.

A typical listener for `PutLicenseResponse` looks like:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of failure. The raised exception is provided as an argument
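A sketch of such a listener, using the standard `ActionListener` contract:

["source","java"]
--------------------------------------------------
ActionListener<PutLicenseResponse> listener = new ActionListener<PutLicenseResponse>() {
    @Override
    public void onResponse(PutLicenseResponse response) {
        // called when the request completed successfully
    }

    @Override
    public void onFailure(Exception e) {
        // called when the request failed
    }
};
--------------------------------------------------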

@ -73,11 +73,22 @@ include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-r
[[java-rest-high-snapshot-create-snapshot-sync]]
==== Synchronous Execution

Execute a `CreateSnapshotRequest` synchronously to receive a `CreateSnapshotResponse`.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-execute]
--------------------------------------------------

Retrieve the `SnapshotInfo` from a `CreateSnapshotResponse` when the snapshot is
fully created (the `waitForCompletion` parameter is `true`):

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[create-snapshot-response-snapshot-info]
--------------------------------------------------
<1> The `SnapshotInfo` object.
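A hedged sketch of the synchronous flow; the repository and snapshot names are
illustrative, and the exact method name on the snapshot client is an assumption:

["source","java"]
--------------------------------------------------
CreateSnapshotRequest request = new CreateSnapshotRequest("my_repository", "my_snapshot");
request.waitForCompletion(true);  // block until the snapshot is fully created
CreateSnapshotResponse response =
        client.snapshot().create(request, RequestOptions.DEFAULT);  // method name assumed
SnapshotInfo snapshotInfo = response.getSnapshotInfo();  // only populated when waitForCompletion is true
--------------------------------------------------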

[[java-rest-high-snapshot-create-snapshot-async]]
==== Asynchronous Execution


@ -0,0 +1,144 @@
[[java-rest-high-snapshot-restore-snapshot]]
=== Restore Snapshot API

The Restore Snapshot API allows you to restore a snapshot.

[[java-rest-high-snapshot-restore-snapshot-request]]
==== Restore Snapshot Request

A `RestoreSnapshotRequest`:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request]
--------------------------------------------------

==== Limiting Indices to Restore

By default all indices are restored. With the `indices` property you can
provide a list of indices that should be restored:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-indices]
--------------------------------------------------
<1> Request that Elasticsearch only restores "test_index".

==== Renaming Indices

You can rename indices using regular expressions when restoring a snapshot:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-rename]
--------------------------------------------------
<1> A regular expression matching the indices that should be renamed.
<2> A replacement pattern that references the group from the regular
expression as `$1`. "test_index" from the snapshot is restored as
"restored_index" in this example.

==== Index Settings and Options

You can also customize index settings and options when restoring:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-index-settings]
--------------------------------------------------
<1> Use `#indexSettings()` to set any specific index setting for the indices
that are restored.
<2> Use `#ignoreIndexSettings()` to provide index settings that should be
ignored from the original indices.
<3> Set `IndicesOptions.Option.IGNORE_UNAVAILABLE` in `#indicesOptions()` to
have the restore succeed even if indices are missing in the snapshot.

==== Further Arguments

The following arguments can optionally be provided:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-masterTimeout]
--------------------------------------------------
<1> Timeout to connect to the master node as a `TimeValue`
<2> Timeout to connect to the master node as a `String`

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-waitForCompletion]
--------------------------------------------------
<1> Boolean indicating whether to wait until the snapshot has been restored.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-partial]
--------------------------------------------------
<1> Boolean indicating whether the entire restore should succeed even if one
or more indices participating in the snapshot don't have all primary
shards available.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-include-global-state]
--------------------------------------------------
<1> Boolean indicating whether restored templates that don't currently exist
in the cluster are added and existing templates with the same name are
replaced by the restored templates. The restored persistent settings are
added to the existing persistent settings.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-request-include-aliases]
--------------------------------------------------
<1> Boolean to control whether aliases should be restored. Set to `false` to
prevent aliases from being restored together with associated indices.
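The same arguments expressed as a sketch against the request built above:

["source","java"]
--------------------------------------------------
request.masterNodeTimeout(TimeValue.timeValueMinutes(1));  // timeout to connect to the master node
request.waitForCompletion(true);    // wait until the restore has finished
request.partial(false);             // fail if some primary shards are unavailable
request.includeGlobalState(false);  // do not restore templates or persistent settings
request.includeAliases(false);      // do not restore aliases along with the indices
--------------------------------------------------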

[[java-rest-high-snapshot-restore-snapshot-sync]]
==== Synchronous Execution

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-execute]
--------------------------------------------------

[[java-rest-high-snapshot-restore-snapshot-async]]
==== Asynchronous Execution

The asynchronous execution of a restore snapshot request requires both the
`RestoreSnapshotRequest` instance and an `ActionListener` instance to be
passed to the asynchronous method:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-execute-async]
--------------------------------------------------
<1> The `RestoreSnapshotRequest` to execute and the `ActionListener`
to use when the execution completes

The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.

A typical listener for `RestoreSnapshotResponse` looks like:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument.
<2> Called in case of a failure. The raised exception is provided as an argument.

[[java-rest-high-cluster-restore-snapshot-response]]
==== Restore Snapshot Response

The returned `RestoreSnapshotResponse` allows you to retrieve information about the
executed operation as follows:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SnapshotClientDocumentationIT.java[restore-snapshot-response]
--------------------------------------------------
<1> The `RestoreInfo` contains details about the restored snapshot like the indices or
the number of successfully restored and failed shards.
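A hedged sketch of the synchronous call and response handling; the method name
on the snapshot client is an assumption:

["source","java"]
--------------------------------------------------
RestoreSnapshotResponse response =
        client.snapshot().restore(request, RequestOptions.DEFAULT);  // method name assumed
RestoreInfo restoreInfo = response.getRestoreInfo();  // null unless waitForCompletion was true
List<String> indices = restoreInfo.indices();         // the indices that were restored
int successfulShards = restoreInfo.successfulShards();
--------------------------------------------------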

@ -54,10 +54,14 @@ The Java High Level REST Client supports the following Miscellaneous APIs:
* <<java-rest-high-main>>
* <<java-rest-high-ping>>
* <<java-rest-high-x-pack-info>>
* <<java-rest-high-x-pack-watcher-put-watch>>
* <<java-rest-high-x-pack-watcher-delete-watch>>

include::miscellaneous/main.asciidoc[]
include::miscellaneous/ping.asciidoc[]
include::miscellaneous/x-pack-info.asciidoc[]
include::x-pack/x-pack-info.asciidoc[]
include::x-pack/watcher/put-watch.asciidoc[]
include::x-pack/watcher/delete-watch.asciidoc[]

== Indices APIs

@ -185,3 +189,12 @@ The Java High Level REST Client supports the following Scripts APIs:

include::script/get_script.asciidoc[]
include::script/delete_script.asciidoc[]


== Licensing APIs

The Java High Level REST Client supports the following Licensing APIs:

* <<java-rest-high-put-license>>

include::licensing/put-license.asciidoc[]
@ -0,0 +1,53 @@
[[java-rest-high-x-pack-watcher-delete-watch]]
=== X-Pack Delete Watch API

[[java-rest-high-x-pack-watcher-delete-watch-execution]]
==== Execution

A watch can be deleted as follows:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-execute]
--------------------------------------------------
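A hedged sketch of what the call can look like; the watch id and the way the
watcher client is obtained are assumptions:

["source","java"]
--------------------------------------------------
// `watcherClient` is assumed to be obtained from the high-level REST client
DeleteWatchRequest request = new DeleteWatchRequest("my_watch_id");  // illustrative id
DeleteWatchResponse response = watcherClient.deleteWatch(request, RequestOptions.DEFAULT);
--------------------------------------------------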

[[java-rest-high-x-pack-watcher-delete-watch-response]]
==== Response

The returned `DeleteWatchResponse` contains `found`, `id`,
and `version` information.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-response]
--------------------------------------------------
<1> `_id` contains the id of the watch
<2> `found` is a boolean indicating whether the watch was found
<3> `_version` returns the version of the deleted watch

[[java-rest-high-x-pack-watcher-delete-watch-async]]
==== Asynchronous Execution

This request can be executed asynchronously:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-execute-async]
--------------------------------------------------
<1> The `DeleteWatchRequest` to execute and the `ActionListener` to use when
the execution completes

The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.

A typical listener for `DeleteWatchResponse` looks like:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-delete-watch-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of failure. The raised exception is provided as an argument
@ -0,0 +1,55 @@
[[java-rest-high-x-pack-watcher-put-watch]]
=== X-Pack Put Watch API

[[java-rest-high-x-pack-watcher-put-watch-execution]]
==== Execution

A watch can be added or updated as follows:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-execute]
--------------------------------------------------
<1> Allows the watch to be stored, but not triggered. Defaults to `true`
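A hedged sketch of constructing such a request; the watch id, the watch source
and the way the watcher client is obtained are assumptions:

["source","java"]
--------------------------------------------------
// `watcherClient` is assumed to be obtained from the high-level REST client
BytesReference watchSource = new BytesArray(
        "{ \"trigger\": { \"schedule\": { \"interval\": \"10h\" } }, " +
        "  \"input\": { \"simple\": { \"foo\": \"bar\" } }, " +
        "  \"actions\": { \"logme\": { \"logging\": { \"text\": \"{{ctx.payload}}\" } } } }");
PutWatchRequest request = new PutWatchRequest("my_watch_id", watchSource, XContentType.JSON);
request.setActive(false);  // store the watch, but do not trigger it
PutWatchResponse response = watcherClient.putWatch(request, RequestOptions.DEFAULT);
--------------------------------------------------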

[[java-rest-high-x-pack-watcher-put-watch-response]]
==== Response

The returned `PutWatchResponse` contains `created`, `id`,
and `version` information.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-response]
--------------------------------------------------
<1> `_id` contains the id of the watch
<2> `created` is a boolean indicating whether the watch was created for the first time
<3> `_version` returns the newly created version

[[java-rest-high-x-pack-watcher-put-watch-async]]
==== Asynchronous Execution

This request can be executed asynchronously:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-execute-async]
--------------------------------------------------
<1> The `PutWatchRequest` to execute and the `ActionListener` to use when
the execution completes

The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.

A typical listener for `PutWatchResponse` looks like:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/WatcherDocumentationIT.java[x-pack-put-watch-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of failure. The raised exception is provided as an argument
@ -0,0 +1,54 @@
[[java-rest-high-x-pack-usage]]
=== X-Pack Usage API

[[java-rest-high-x-pack-usage-execution]]
==== Execution

Detailed information about the usage of features from {xpack} can be
retrieved using the `usage()` method:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-execute]
--------------------------------------------------

[[java-rest-high-x-pack-usage-response]]
==== Response

The returned `XPackUsageResponse` contains a `Map` keyed by feature name.
Every feature map has an `available` key, indicating whether that
feature is available given the current license, and an `enabled` key,
indicating whether that feature is currently enabled. Other keys
are specific to each feature.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-response]
--------------------------------------------------
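A hedged sketch of reading the returned map; the `monitoring` key is just one
example of a feature name:

["source","java"]
--------------------------------------------------
Map<String, Map<String, Object>> usages = response.getUsages();  // one entry per feature
Map<String, Object> monitoringUsage = usages.get("monitoring");  // illustrative feature name
boolean available = (Boolean) monitoringUsage.get("available");  // usable under the current license?
boolean enabled = (Boolean) monitoringUsage.get("enabled");      // currently switched on?
--------------------------------------------------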

[[java-rest-high-x-pack-usage-async]]
==== Asynchronous Execution

This request can be executed asynchronously:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-execute-async]
--------------------------------------------------
<1> The call to execute the usage API and the `ActionListener` to use when
the execution completes

The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.

A typical listener for `XPackUsageResponse` looks like:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MiscellaneousDocumentationIT.java[x-pack-usage-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of failure. The raised exception is provided as an argument
@ -1,9 +1,6 @@
[[painless-contexts]]
== Painless contexts

:es_version: https://www.elastic.co/guide/en/elasticsearch/reference/master
:xp_version: https://www.elastic.co/guide/en/x-pack/current

A Painless script is evaluated within a context. Each context has values that
are available as local variables, a whitelist that controls the available
classes, and the methods and fields within those classes (API), and
@ -18,41 +15,41 @@ specialized code may define new ways to use a Painless script.
| Name | Painless Documentation
| Elasticsearch Documentation
| Update | <<painless-update-context, Painless Documentation>>
| {es_version}/docs-update.html[Elasticsearch Documentation]
| {ref}/docs-update.html[Elasticsearch Documentation]
| Update by query | <<painless-update-by-query-context, Painless Documentation>>
| {es_version}/docs-update-by-query.html[Elasticsearch Documentation]
| {ref}/docs-update-by-query.html[Elasticsearch Documentation]
| Reindex | <<painless-reindex-context, Painless Documentation>>
| {es_version}/docs-reindex.html[Elasticsearch Documentation]
| {ref}/docs-reindex.html[Elasticsearch Documentation]
| Sort | <<painless-sort-context, Painless Documentation>>
| {es_version}/search-request-sort.html[Elasticsearch Documentation]
| {ref}/search-request-sort.html[Elasticsearch Documentation]
| Similarity | <<painless-similarity-context, Painless Documentation>>
| {es_version}/index-modules-similarity.html[Elasticsearch Documentation]
| Weight | <<painless-similarity-context, Painless Documentation>>
| {es_version}/index-modules-similarity.html[Elasticsearch Documentation]
| {ref}/index-modules-similarity.html[Elasticsearch Documentation]
| Weight | <<painless-weight-context, Painless Documentation>>
| {ref}/index-modules-similarity.html[Elasticsearch Documentation]
| Score | <<painless-score-context, Painless Documentation>>
| {es_version}/query-dsl-function-score-query.html[Elasticsearch Documentation]
| {ref}/query-dsl-function-score-query.html[Elasticsearch Documentation]
| Field | <<painless-field-context, Painless Documentation>>
| {es_version}/search-request-script-fields.html[Elasticsearch Documentation]
| {ref}/search-request-script-fields.html[Elasticsearch Documentation]
| Filter | <<painless-filter-context, Painless Documentation>>
| {es_version}/query-dsl-script-query.html[Elasticsearch Documentation]
| {ref}/query-dsl-script-query.html[Elasticsearch Documentation]
| Minimum should match | <<painless-min-should-match-context, Painless Documentation>>
| {es_version}/query-dsl-terms-set-query.html[Elasticsearch Documentation]
| {ref}/query-dsl-terms-set-query.html[Elasticsearch Documentation]
| Metric aggregation initialization | <<painless-metric-agg-init-context, Painless Documentation>>
| {es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| Metric aggregation map | <<painless-metric-agg-map-context, Painless Documentation>>
| {es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| Metric aggregation combine | <<painless-metric-agg-combine-context, Painless Documentation>>
| {es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| Metric aggregation reduce | <<painless-metric-agg-reduce-context, Painless Documentation>>
| {es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| {ref}/search-aggregations-metrics-scripted-metric-aggregation.html[Elasticsearch Documentation]
| Bucket aggregation | <<painless-bucket-agg-context, Painless Documentation>>
| {es_version}/search-aggregations-pipeline-bucket-script-aggregation.html[Elasticsearch Documentation]
| {ref}/search-aggregations-pipeline-bucket-script-aggregation.html[Elasticsearch Documentation]
| Ingest processor | <<painless-ingest-processor-context, Painless Documentation>>
| {es_version}/script-processor.html[Elasticsearch Documentation]
| {ref}/script-processor.html[Elasticsearch Documentation]
| Watcher condition | <<painless-watcher-condition-context, Painless Documentation>>
| {xp_version}/condition-script.html[Elasticsearch Documentation]
| {xpack-ref}/condition-script.html[Elasticsearch Documentation]
| Watcher transform | <<painless-watcher-transform-context, Painless Documentation>>
| {xp_version}/transform-script.html[Elasticsearch Documentation]
| {xpack-ref}/transform-script.html[Elasticsearch Documentation]
|====

include::painless-contexts/index.asciidoc[]
@ -2,7 +2,7 @@
=== Bucket aggregation context

Use a Painless script in an
{es_version}/search-aggregations-pipeline-bucket-script-aggregation.html[bucket aggregation]
{ref}/search-aggregations-pipeline-bucket-script-aggregation.html[bucket aggregation]
to calculate a value as a result in a bucket.

*Variables*

@ -2,7 +2,7 @@
=== Field context

Use a Painless script to create a
{es_version}/search-request-script-fields.html[script field] to return
{ref}/search-request-script-fields.html[script field] to return
a customized value for each document in the results of a query.

*Variables*

@ -14,7 +14,7 @@ a customized value for each document in the results of a query.
Contains the fields of the specified document where each field is a
`List` of values.

{es_version}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
Contains extracted JSON in a `Map` and `List` structure for the fields
existing in a stored document.
@ -1,7 +1,7 @@
[[painless-filter-context]]
=== Filter context

Use a Painless script as a {es_version}/query-dsl-script-query.html[filter] in a
Use a Painless script as a {ref}/query-dsl-script-query.html[filter] in a
query to include and exclude documents.


@ -1,7 +1,7 @@
[[painless-ingest-processor-context]]
=== Ingest processor context

Use a Painless script in an {es_version}/script-processor.html[ingest processor]
Use a Painless script in an {ref}/script-processor.html[ingest processor]
to modify documents upon insertion.

*Variables*

@ -9,10 +9,10 @@ to modify documents upon insertion.
`params` (`Map`, read-only)::
User-defined parameters passed in as part of the query.

{es_version}/mapping-index-field.html[`ctx['_index']`] (`String`)::
{ref}/mapping-index-field.html[`ctx['_index']`] (`String`)::
The name of the index.

{es_version}/mapping-type-field.html[`ctx['_type']`] (`String`)::
{ref}/mapping-type-field.html[`ctx['_type']`] (`String`)::
The type of document within an index.

`ctx` (`Map`)::

@ -21,10 +21,10 @@ to modify documents upon insertion.

*Side Effects*

{es_version}/mapping-index-field.html[`ctx['_index']`]::
{ref}/mapping-index-field.html[`ctx['_index']`]::
Modify this to change the destination index for the current document.

{es_version}/mapping-type-field.html[`ctx['_type']`]::
{ref}/mapping-type-field.html[`ctx['_type']`]::
Modify this to change the type for the current document.

`ctx` (`Map`, read-only)::
@ -2,7 +2,7 @@
=== Metric aggregation combine context

Use a Painless script to
{es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[combine]
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[combine]
values for use in a scripted metric aggregation. A combine script is run once
per shard following a <<painless-metric-agg-map-context, map script>> and is
optional as part of a full metric aggregation.

@ -2,7 +2,7 @@
=== Metric aggregation initialization context

Use a Painless script to
{es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[initialize]
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[initialize]
values for use in a scripted metric aggregation. An initialization script is
run prior to document collection once per shard and is optional as part of the
full metric aggregation.

@ -2,7 +2,7 @@
=== Metric aggregation map context

Use a Painless script to
{es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[map]
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[map]
values for use in a scripted metric aggregation. A map script is run once per
collected document following an optional
<<painless-metric-agg-init-context, initialization script>> and is required as

@ -2,7 +2,7 @@
=== Metric aggregation reduce context

Use a Painless script to
{es_version}/search-aggregations-metrics-scripted-metric-aggregation.html[reduce]
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[reduce]
values to produce the result of a scripted metric aggregation. A reduce script
is run once on the coordinating node following a
<<painless-metric-agg-combine-context, combine script>> (or a
@ -2,7 +2,7 @@
=== Minimum should match context

Use a Painless script to specify the
{es_version}/query-dsl-terms-set-query.html[minimum] number of terms that a
{ref}/query-dsl-terms-set-query.html[minimum] number of terms that a
specified field needs to match with for a document to be part of the query
results.


@ -1,7 +1,7 @@
[[painless-reindex-context]]
=== Reindex context

Use a Painless script in a {es_version}/docs-reindex.html[reindex] operation to
Use a Painless script in a {ref}/docs-reindex.html[reindex] operation to
add, modify, or delete fields within each document in an original index as it is
reindexed into a target index.


@ -13,22 +13,22 @@ reindexed into a target index.
`ctx['_op']` (`String`)::
The name of the operation.

{es_version}/mapping-routing-field.html[`ctx['_routing']`] (`String`)::
{ref}/mapping-routing-field.html[`ctx['_routing']`] (`String`)::
The value used to select a shard for document storage.

{es_version}/mapping-index-field.html[`ctx['_index']`] (`String`)::
{ref}/mapping-index-field.html[`ctx['_index']`] (`String`)::
The name of the index.

{es_version}/mapping-type-field.html[`ctx['_type']`] (`String`)::
{ref}/mapping-type-field.html[`ctx['_type']`] (`String`)::
The type of document within an index.

{es_version}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only)::
{ref}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only)::
The unique document id.

`ctx['_version']` (`int`)::
The current version of the document.

{es_version}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
Contains extracted JSON in a `Map` and `List` structure for the fields
existing in a stored document.
@ -39,22 +39,22 @@ reindexed into a target index.
specify no operation or `delete` to delete the current document from
the index.

{es_version}/mapping-routing-field.html[`ctx['_routing']`]::
{ref}/mapping-routing-field.html[`ctx['_routing']`]::
Modify this to change the routing value for the current document.

{es_version}/mapping-index-field.html[`ctx['_index']`]::
{ref}/mapping-index-field.html[`ctx['_index']`]::
Modify this to change the destination index for the current document.

{es_version}/mapping-type-field.html[`ctx['_type']`]::
{ref}/mapping-type-field.html[`ctx['_type']`]::
Modify this to change the type for the current document.

{es_version}/mapping-id-field.html[`ctx['_id']`]::
{ref}/mapping-id-field.html[`ctx['_id']`]::
Modify this to change the id for the current document.

`ctx['_version']` (`int`)::
Modify this to modify the version for the current document.

{es_version}/mapping-source-field.html[`ctx['_source']`]::
{ref}/mapping-source-field.html[`ctx['_source']`]::
Modify the values in the `Map/List` structure to add, modify, or delete
the fields of a document.


@ -2,7 +2,7 @@
=== Score context

Use a Painless script in a
{es_version}/query-dsl-function-score-query.html[function score] to apply a new
{ref}/query-dsl-function-score-query.html[function score] to apply a new
score to documents returned from a query.

*Variables*

@ -2,7 +2,7 @@
=== Similarity context

Use a Painless script to create a
{es_version}/index-modules-similarity.html[similarity] equation for scoring
{ref}/index-modules-similarity.html[similarity] equation for scoring
documents in a query.

*Variables*
@ -2,7 +2,7 @@
=== Sort context

Use a Painless script to
{es_version}/search-request-sort.html[sort] the documents in a query.
{ref}/search-request-sort.html[sort] the documents in a query.

*Variables*


@ -2,7 +2,7 @@
=== Update by query context

Use a Painless script in an
{es_version}/docs-update-by-query.html[update by query] operation to add,
{ref}/docs-update-by-query.html[update by query] operation to add,
modify, or delete fields within each of a set of documents collected as the
result of a query.

@ -14,22 +14,22 @@ result of query.
`ctx['_op']` (`String`)::
The name of the operation.

{es_version}/mapping-routing-field.html[`ctx['_routing']`] (`String`, read-only)::
{ref}/mapping-routing-field.html[`ctx['_routing']`] (`String`, read-only)::
The value used to select a shard for document storage.

{es_version}/mapping-index-field.html[`ctx['_index']`] (`String`, read-only)::
{ref}/mapping-index-field.html[`ctx['_index']`] (`String`, read-only)::
The name of the index.

{es_version}/mapping-type-field.html[`ctx['_type']`] (`String`, read-only)::
{ref}/mapping-type-field.html[`ctx['_type']`] (`String`, read-only)::
The type of document within an index.

{es_version}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only)::
{ref}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only)::
The unique document id.

`ctx['_version']` (`int`, read-only)::
The current version of the document.

{es_version}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
Contains extracted JSON in a `Map` and `List` structure for the fields
existing in a stored document.

@ -40,7 +40,7 @@ result of query.
specify no operation or `delete` to delete the current document from
the index.

{es_version}/mapping-source-field.html[`ctx['_source']`]::
{ref}/mapping-source-field.html[`ctx['_source']`]::
Modify the values in the `Map/List` structure to add, modify, or delete
the fields of a document.
@ -1,7 +1,7 @@
[[painless-update-context]]
=== Update context

Use a Painless script in an {es_version}/docs-update.html[update] operation to
Use a Painless script in an {ref}/docs-update.html[update] operation to
add, modify, or delete fields within a single document.

*Variables*

@ -12,16 +12,16 @@ add, modify, or delete fields within a single document.
`ctx['_op']` (`String`)::
The name of the operation.

{es_version}/mapping-routing-field.html[`ctx['_routing']`] (`String`, read-only)::
{ref}/mapping-routing-field.html[`ctx['_routing']`] (`String`, read-only)::
The value used to select a shard for document storage.

{es_version}/mapping-index-field.html[`ctx['_index']`] (`String`, read-only)::
{ref}/mapping-index-field.html[`ctx['_index']`] (`String`, read-only)::
The name of the index.

{es_version}/mapping-type-field.html[`ctx['_type']`] (`String`, read-only)::
{ref}/mapping-type-field.html[`ctx['_type']`] (`String`, read-only)::
The type of document within an index.

{es_version}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only)::
{ref}/mapping-id-field.html[`ctx['_id']`] (`int`, read-only)::
The unique document id.

`ctx['_version']` (`int`, read-only)::

@ -30,7 +30,7 @@ add, modify, or delete fields within a single document.
`ctx['_now']` (`long`, read-only)::
The current timestamp in milliseconds.

{es_version}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
{ref}/mapping-source-field.html[`ctx['_source']`] (`Map`)::
Contains extracted JSON in a `Map` and `List` structure for the fields
existing in a stored document.

@ -41,7 +41,7 @@ add, modify, or delete fields within a single document.
specify no operation or `delete` to delete the current document from
the index.

{es_version}/mapping-source-field.html[`ctx['_source']`]::
{ref}/mapping-source-field.html[`ctx['_source']`]::
Modify the values in the `Map/List` structure to add, modify, or delete
the fields of a document.
@ -1,7 +1,7 @@
[[painless-watcher-condition-context]]
=== Watcher condition context

Use a Painless script as a {xp_version}/condition-script.html[watcher condition]
Use a Painless script as a {xpack-ref}/condition-script.html[watcher condition]
to test if a response is necessary.

*Variables*

@ -26,7 +26,7 @@ to test if a response is necessary.

`ctx['payload']` (`Map`, read-only)::
The accessible watch data based upon the
{xp_version}/input.html[watch input].
{xpack-ref}/input.html[watch input].

*Return*


@ -1,7 +1,7 @@
[[painless-watcher-transform-context]]
=== Watcher transform context

Use a Painless script to {xp_version}/transform-script.html[transform] watch
Use a Painless script to {xpack-ref}/transform-script.html[transform] watch
data into a new payload for use in a response to a condition.

*Variables*

@ -26,7 +26,7 @@ data into a new payload for use in a response to a condition.

`ctx['payload']` (`Map`, read-only)::
The accessible watch data based upon the
{xp_version}/input.html[watch input].
{xpack-ref}/input.html[watch input].

*Return*


@ -2,7 +2,7 @@
=== Weight context

Use a Painless script to create a
{es_version}/index-modules-similarity.html[weight] for use in a
{ref}/index-modules-similarity.html[weight] for use in a
<<painless-similarity-context, similarity script>>. Weight is used to prevent
recalculation of constants that remain the same across documents.
@ -9,23 +9,24 @@ The Painless execute API allows an arbitrary script to be executed and a result
.Parameters
[options="header"]
|======
| Name            | Required | Default         | Description
| `script`        | yes      | -               | The script to execute
| `context`       | no       | `painless_test` | The context the script should be executed in.
| Name            | Required | Default         | Description
| `script`        | yes      | -               | The script to execute
| `context`       | no       | `painless_test` | The context the script should be executed in.
| `context_setup` | no       | -               | Additional parameters to the context.
|======

==== Contexts

Contexts control how scripts are executed, what variables are available at runtime and what the return type is.

===== Painless test script context
===== Painless test context

The `painless_test` context executes scripts as-is and does not add any special parameters.
The only variable that is available is `params`, which can be used to access user-defined values.
The result of the script is always converted to a string.
If no context is specified then this context is used by default.

==== Example
====== Example

Request:
@ -52,4 +53,124 @@ Response:
    "result": "0.1"
}
--------------------------------------------------
// TESTRESPONSE

===== Filter context

The `filter` context executes scripts as if they were executed inside a `script` query.
For testing purposes, a document must be provided; it is indexed temporarily in memory
and is accessible to the script being tested. Because of this, the `_source`, stored
fields and doc values are available to the script being tested.

The following parameters may be specified in `context_setup` for a filter context:

document:: Contains the document that will be temporarily indexed in-memory and is accessible from the script.
index:: The name of an index containing a mapping that is compatible with the document being indexed.

====== Example

[source,js]
----------------------------------------------------------------
PUT /my-index
{
  "mappings": {
    "_doc": {
      "properties": {
        "field": {
          "type": "keyword"
        }
      }
    }
  }
}

POST /_scripts/painless/_execute
{
  "script": {
    "source": "doc['field'].value.length() <= params.max_length",
    "params": {
      "max_length": 4
    }
  },
  "context": "filter",
  "context_setup": {
    "index": "my-index",
    "document": {
      "field": "four"
    }
  }
}
----------------------------------------------------------------
// CONSOLE

Response:

[source,js]
--------------------------------------------------
{
  "result": true
}
--------------------------------------------------
// TESTRESPONSE

===== Score context

The `score` context executes scripts as if they were executed inside a `script_score` function in
a `function_score` query.

The following parameters may be specified in `context_setup` for a score context:

document:: Contains the document that will be temporarily indexed in-memory and is accessible from the script.
index:: The name of an index containing a mapping that is compatible with the document being indexed.
query:: If `_score` is used in the script then a query can be specified that will be used to compute a score.

====== Example

[source,js]
----------------------------------------------------------------
PUT /my-index
{
  "mappings": {
    "_doc": {
      "properties": {
        "field": {
          "type": "keyword"
        },
        "rank": {
          "type": "long"
        }
      }
    }
  }
}


POST /_scripts/painless/_execute
{
  "script": {
    "source": "doc['rank'].value / params.max_rank",
    "params": {
      "max_rank": 5.0
    }
  },
  "context": "score",
  "context_setup": {
    "index": "my-index",
    "document": {
      "rank": 4
    }
  }
}
----------------------------------------------------------------
// CONSOLE

Response:

[source,js]
--------------------------------------------------
{
  "result": 0.8
}
--------------------------------------------------
// TESTRESPONSE