Build: Remove shadowing from benchmarks (#32475)

Removes shadowing from the benchmarks; it isn't *strictly* needed. We do
have to rework the documentation on how to run the benchmarks, but they
still work if you run everything through Gradle.
Author: Nik Everett, 2018-07-31 17:31:13 -04:00 (committed by GitHub)
Parent: b2c2c94741
Commit: 21eb9695af
4 changed files with 45 additions and 59 deletions

benchmarks/README.md

@@ -11,22 +11,25 @@ The microbenchmark suite is also handy for ad-hoc microbenchmarks but please rem
 ## Getting Started

-Just run `gradle :benchmarks:jmh` from the project root directory. It will
-build all microbenchmarks, execute them and print the result.
+Just run `gradlew -p benchmarks run` from the project root
+directory. It will build all microbenchmarks, execute them and print
+the result.

 ## Running Microbenchmarks

-Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.
-
-Running via an IDE is not supported as the results are meaningless (we have
-no control over the JVM running the benchmarks).
-
-If you want to run a specific benchmark class, e.g.
-`org.elasticsearch.benchmark.MySampleBenchmark` or have special requirements
-generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:
+Running via an IDE is not supported as the results are meaningless
+because we have no control over the JVM running the benchmarks.
+
+If you want to run a specific benchmark class like, say,
+`MemoryStatsBenchmark`, you can use `--args`:

 ```
-java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
+gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
 ```

-JMH supports lots of command line parameters. Add `-h` to the command above
-to see the available command line options.
+Everything in the `'` gets sent on the command line to JMH. The leading ` `
+inside the `'`s is important. Without it parameters are sometimes sent to
+gradle.

 ## Adding Microbenchmarks
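The quoting rule the updated README describes can be seen with plain shell word-splitting, independent of Gradle itself. A hypothetical illustration (using `printf` as a stand-in for the receiving program):

```shell
# Illustration only: show what the shell actually delivers.
# The single quotes keep ' MemoryStatsBenchmark' -- leading space included --
# as one argument following --args; printf prints each argument it receives
# on its own line, bracketed so the leading space stays visible.
printf '[%s]\n' --args ' MemoryStatsBenchmark'
# prints:
# [--args]
# [ MemoryStatsBenchmark]
```

Gradle then splits that single quoted string itself; the leading space keeps the first token from being mistaken for a Gradle option.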

benchmarks/build.gradle

@@ -18,11 +18,8 @@
  */

 apply plugin: 'elasticsearch.build'
+apply plugin: 'application'

-// order of this section matters, see: https://github.com/johnrengelman/shadow/issues/336
-apply plugin: 'application' // have the shadow plugin provide the runShadow task
 mainClassName = 'org.openjdk.jmh.Main'
-apply plugin: 'com.github.johnrengelman.shadow' // build an uberjar with all benchmarks

 // Not published so no need to assemble
 tasks.remove(assemble)
@@ -50,10 +47,8 @@ compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-u
 // needs to be added separately otherwise Gradle will quote it and javac will fail
 compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])

-forbiddenApis {
-    // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
-    ignoreFailures = true
-}
+// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
+forbiddenApisMain.enabled = false

 // No licenses for our benchmark deps (we don't ship benchmarks)
 dependencyLicenses.enabled = false
@@ -69,20 +64,3 @@ thirdPartyAudit.excludes = [
     'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
     'org.openjdk.jmh.util.Utils'
 ]
-
-runShadow {
-    executable = new File(project.runtimeJavaHome, 'bin/java')
-}
-
-// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
-task jmhJar(
-    dependsOn: shadowJar,
-    description: 'Generates an uberjar with the microbenchmarks and all dependencies',
-    group: 'Benchmark'
-)
-
-task jmh(
-    dependsOn: runShadow,
-    description: 'Runs all microbenchmarks',
-    group: 'Benchmark'
-)

client/benchmark/README.md

@@ -2,10 +2,18 @@
 1. Build `client-benchmark-noop-api-plugin` with `gradle :client:client-benchmark-noop-api-plugin:assemble`
 2. Install it on the target host with `bin/elasticsearch-plugin install file:///full/path/to/client-benchmark-noop-api-plugin.zip`
-3. Start Elasticsearch on the target host (ideally *not* on the same machine)
-4. Build an uberjar with `gradle :client:benchmark:shadowJar` and execute it.
+3. Start Elasticsearch on the target host (ideally *not* on the machine
+   that runs the benchmarks)
+4. Run the benchmark with
+    ```
+    ./gradlew -p client/benchmark run --args ' params go here'
+    ```

-Repeat all steps above for the other benchmark candidate.
+Everything in the `'` gets sent on the command line to JMH. The leading ` `
+inside the `'`s is important. Without it parameters are sometimes sent to
+gradle.

 See below for some example invocations.
 ### Example benchmark

@@ -13,15 +21,18 @@ In general, you should define a few GC-related settings `-Xms8192M -Xmx8192M -XX
 #### Bulk indexing

-Download benchmark data from http://benchmarks.elastic.co/corpora/geonames/documents.json.bz2 and decompress them.
+Download benchmark data from http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames and decompress them.

-Example command line parameters:
+Example invocation:

 ```
-rest bulk 192.168.2.2 ./documents.json geonames type 8647880 5000
+wget http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames/documents-2.json.bz2
+bzip2 -d documents-2.json.bz2
+mv documents-2.json client/benchmark/build
+gradlew -p client/benchmark run --args ' rest bulk localhost build/documents-2.json geonames type 8647880 5000'
 ```

-The parameters are in order:
+The parameters are all in the `'`s and are in order:

 * Client type: Use either "rest" or "transport"
 * Benchmark type: Use either "bulk" or "search"
@@ -33,12 +44,12 @@ The parameters are in order:
 * bulk size

-#### Bulk indexing
+#### Search

-Example command line parameters:
+Example invocation:

 ```
-rest search 192.168.2.2 geonames "{ \"query\": { \"match_phrase\": { \"name\": \"Sankt Georgen\" } } }\"" 500,1000,1100,1200
+gradlew -p client/benchmark run --args ' rest search localhost geonames {"query":{"match_phrase":{"name":"Sankt Georgen"}}} 500,1000,1100,1200'
 ```

 The parameters are in order:
@@ -49,5 +60,3 @@ The parameters are in order:
 * name of the index
 * a search request body (remember to escape double quotes). The `TransportClientBenchmark` uses `QueryBuilders.wrapperQuery()` internally which automatically adds a root key `query`, so it must not be present in the command line parameter.
 * A comma-separated list of target throughput rates
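The escaping reminder in the last bullet comes down to ordinary shell quoting: inside a double-quoted string, each literal `"` in the JSON body needs a backslash. A hypothetical illustration (using a shell variable and `printf` rather than the benchmark itself):

```shell
# Illustration only: escape the JSON body's double quotes inside a
# double-quoted shell string, then print what the program would receive.
body="{ \"query\": { \"match_phrase\": { \"name\": \"Sankt Georgen\" } } }"
printf '%s\n' "$body"
# prints:
# { "query": { "match_phrase": { "name": "Sankt Georgen" } } }
```

Note the body has no root `query` wrapper beyond what is shown: as the bullet says, `QueryBuilders.wrapperQuery()` adds that key itself.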

client/benchmark/build.gradle

@@ -18,9 +18,6 @@
  */

 apply plugin: 'elasticsearch.build'
-// build an uberjar with all benchmarks
-apply plugin: 'com.github.johnrengelman.shadow'
-// have the shadow plugin provide the runShadow task
 apply plugin: 'application'

 group = 'org.elasticsearch.client'
@@ -32,7 +29,6 @@ build.dependsOn.remove('assemble')
 archivesBaseName = 'client-benchmarks'
 mainClassName = 'org.elasticsearch.client.benchmark.BenchmarkMain'

 // never try to invoke tests on the benchmark project - there aren't any
 test.enabled = false