Elasticsearch Microbenchmark Suite

This directory contains the microbenchmark suite of Elasticsearch. It relies on JMH.

Purpose

We do not want to microbenchmark everything but the kitchen sink; instead, we typically rely on our macrobenchmarks with Rally. Microbenchmarks are intended to spot performance regressions in performance-critical components. The microbenchmark suite is also handy for ad-hoc experiments, but please remove such benchmarks again before merging your PR.

Getting Started

Just run gradle :benchmarks:jmh from the project root directory. It builds all microbenchmarks, executes them and prints the results.

Running Microbenchmarks

Benchmarks are always run via Gradle with gradle :benchmarks:jmh.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).

If you want to run a specific benchmark class, e.g. org.elasticsearch.benchmark.MySampleBenchmark, or have special requirements, generate the uberjar with gradle :benchmarks:jmhJar and run it directly with:

java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar

JMH supports lots of command line parameters. Add -h to the command above to see the available command line options.
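
For example, the following invocation (a hedged illustration; MySampleBenchmark is the hypothetical class name from above) uses the standard JMH options -wi, -i and -f to set warmup iterations, measurement iterations and forks, and runs only that one benchmark:

java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar -wi 5 -i 5 -f 1 MySampleBenchmark

The trailing argument is a regular expression; JMH executes every benchmark whose fully qualified name matches it.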

Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the JMH samples.

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and end the class name of a benchmark with Benchmark. To have JMH execute a benchmark, annotate the respective methods with @Benchmark.
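
To make this concrete, here is a minimal sketch of what such a class could look like. The class name and the measured operation are invented for illustration; only the structure and the JMH annotations matter:

```java
package org.elasticsearch.benchmark;

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class MySampleBenchmark {

    // hypothetical state; JMH manages the lifecycle of @State objects
    private long value = 0x8BADF00DL;

    @Benchmark
    public int bitCount() {
        // return the result so the JIT compiler cannot eliminate the work as dead code
        return Long.bitCount(value);
    }
}
```

Returning the computed value (or consuming it with JMH's Blackhole) matters: otherwise the JIT compiler may optimize away exactly the work you are trying to measure.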

Tips and Best Practices

To get realistic results, you should exercise care when running benchmarks. Here are a few tips:

Do

  • Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary runtime jitter. Watch the Error column in the benchmark results to see the run-to-run variance.
  • Run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
  • Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use taskset.
  • Fix the CPU frequency to prevent Turbo Boost from kicking in and skewing your results. On Linux you can use cpufreq-set and the performance CPU governor.
  • Vary the problem input size with @Param (see the sketch after this list).
  • Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    • Run the generated uberjar directly and use -prof gc to check whether the garbage collector runs during a microbenchmark and skews your results. If so, try to force a GC between runs (-gc true) but watch out for the caveats.
    • Use -prof perf or -prof perfasm (both only available on Linux) to see hotspots.
  • Have your benchmarks peer-reviewed.
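
As referenced in the @Param item above, here is a hedged sketch of varying the input size (the class name, field and sizes are invented for this example). JMH repeats the whole benchmark once per listed value and injects it into the annotated field, so a single class covers several problem sizes:

```java
import java.util.Arrays;
import java.util.Random;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class SortSizeBenchmark {

    // JMH runs the benchmark once for each value listed here
    @Param({ "100", "10000", "1000000" })
    int size;

    long[] input;

    @Setup
    public void setUp() {
        // fixed seed so every fork and every parameter value gets reproducible data
        input = new Random(17).longs(size).toArray();
    }

    @Benchmark
    public long[] sort() {
        // clone so each invocation sorts unsorted data; the clone cost is part of the measurement
        long[] copy = input.clone();
        Arrays.sort(copy);
        return copy;
    }
}
```

Comparing the scores across the listed sizes quickly shows whether the measured code scales as you expect.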

Don't

  • Blindly believe the numbers that your microbenchmark produces, but verify them by measuring, e.g. with -prof perfasm.
  • Run more threads than your machine has CPU cores (in case you run multi-threaded microbenchmarks).
  • Look only at the Score column and ignore Error. Instead, take countermeasures to keep Error low and the variance explainable.