# Elasticsearch Microbenchmark Suite
This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).
## Purpose
We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks, but please remove them again before merging your PR.
## Getting Started
Just run `gradlew -p benchmarks run` from the project root
directory. It will build all microbenchmarks, execute them and print
the result.
## Running Microbenchmarks
Running via an IDE is not supported as the results are meaningless
because we have no control over the JVM running the benchmarks.
If you want to run a specific benchmark class like, say,
`MemoryStatsBenchmark`, you can use `--args`:
```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
```
Everything in the `'` gets sent on the command line to JMH. The leading ` `
inside the `'`s is important. Without it, parameters are sometimes sent to
Gradle instead.
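
For example, to pass JMH options such as warmup iterations, measurement iterations, and fork count (the values here are arbitrary):

```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -wi 3 -i 5 -f 1'
```

Passing `-h` inside the quotes prints the full list of JMH command line options.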
## Adding Microbenchmarks
Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).
In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
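
A minimal sketch of such a class (the class name, package, and measured operation are made up for illustration; only the JMH annotations matter, and the benchmarks project already provides the JMH dependency):

```java
package org.elasticsearch.benchmark.example;

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringJoinBenchmark {

    // Vary the problem size; JMH runs the benchmark once per value.
    @Param({ "10", "1000" })
    private int size;

    private String[] values;

    @Setup
    public void setUp() {
        values = new String[size];
        for (int i = 0; i < size; i++) {
            values[i] = Integer.toString(i);
        }
    }

    @Benchmark
    public String join() {
        // Returning the result lets JMH consume it and prevents dead code elimination.
        StringBuilder sb = new StringBuilder();
        for (String value : values) {
            sb.append(value);
        }
        return sb.toString();
    }
}
```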
## Tips and Best Practices
To get realistic results, you should exercise care when running benchmarks. Here are a few tips:
### Do
* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
  runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Make sure to run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to prevent Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
  `performance` CPU governor.
* Vary the problem input size with `@Param`.
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Add `-prof gc` to the options to check whether the garbage collector runs during a microbenchmark and skews
      your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
    * Add `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
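
On Linux, the pinning and frequency tips above can be combined when launching the suite. A sketch, assuming `taskset` (from util-linux) and `cpufreq-set` (from cpufrequtils) are installed; core 0 and the benchmark name are arbitrary choices:

```
# Fix core 0 to the performance governor, then pin the benchmark JVM to it.
sudo cpufreq-set -c 0 -g performance
taskset -c 0 gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
```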
### Don't
* Blindly believe the numbers that your microbenchmark produces but verify them by measuring e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.
## Disassembling
Disassembling is fun! Maybe not always useful, but always fun! Generally, you'll want to install `perf` and FCML's `hsdis`.
`perf` is generally available via `apt-get install perf` or `pacman -S perf`. FCML is a little more involved. This worked
on 2020-08-01:
```
wget https://github.com/swojtasiak/fcml-lib/releases/download/v1.2.2/fcml-1.2.2.tar.gz
tar xf fcml*
cd fcml*
./configure
make
cd example/hsdis
make
sudo cp .libs/libhsdis.so.0.0.0 /usr/lib/jvm/java-14-adoptopenjdk/lib/hsdis-amd64.so
```
If you want to disassemble a single method, do something like this:
```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -jvmArgs "-XX:+UnlockDiagnosticVMOptions -XX:CompileCommand=print,*.yourMethodName -XX:PrintAssemblyOptions=intel"'
```
If you want `perf` to find the hot methods for you then add `-prof perfasm`.
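
For example (reusing the benchmark name from above):

```
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -prof perfasm'
```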