
## How to run the benchmarks locally

### Run in the browser

```bash
yarn bazel run modules/benchmarks/src/tree/{name}:devserver

# e.g. "ng2" tree benchmark:
yarn bazel run modules/benchmarks/src/tree/ng2:devserver
```

### Run e2e tests

```bash
# Run e2e tests of individual applications:
yarn bazel test modules/benchmarks/src/tree/ng2/...

# Run all e2e tests:
yarn bazel test modules/benchmarks/...
```

## Use of `*_aot.ts` files

The `*_aot.ts` files are used as entry points within Google to run the benchmark tests. They are still built as part of the corresponding `ng_module` rule.
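For orientation, here is a minimal sketch of what such an entry point might look like; the factory name and import path are purely illustrative and differ per benchmark:

```ts
// Hypothetical index_aot.ts entry point; names and paths are illustrative only.
import {enableProdMode} from '@angular/core';
import {platformBrowser} from '@angular/platform-browser';

// AOT-generated factory for the benchmark's root module (assumed location).
import {AppModuleNgFactory} from './app.ngfactory';

// Run the benchmark app in production mode and bootstrap the AOT-compiled module.
enableProdMode();
platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);
```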

## Specifying benchmark options

A few options control how a given benchmark target runs. The following options can be set through test environment variables:

  - `PERF_SAMPLE_SIZE`: Benchpress performs measurements until `scriptTime` predictively no longer decreases. It does this by running a simple linear regression over the specified number of samples. Defaults to 20 samples.
  - `PERF_FORCE_GC`: If set to `true`, `@angular/benchpress` will run the garbage collector before and after performing measurements. Benchpress will measure and report the garbage collection time.
  - `PERF_DRYRUN`: If set to `true`, no results are printed or stored in a JSON file. Benchpress also performs only a single measurement, rather than running the linear regression.

Here is an example command that sets the `PERF_DRYRUN` option:

```bash
yarn bazel test modules/benchmarks/src/tree/baseline:perf --test_env=PERF_DRYRUN=true
```
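Multiple options can be combined by repeating `--test_env`. For example (same target as above; the values are illustrative only):

```bash
yarn bazel test modules/benchmarks/src/tree/baseline:perf \
  --test_env=PERF_SAMPLE_SIZE=40 --test_env=PERF_FORCE_GC=true
```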