### Build

```
yarn bazel build //packages/core/test/render3/perf:${BENCHMARK}.min_debug.es2015.js --define=compile=aot
```

### Run

```
node dist/bin/packages/core/test/render3/perf/${BENCHMARK}.min_debug.es2015.js
```
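For example, to build and then run the `listeners` benchmark end to end (`listeners` is just an illustrative pick; any benchmark name from this directory works):

```
# Illustrative end-to-end flow for one specific benchmark
BENCHMARK=listeners
yarn bazel build //packages/core/test/render3/perf:${BENCHMARK}.min_debug.es2015.js --define=compile=aot
node dist/bin/packages/core/test/render3/perf/${BENCHMARK}.min_debug.es2015.js
```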

### Profile

```
node --no-turbo-inlining --inspect-brk dist/bin/packages/core/test/render3/perf/${BENCHMARK}.min_debug.es2015.js
```

Then connect with a debugger (the `--inspect-brk` option ensures that benchmark execution doesn't start until a debugger is connected and code execution is manually resumed).

The actual benchmark code has calls that will start (`console.profile`) and stop (`console.profileEnd`) a profiling session.
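As a minimal illustration of this pattern (a sketch, not the actual harness code), the measured region is bracketed by `console.profile`/`console.profileEnd`, so a CPU profile shows up automatically in the attached DevTools once execution is resumed:

```
# Sketch only: mimics how the benchmarks bracket the measured work
node --inspect-brk -e "
  console.profile('benchmark');      // starts a CPU profile in the attached inspector
  for (let i = 0; i < 1e7; i++) {}   // stand-in for the measured work
  console.profileEnd('benchmark');   // ends the profile (visible in the Profiler tab)
"
```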

### Deoptigate

```
yarn add deoptigate
yarn deoptigate dist/bin/packages/core/test/render3/perf/${BENCHMARK}.min_debug.es2015.js
```
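Deoptigate runs the script under V8's deoptimization tracing and then presents the findings as an interactive page in the browser. For example (the benchmark name is an illustrative pick):

```
# Illustrative: inspect deoptimizations in one specific benchmark
BENCHMARK=element_text_create
yarn deoptigate dist/bin/packages/core/test/render3/perf/${BENCHMARK}.min_debug.es2015.js
```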

### Run All

To run all of the benchmarks, use the `profile_all.js` script:

```
node packages/core/test/render3/perf/profile_all.js
```

NOTE: this command builds all of the benchmarks first, so there is no need to do so manually.

Optionally, pass the `--write` option to save the results to a file for later comparison:

```
node packages/core/test/render3/perf/profile_all.js --write baseline.json
```

### Comparing Runs

If you have saved a baseline (as described in the step above), you can use it to measure the change in performance like so:

```
node packages/core/test/render3/perf/profile_all.js --read baseline.json
```

The resulting output should look something like this:

```
┌────────────────────────────────────┬─────────┬──────┬───────────┬───────────┬───────┐
│              (index)               │  time   │ unit │ base_time │ base_unit │   %   │
├────────────────────────────────────┼─────────┼──────┼───────────┼───────────┼───────┤
│       directive_instantiate        │ 276.652 │ 'ms' │  286.292  │   'ms'    │ -3.37 │
│        element_text_create         │ 262.868 │ 'ms' │  260.031  │   'ms'    │ 1.09  │
│           interpolation            │ 257.733 │ 'us' │  260.489  │   'us'    │ -1.06 │
│             listeners              │  1.997  │ 'us' │   1.985   │   'us'    │  0.6  │
│ map_based_style_and_class_bindings │  10.07  │ 'ms' │   9.786   │   'ms'    │  2.9  │
│       noop_change_detection        │ 93.256  │ 'us' │  91.745   │   'us'    │ 1.65  │
│          property_binding          │ 290.777 │ 'us' │  280.586  │   'us'    │ 3.63  │
│      property_binding_update       │ 588.545 │ 'us' │  583.334  │   'us'    │ 0.89  │
│      style_and_class_bindings      │  1.061  │ 'ms' │   1.047   │   'ms'    │ 1.34  │
│           style_binding            │ 543.841 │ 'us' │  545.385  │   'us'    │ -0.28 │
└────────────────────────────────────┴─────────┴──────┴───────────┴───────────┴───────┘
```
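A typical workflow is to record the baseline on the unmodified code and compare after making changes (the branch name below is illustrative):

```
# 1. Record a baseline on the unmodified revision
node packages/core/test/render3/perf/profile_all.js --write baseline.json

# 2. Switch to the revision under test and compare against the saved baseline
git checkout my-perf-change
node packages/core/test/render3/perf/profile_all.js --read baseline.json
```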

### Notes

To run a benchmark, use `bazel run <benchmark_target>`, for example:

- `yarn bazel run --define=compile=aot //packages/core/test/render3/perf:noop_change_detection`

To profile, append `_profile` to the target name and attach a debugger via `chrome://inspect`, for example:

- `yarn bazel run --define=compile=aot //packages/core/test/render3/perf:noop_change_detection_profile`

To interactively edit and rerun benchmarks, use `ibazel` instead of `bazel`.
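For example (assuming `ibazel` is available, e.g. via the `@bazel/ibazel` npm package), the following rebuilds and reruns the benchmark whenever its sources change:

```
# Watch mode: rebuilds and reruns on source changes (assumes ibazel is installed)
yarn ibazel run --define=compile=aot //packages/core/test/render3/perf:noop_change_detection
```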