Previously, when the benchmark tests ran outside of Bazel, developers
had the possibility to control how the tests run through command-line
options such as `--dryrun`. This no longer works reliably in Bazel, where
command-line arguments are not passed to the test executable.
To keep the global options usable (as they can still be useful
in some cases), we simply pass them through the Bazel `--test_env` flag.
This reduces the code needed to read the command line while preserving
the flexibility in a Bazel-idiomatic way.
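As an illustration only (a sketch, not the actual implementation), a test can
read such a global option from the environment instead of parsing
`process.argv`; the variable name below is an assumption:

```ts
// Sketch: the option would be forwarded by Bazel, e.g.
//   bazel test //modules/benchmarks/... --test_env=PERF_DRYRUN=true
// PERF_DRYRUN is an illustrative variable name, not necessarily the real one.

/** Whether the benchmark should skip measurement and only verify that it runs. */
export function isDryRun(): boolean {
  // Bazel injects `--test_env` variables into the test process environment,
  // so no command-line parsing is needed anymore.
  return process.env['PERF_DRYRUN'] === 'true';
}
```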
PR Close #34753
Currently we run all benchmark perf tests in CircleCI. Since we do not
collect any of the results, this unnecessarily wastes CI/RBE resources.
Instead, we should not run the benchmark perf tests in CI at all, but
still run the functionality e2e tests, which ensure that the benchmarks
are not broken. We can do this by splitting the perf and e2e tests into
separate files/targets.
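For illustration only, the split could look roughly like the following pair
of spec files, with just the e2e spec wired into the CI test target (the file
names and the perf helper are assumptions, not the repo's actual layout):

```ts
// foo.e2e-spec.ts (hypothetical name) -- functional smoke test, still run in CI:
import {browser, by, element} from 'protractor';

describe('foo benchmark', () => {
  it('should bootstrap and render', async () => {
    await browser.get('/');
    expect(await element(by.css('#root')).isPresent()).toBe(true);
  });
});

// foo.perf-spec.ts (hypothetical name) -- measurement-only spec, kept in a
// separate Bazel target that is excluded from CI (runBenchmark is a stand-in
// for whatever perf helper the benchmarks use):
//
// describe('foo benchmark perf', () => {
//   it('should measure rendering', () => runBenchmark({id: 'foo.render'}));
// });
```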
PR Close #34753