Currently we run all benchmark perf tests in CircleCI. Since we do not collect any of the results, this needlessly wastes CI/RBE resources. Instead, we should stop running the benchmark perf tests in CI but keep running the functional e2e tests, which ensure the benchmarks are not broken. We can do this by splitting the perf and e2e tests into separate files/targets.

PR Close #34753
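As an illustration, such a split might look like the sketch below in a Bazel BUILD file. This is a minimal sketch, not the actual targets from the PR: sh_test stands in for the real protractor/benchmark test rules, and the target and script names (e2e_test, perf_test) are hypothetical. The key mechanism is Bazel's "manual" tag, which excludes a target from wildcard patterns such as `bazel test //...`, so CI never picks up the perf target while the e2e target keeps running.

    # BUILD.bazel -- hypothetical benchmark package after the split.
    # sh_test stands in for the real test rules; names are illustrative.

    # Functional e2e check: cheap, picked up by `bazel test //...` in CI,
    # verifies the benchmark app still builds and runs.
    sh_test(
        name = "e2e_test",
        srcs = ["e2e_test.sh"],
    )

    # Perf measurement: expensive, and its results are not collected in CI.
    # The "manual" tag excludes it from wildcard target patterns, so it only
    # runs when requested explicitly, e.g. `bazel test //path/to:perf_test`.
    sh_test(
        name = "perf_test",
        srcs = ["perf_test.sh"],
        tags = ["manual"],
    )

With this layout, CI invokes the usual wildcard test command and gets only the e2e coverage, while developers can still run the perf target on demand.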
.DS_STORE

/dist/
/bazel-out
/integration/bazel/bazel-*
*.log
node_modules

# Include when developing application packages.
pubspec.lock
.c9
.idea/
.devcontainer/*
!.devcontainer/README.md
!.devcontainer/recommended-devcontainer.json
!.devcontainer/recommended-Dockerfile
.settings/
.vscode/launch.json
.vscode/settings.json
.vscode/tasks.json
*.swo
modules/.settings
modules/.vscode
.vimrc
.nvimrc

# Don't check in secret files
*secret.js

# Ignore npm/yarn debug log
npm-debug.log
yarn-error.log

# build-analytics
.build-analytics

# rollup-test output
/modules/rollup-test/dist/

# User specific bazel settings
.bazelrc.user

.notes.md
baseline.json