Switches our tslint setup to the standard `tslint.json` linter excludes.
The set of files that need to be linted is specified through a Yarn script.
In IDEs, open files are linted against the closest tslint configuration, provided
the tslint IDE extension is set up and the source file is not excluded.
We cannot use the language service plugin for tslint as we have multiple nested
tsconfig files, and we don't want to add the plugin to each tsconfig. We
could reduce that bloat by just extending from a top-level tsconfig that
defines the language service plugin, but unfortunately the tslint plugin does
not allow the use of tslint configs which are not part of the tsconfig project.
This is problematic since the tslint configuration is at the project root, and we
don't want to copy tslint configurations next to each tsconfig file.
Additionally, linting of `d.ts` files has been re-enabled. It had been
disabled in the past and a TODO had been left behind. This commit fixes the
outstanding lint issues and turns the linting back on.
PR Close#35800
The changes to yarn.lock for the `$localize` typings
support are not filtering through because the cache
contains nested packages that are causing compilation
errors.
This change forces the cache to invalidate, allowing
us to have a clean node_modules folder.
PR Close#35711
Move the bazel-in-bazel integration tests to their own test job and increase its resource class to 2xlarge+. The test_integration_bazel job is removed. Overall CI credit usage is reduced.
test: include ng_elements_schematics in legacy integration tests temporarily
This test was recently added and uses a new pattern that doesn't work with npm_integration_test out of the box. It needs some refactoring to work; a TODO has been left for this.
PR Close#33927
Now that the large integration tests run locally in parallel (they can't run on RBE yet, as they require network access for `yarn install`), this test consistently runs out of memory on the xlarge machine.
PR Close#33927
Install the Java runtime, which is required by some integration tests (such as `//integration:hello_world__closure_test`, `//integration:i18n_test` and `//integration:ng_elements_test`) to run the Closure compiler.
PR Close#33927
* It's tricky to get out of the runfiles tree with `bazel test`, as `BUILD_WORKSPACE_DIRECTORY` is not set, but I employed a trick: read the `DO_NOT_BUILD_HERE` file that sits one level up from `execroot` and contains the workspace directory (see the sketch after this list). This is experimental, and if `bazel test //:test.debug` fails, `bazel run` is still guaranteed to work, since `BUILD_WORKSPACE_DIRECTORY` will be set in that context.
* Test `//integration:bazel_test` and `//integration:bazel-schematics_test` exclusively
* Run the "exclusive" and "manual" bazel-in-bazel integration tests in their own CI job, as they take 8m+ to execute:
```
//integration:bazel-schematics_test PASSED in 317.2s
//integration:bazel_test PASSED in 167.8s
```
* Skip all integration tests that are now handled by angular_integration_test except the tests that are tracked for payload size; these are:
- cli-hello-world*
- hello_world__closure
* add & pin @babel deps as newer versions of babel break //packages/localize/src/tools/test:test
The @babel/core dep had to be pinned to 7.6.4, or else //packages/localize/src/tools/test:test failed. Also, //packages/localize uses @babel/generator, @babel/template, @babel/traverse & @babel/types, so these deps were added to package.json, as they were no longer being hoisted from @babel/core's transitive dependencies.
NB: integration/hello_world__systemjs_umd test must run with systemjs 0.20.0
NB: systemjs must be at 0.18.10 for legacy saucelabs job to pass
NB: With Bazel 2.0, the glob for the files to test (`"integration/bazel/**"`) is empty if integration/bazel is in .bazelignore. This glob worked under these conditions with 1.1.0. I did not bother testing with 1.2.x, as not having integration/bazel in .bazelignore is correct.
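For illustration, the workspace-directory trick from the first bullet could be implemented roughly like this Node.js sketch (the helper name and file layout are assumptions, not the actual test wiring):
```
// workspace-dir.js (illustrative sketch only)
const fs = require('fs');
const path = require('path');

function workspaceDirectory() {
  // `bazel run` sets BUILD_WORKSPACE_DIRECTORY; use it when available.
  if (process.env.BUILD_WORKSPACE_DIRECTORY) {
    return process.env.BUILD_WORKSPACE_DIRECTORY;
  }
  // Under `bazel test`, the working directory lives below .../execroot/<workspace>/...
  // One level above `execroot`, Bazel keeps a DO_NOT_BUILD_HERE file whose
  // contents are the workspace directory.
  const outputBase = process.cwd().split(path.sep + 'execroot' + path.sep)[0];
  return fs.readFileSync(path.join(outputBase, 'DO_NOT_BUILD_HERE'), 'utf8').trim();
}

module.exports = {workspaceDirectory};
```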
PR Close#33927
Previously, we needed to manually specify a ChromeDriver version to
download on CI that would be compatible with the browser version
provided by the docker image used to run the tests. This was kept in the
`CI_CHROMEDRIVER_VERSION_ARG` environment variable.
With recent commits, we use the browser provided by `puppeteer` and can
determine the correct ChromeDriver version programmatically. Therefore,
we no longer need the `CI_CHROMEDRIVER_VERSION_ARG` environment
variable.
NOTE:
There is still one place (the `bazel-schematics` integration project)
where a hard-coded ChromeDriver version is necessary. Since I am not
sure what is the best way to refactor the tests to not rely on a
hard-coded version, I left it as a TODO for a follow-up PR.
PR Close#35381
In #35049, integration and AIO tests were changed to use the browser
provided by `puppeteer` in tests. This commit switches the docs examples
tests to use the same setup.
IMPLEMENTATION NOTE:
The examples are used to create ZIP archives that docs users can
download to experiment with. Since we want the downloaded projects to
resemble an `@angular/cli` generated project, we do not want to affect
the project's Protractor configuration in order to use `puppeteer`.
To achieve this, a second Protractor configuration is created (which is
ignored when creating the ZIP archives) that extends the original one
and passes the appropriate arguments to use the browser provided by
`puppeteer`. This new configuration (`protractor-puppeteer.conf.js`) is
used when running the docs examples tests (on CI or locally during
development).
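A minimal sketch of what such a wrapper config can look like, assuming the original config sits next to it as `protractor.conf.js` (the exact shape used in the repo may differ):
```
// protractor-puppeteer.conf.js (illustrative sketch)
const puppeteer = require('puppeteer');
const {config} = require('./protractor.conf.js');

exports.config = {
  ...config,
  capabilities: {
    ...config.capabilities,
    browserName: 'chrome',
    chromeOptions: {
      ...((config.capabilities || {}).chromeOptions || {}),
      // Point ChromeDriver at the Chromium binary downloaded by puppeteer.
      binary: puppeteer.executablePath(),
    },
  },
};
```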
PR Close#35381
This is a follow-up to #35049 with a few minor fixes related to using
the browser provided by `puppeteer` to run tests. Included fixes:
- Make the `webdriver-manager-update.js` really portable. (Previously,
it needed to be run from the directory that contained the
`node_modules/` directory. Now, it can be executed from a subdirectory
and will correctly resolve dependencies.)
- Use the `puppeteer`-based setup in AIO unit and e2e tests to ensure
that the downloaded ChromeDriver version matches the browser version
used in tests.
- Use the `puppeteer`-based setup in the `aio_monitoring_stable` CI job
(as happens with `aio_monitoring_next`).
- Use the [recommended way][1] of getting the browser port when using
`puppeteer` with `lighthouse` and avoid hard-coding the remote
debugging port (to be able to handle multiple instances running
concurrently).
[1]: https://github.com/GoogleChrome/lighthouse/blame/51df179a0/docs/puppeteer.md#L49
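The last fix follows the pattern from the linked lighthouse docs; roughly (the function name and flag values here are illustrative):
```
// Sketch: derive the debugging port from the puppeteer-launched browser
// instead of hard-coding --remote-debugging-port.
const puppeteer = require('puppeteer');
const lighthouse = require('lighthouse');

async function audit(url) {
  const browser = await puppeteer.launch({args: ['--no-sandbox']});
  // The WebSocket endpoint encodes the port the browser is listening on.
  const {port} = new URL(browser.wsEndpoint());
  const results = await lighthouse(url, {port, output: 'json'});
  await browser.close();
  return results;
}
```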
PR Close#35381
Verify that all files in the repo are covered by the pullapprove config
and that all rules in the pullapprove config match at least one file
in the repo.
PR Close#35060
This means integration tests no longer need to depend on a $CI_CHROMEDRIVER_VERSION_ARG environment variable to specify which chromedriver version to download to match the locally installed Chrome. This was bad DX, and not having it specified was not reliable, as webdriver-manager would not always download a chromedriver version that works with the locally installed Chrome.
webdriver-manager update --gecko=false --standalone=false $CI_CHROMEDRIVER_VERSION_ARG is now replaced with node webdriver-manager-update.js in the root package.json, which checks which version of Chrome puppeteer comes bundled with and informs webdriver-manager to download the corresponding ChromeDriver version.
Integration tests now use "webdriver-manager": "file:../../node_modules/webdriver-manager" so they don't have to waste time calling webdriver-manager update in postinstall
"// resolutions": "Ensure a single version of webdriver-manager which comes from root node_modules that has already run webdriver-manager update",
"resolutions": {
"**/webdriver-manager": "file:../../node_modules/webdriver-manager"
}
This should speed up each integration postinstall by a few seconds.
Further, integration test package.json files link puppeteer via file:../../node_modules/puppeteer, which is the ideal situation, as the puppeteer postinstall won't download Chrome if it is already present. In CI, since node_modules is cached, it should not need to download Chrome either, unless the node_modules cache is busted.
NB: each version of puppeteer comes bundled with a specific version of Chrome. The root package.json & yarn.lock currently pull down puppeteer 2.1.0, which comes with Chrome 80. See https://github.com/puppeteer/puppeteer#q-which-chromium-version-does-puppeteer-use for more info.
Only two references to CI_CHROMEDRIVER_VERSION_ARG are left in the integration tests, in integration/bazel-schematics/test.sh, and I'm not entirely sure how to get rid of them.
Use a lightweight puppeteer=>chrome version mapping instead of launching chrome and calling browser.version()
Launching puppeteer headless Chrome and calling browser.version() was a heavy-handed approach to determining the Chrome version. A small, easy-to-update mappings file is a better solution, and it means that the `yarn install` step does not require Chrome's shared libraries to be available on the system for its postinstall step.
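A minimal sketch of the idea behind webdriver-manager-update.js (the mapping values and exact flags below are illustrative assumptions, not the actual contents of the script):
```
// Look up which Chrome version the installed puppeteer bundles, then tell
// webdriver-manager to fetch the matching ChromeDriver.
// Assumes webdriver-manager is available on the PATH (e.g. via node_modules/.bin).
const {execSync} = require('child_process');
const puppeteerVersion = require('puppeteer/package.json').version;

// Hypothetical puppeteer => ChromeDriver mapping; extended as puppeteer updates.
const chromedriverVersions = {
  '2.1.0': '80.0.3987.106',  // puppeteer 2.1.0 bundles Chrome 80
};

const chromedriver = chromedriverVersions[puppeteerVersion];
if (!chromedriver) {
  throw new Error(`No ChromeDriver mapping for puppeteer ${puppeteerVersion}`);
}
execSync(
    'webdriver-manager update --gecko=false --standalone=false ' +
        `--versions.chrome=${chromedriver}`,
    {stdio: 'inherit'});
```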
PR Close#35049
We temporarily disabled the components-repo-unit-tests job as part of
a ngcc PR: #35079. This was necessary because the components repository
used postinstall patches for `@angular/compiler-cli/ngcc`. Due to
changes in ngcc, these patches no longer worked and caused the
`components-repo-unit-tests` job to fail.
The postinstall patch has been removed in the components repo, so the
job can be re-enabled.
PR Close#35123
Updates the commit the `components-repo-unit-tests` job runs against.
We need at least 2ec7254f88
which fixes an ngcc postinstall patch conflict that required us to
temporarily disable the job in #35079.
PR Close#35123
In the past we had connectivity issues on Saucelabs. Browsers on
mobile devices were not able to properly resolve the `localhost`
hostname through the tunnel. This is because the device resolves
`localhost` or `127.0.0.1` to the actual Saucelabs device, while it
should resolve to the tunnel host machine (in our case the CircleCI VM).
In the past, we simply disabled the failing devices and re-enabled the
devices later. At this point, the Saucelabs team claimed that the
connectivity/proxy issues were fixed.
Saucelabs seems to have a process for VMs which ensures that requests to
`localhost` / `127.0.0.1` are properly resolved through the tunnel. This
process is not very reliable and can cause tests to fail. Related issues have been
observed/mentioned in the Saucelabs support docs. e.g.
https://support.saucelabs.com/hc/en-us/articles/115002212447-Unable-to-Reach-Application-on-localhost-for-Tests-Run-on-Safari-8-and-9-and-Edge
https://support.saucelabs.com/hc/en-us/articles/225106887-Safari-and-Internet-Explorer-Won-t-Load-Website-When-Using-Sauce-Connect-on-Localhost
In order to ensure that requests are always resolved through the tunnel,
we add our own domain alias in the CircleCI's hosts file, and enforce that
it is always resolved through the tunnel (using the `--tunnel-domains` SC flag).
Saucelabs devices by default will never resolve this domain/hostname to the
actual local Saucelabs device.
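For example, the Karma side of this setup could look roughly like the following (the alias hostname is a hypothetical placeholder; the actual alias and Sauce Connect wiring live in the CI scripts):
```
// karma.conf.js excerpt (illustrative sketch)
// The same alias is added to the CircleCI VM's hosts file and passed to
// Sauce Connect via `--tunnel-domains`, so devices resolve it through the tunnel.
module.exports = function(config) {
  config.set({
    // Serve the tests from the aliased hostname instead of `localhost`.
    hostname: 'angular-tests.sauce.dev',
    // ...frameworks, browsers, reporters, etc. omitted.
  });
};
```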
PR Close#35171
Now that bazel respects the yarn-path value found in .yarnrc, we can
remove the last remaining reliances on our vendoring in
//third_party/github.com/yarnpkg/yarn/
PR Close#35083
In #35004, we started ignoring yarn's engines check for `yarn install`
in AIO's `test-production.sh` script to fix a failure in the
`aio_monitoring_stable` CI job. (See #35004 for details.)
It turns out that with the version of yarn used on the stable branch (1.17.3),
`--ignore-engines` is needed on all yarn commands (including `yarn run`).
Thus, #35004 is not enough to fix the failures.
New example failure: https://circleci.com/gh/angular/angular/604341
This commit turns off the engines check for the whole
`aio_monitoring_stable` CI job to fix the failure and make the job more
robust.
PR Close#35033
* Added a /tools/saucelabs/sauce-service.sh script that manages sauce-connect as a service; it is used by the karma-saucelabs.js wrapper to start the service.
* Added /tools/saucelabs/README.md that covers the details of SauceLabs karma testing with Bazel.
PR Close#34769
We are migrating to PullApprove for our PR review management in an attempt
to allow for more granular and equitable code review assignments across the
team. For now, this migration creates review assignments equivalent to the
previous setup. Once stable, we expect to be able to
take advantage of PullApprove's additional features for things like staged
reviews.
PR Close#34814
The `aio_monitoring_stable` CI job needs to check out (and execute
commands on) some files from the stable branch. Therefore, it needs to
use a version of `yarn` that is compatible with the `engines` version
specified in `aio/package.json`.
In #34902 (and subsequently #34952 for the 8.2.x branch), the way to
access our vendored version of yarn changed. This commit ensures that we
checkout the necessary files from the stable branch to use an
appropriate yarn version.
PR Close#34954
We rename the `material-unit-tests` job to `components-repo-unit-tests`
because the job runs all unit tests found in the Angular Components repository.
This includes the Angular CDK, Angular Material and more. Also the repository has
been renamed from `angular/material2` to `angular/components` in the past.
PR Close#34898
Creates a Bazel macro that can be used to test packages for
circular dependencies. We face one limitation with Bazel:
* Built packages use module imports, and not relative source file
paths. This means we need custom resolution.
Fortunately, tools like `madge` support custom resolution.
Also removes the outdated `check-cycles` gulp task that
didn't catch circular dependencies. It seems like the test
became broken when we switched the packages-dist output to Bazel. It
breaks because the Bazel output doesn't use relative paths, but
module imports. This will be handled in the new Bazel macro/rule.
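For illustration, the kind of check the new macro wraps could be driven by madge's API along these lines (the entry-point path and resolution config are assumptions, not the actual setup):
```
// Sketch: fail if a built package contains circular imports, resolving bare
// module specifiers (e.g. '@angular/core') via a webpack-style alias config.
const madge = require('madge');

madge('dist/packages-dist/core/index.js', {
  // Hypothetical webpack config whose resolve.alias maps '@angular/*' module
  // imports back onto the built package output.
  webpackConfig: 'tools/circular-deps-test/resolve.webpack.js',
}).then(result => {
  const cycles = result.circular();
  if (cycles.length > 0) {
    console.error('Found circular dependencies:', JSON.stringify(cycles, null, 2));
    process.exit(1);
  }
});
```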
PR Close#34774
Since we always set up bazel on both our windows and linux CI
runs, we should consider this configuration part of the environment
setup.
PR Close#34834
Since we always execute our bazel commands using RBE on CI
we should not have separate tasks for configuring bazel
and setting up RBE for bazel. Enabling RBE for bazel should
be part of our bazel configuration process on CI.
PR Close#34834
Previously, we added patches to the `angular/components` repository in
the `material-unit-tests` job in order to be able to run their tests
with TypeScript 3.7. This is no longer needed since the components
repository has already updated to TypeScript 3.7. Hence we can remove the patches.
PR Close#34863
Updates the `material-unit-tests` job commit to the latest
available commit. We need to update since the `angular/components`
repository updated to TypeScript 3.7, which means that we can remove
the TS 3.7 workarounds we applied to get the job green when framework
updated to TS 3.7.
PR Close#34863
Updates the SHA that will be tested against in the `material-unit-tests` job to the latest commit in the components repository. SHA 71955d2e194bfc5561f25daea16e68af266d6ff9 is needed in order to compile the repository with TypeScript 3.7.
PR Close#33717
The components repository has a Yarn resolution to ensure that
dgeni-packages uses a specific TypeScript version. This resolution
causes the specified TS version to be considered as candidate for the
`@angular/bazel` peer dependency. Ultimately, Yarn decides to use
the TypeScript version from the resolution for `@angular/bazel`,
and builds will fail due to a version mismatch.
This is because `tsickle` will use the hoisted top-level TS
version (set to `3.7.4`), while `@angular/bazel` uses the
version from the resolution (at the time of writing: v3.6.4).
PR Close#33717
Updates the commit SHA the `material-unit-tests` CircleCI
job runs against. We need to include a commit that makes
the node module installation more deterministic (i.e. ensuring
that `tsickle` is always hoisted at the node module root).
31a50f7ad2
PR Close#33717
Also add a comment that the cache_key version should be bumped when switching forks or branches, and add a comment from @devversion explaining how the fallback cache key works.
PR Close#34589
The `aio_monitoring_stable` CI job is triggered as a cronjob on the
master branch and its purpose is to run some e2e tests against the
deployed stable version of the docs web-app at https://angular.io/. In
order for the tests to be compatible with the deployed version of the
web-app (which gets deployed from the stable branch), the stable branch
is checked out in git as part of the CI job.
Previously, we only checked out the `aio/` directory from the stable
branch, leaving the rest of the code at master. This doesn't matter as
long as the commands used to run the tests do not rely on code outside
of `aio/`. However, it turns out that there _is_ code outside of `aio/`
that affects the executed commands: It is our vendored version of yarn
(in `third_party/github.com/yarnpkg/`), which overwrites the global yarn
installed on the docker image on CI and must match the version range
specified in `aio/package.json > engines`.
Using the yarn version checked out from the master branch with the
`aio/` code checked out from the stable branch can lead to failures
such as [this one][1].
This commit fixes the problem by checking out both the `aio/` and
`third_party/github.com/yarnpkg/` directories from the stable branch and
re-running the steps to overwrite the global yarn executable with our
own version from `third_party/github.com/yarnpkg/`. This ensures that
the version of yarn used will be compatible with the version range
specified in `aio/package.json > engines`.
NOTE:
We cannot checkout everything from the stable branch, since the CI
config (`.circleci/config.yml` from the master branch) may try to run
certain scripts (such as `.circleci/get-vendored-yarn-path.js`) that are
not available on the stable branch. Therefore, we should only check out
the necessary bits from the stable branch.
[1]: https://circleci.com/gh/angular/angular/567315
PR Close#34451
Updates the material-unit-tests job commit SHA to the most recent
commit at the time of writing. The goal is to run the unit tests
with 6ae74a0eb2,
which improved the stability of a few menu tests that were flaky.
e.g. https://circleci.com/gh/angular/angular/564650
PR Close#34430
Currently the `saucelabs_view_engine` job fails because
the Saucelabs Bazel run script thinks that `--config=saucelabs`
is a flag targeting the actual script. This is not the case and
the flag should actually be part of the bazel command.
PR Close#34429
Currently we only run Saucelabs on PRs using the legacy View Engine
build. Switching that build to Ivy is not trivial and there are various
options:
1. Updating the R3 switches to use POST_R3 by default. At first glance,
this doesn't look easy because the current ngtsc switch logic seems to
be unidirectional (only PRE_R3 to POST_R3).
2. Updating the legacy setup to run with Ivy. This sounds like the easiest
solution at first, but it turns out to be way more complicated. Packages
would need to be built with ngtsc using legacy tools (i.e. first building
the compiler-cli; and then building packages) and View Engine only tests
would need to be determined and filtered out. Basically it will result in
re-auditing all test targets. This is contradictory to the fact that we have
this information in Bazel already.
3. Creating a new job that runs tests on Saucelabs with Bazel. We specify
fine-grained test targets that should run. This would be a good start
(e.g. acceptance tests) and also would mean that we do not continue maintaining
the legacy setup.
This commit implements the third option as it allows us to move forward
with the general Bazel migration. We don't want to spend too much time
on our legacy setup since it will be removed anyway in the future.
PR Close#34277
We keep a version of yarn in the repo, at
`third_party/github.com/yarnpkg/`. All CI jobs should use that version
for consistency (and easier updates).
Previously, the Windows jobs did not use the local version. They used
the version that came pre-installed on the docker image that we used.
This made it more difficult to update the yarn version (something that
we might want to do independently of updating other dependencies, such
as Node.js).
This commit fixes this by setting up the Windows CI jobs to also use the
local, vendored version of yarn.
PR Close#34384