The partial compiler will add a version number to the objects that are
generated so that the linker can select the appropriate partial linker
class to process the metadata.
Previously this version matching was a simple number check. Now
the partial compilation writes the current Angular compiler version
into the generated metadata, and semantic version ranges are used
to select the appropriate partial linker.
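As a rough sketch of the idea (not the compiler's actual implementation; the registry and linker shapes here are hypothetical), the linker can use a package such as `semver` to match the recorded compiler version against the ranges each partial linker supports:
```
import {satisfies} from 'semver';

// Hypothetical registry of partial-linker implementations, keyed by the
// range of compiler versions whose generated metadata they understand.
interface PartialLinker {
  linkPartialDeclaration(metadata: unknown): unknown;
}

const LINKER_RANGES: {range: string; linker: PartialLinker}[] = [
  {range: '>=11.1.0', linker: {linkPartialDeclaration: metadata => metadata}},
];

// Select the first linker whose supported range matches the compiler version
// that was written into the generated metadata.
function selectLinker(compilerVersion: string): PartialLinker {
  const entry = LINKER_RANGES.find(e => satisfies(compilerVersion, e.range));
  if (entry === undefined) {
    throw new Error(`No partial linker supports compiler version ${compilerVersion}`);
  }
  return entry.linker;
}
```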
PR Close#39847
This commit adds support in the Ivy Language Service for autocompletion in a
global context - e.g. a `{{foo|}}` completion.
Support is added both for the primary function `getCompletionsAtPosition` as
well as the detail functions `getCompletionEntryDetails` and
`getCompletionEntrySymbol`. These latter operations are not used yet as an
upstream change to the extension is required to advertise and support this
capability.
PR Close#39250
The newly built compliance test runner was not using the shared source
file cache that was added in b627f7f02e,
which offers a significant performance boost to the compliance test
targets.
PR Close#39956
When the compiler is invoked via ngc or the Angular CLI, its APIs are used
under the assumption that Angular analysis/diagnostics are only requested if
the program has no TypeScript-level errors. A result of this assumption is
that the incremental engine has not needed to resolve changes via its
dependency graph when the program contained broken imports, since broken
imports are a TypeScript error.
The Angular Language Service for Ivy is using the compiler as a backend, and
exercising its incremental compilation APIs without enforcing this
assumption. As a result, the Language Service has run into issues where
broken imports cause incremental compilation to fail and produce incorrect
results.
This commit introduces a mechanism within the compiler to keep track of
files for which dependency analysis has failed, and to always treat such
files as potentially affected by future incremental steps. This is tested
via the Language Service infrastructure to ensure that the compiler is doing
the right thing in the case of invalid imports.
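A minimal sketch of that mechanism, under simplified and hypothetical names (the real dependency graph is considerably richer):
```
// Files whose import analysis failed are conservatively treated as affected
// by every incremental step, since their true dependencies are unknown.
class FileDependencyTracker {
  private deps = new Map<string, Set<string>>();
  private failedAnalysis = new Set<string>();

  recordDependencies(file: string, imports: Iterable<string>): void {
    this.deps.set(file, new Set(imports));
  }

  recordFailedAnalysis(file: string): void {
    this.failedAnalysis.add(file);
  }

  isAffected(file: string, changedFiles: Set<string>): boolean {
    if (this.failedAnalysis.has(file)) {
      return true;
    }
    if (changedFiles.has(file)) {
      return true;
    }
    const imports = this.deps.get(file);
    return imports !== undefined && Array.from(imports).some(dep => changedFiles.has(dep));
  }
}
```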
PR Close#39923
Previously, if a component had an external template with a hard error, the
compiler would "forget" the link between that component and its NgModule.
Additionally, the NgModule would be marked as being in error, because the
template issue would prevent the compiler from registering the component
class as a component, so from the NgModule it would look like a declaration
of a non-directive/pipe class. As a combined result, the next incremental
step could fix the template error, but would not refresh diagnostics for the
NgModule, leading to an incrementality issue.
The various facets of this problem were fixed in prior commits. This commit
adds a test verifying the above case works now as expected.
PR Close#39923
To avoid overwhelming a user with secondary diagnostics that derive from a
"root cause" error, the compiler has the notion of a "poisoned" NgModule.
An NgModule becomes poisoned when its declaration contains semantic errors:
declarations which are not components or pipes, imports which are not other
NgModules, etc. An NgModule also becomes poisoned if it imports or exports
another poisoned NgModule.
Previously, the compiler tracked this poisoned status as an alternate state
for each scope. Either a correct scope could be produced, or the entire
scope would be set to a sentinel error value. This meant that the compiler
would not track any information about a scope that was determined to be in
error.
This approach presents several issues:
1. The compiler is unable to support the language service and return results
when a component or its module scope is poisoned.
This is fine for compilation, since diagnostics will be produced showing the
error(s), but the language service still needs to work for incorrect code.
2. `getComponentScopes()` does not return components with a poisoned scope,
which interferes with resource tracking of incremental builds.
If the component isn't included in that list, then the NgModule for it will
not have its dependencies properly tracked, and this can cause future
incremental build steps to produce incorrect results.
This commit changes the tracking of poisoned module scopes to use a flag on
the scope itself, rather than a sentinel value that replaces the scope. This
means that the scope itself will still be tracked, even if it contains
semantic errors. A test is added to the language service which verifies that
poisoned scopes can still be used in template type-checking.
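In simplified (and hypothetical) types, the shape of the change is roughly:
```
// Before (sketch): a poisoned NgModule lost its scope entirely.
type ModuleScopeBefore = {directives: string[]; pipes: string[]} | 'error';

// After (sketch): the scope data is retained even when it contains semantic
// errors, and the error state is tracked as a flag on the scope itself.
interface ModuleScopeAfter {
  directives: string[];
  pipes: string[];
  isPoisoned: boolean;
}

// Emit can still be skipped for poisoned scopes, while consumers such as the
// language service keep receiving the scope for template type-checking.
function canEmit(scope: ModuleScopeAfter): boolean {
  return !scope.isPoisoned;
}
```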
PR Close#39923
Previously, if a trait's analysis step resulted in diagnostics, the trait
would be considered "errored" and no further operations, including register,
would be performed. Effectively, this meant that the compiler would pretend
the class in question was actually undecorated.
However, this behavior is problematic for several reasons:
1. It leads to inaccurate diagnostics being reported downstream.
For example, if a component is put into the error state due to a template
error, the NgModule which declares the component would produce a
diagnostic claiming that the declaration is neither a directive nor a pipe.
This happened because the compiler wouldn't register() the component trait,
so the component would not be recorded as actually being a directive.
2. It can cause incorrect behavior on incremental builds.
This bug is more complex, but the general issue is that if the compiler
fails to associate a component and its module, then incremental builds will
not correctly re-analyze the module when the component's template changes.
Failing to register the component as such is one link in the larger chain of
problems that lead to these kinds of incorrect results.
3. It lumps together diagnostics produced during analysis and resolve steps.
This is not causing issues currently as the dependency graph ensures the
right classes are re-analyzed when needed, instead of showing stale
diagnostics. However, the dependency graph was not intended to serve this
role, and could potentially be optimized in ways that would break this
functionality.
This commit removes the concept of an "errored" trait entirely from the
trait system. Instead, analyzed and resolved traits have corresponding (and
separate) diagnostics, in addition to potentially `null` analysis results.
Analysis (but not resolution) diagnostics are carried forward during
incremental build operations. Compilation (emit) is only performed when
a trait reaches the resolved state with no diagnostics.
This change is functionally different from before, as the `register` step is
now performed even in the presence of analysis errors, as long as analysis
results are also produced. This fixes problem 1 above, and is part of the
larger solution to problem 2.
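A simplified sketch of the resulting trait states (field names are illustrative, not the exact compiler types):
```
import * as ts from 'typescript';

// Analyzed and resolved traits carry their own diagnostics alongside results
// that may be null; there is no separate "errored" state.
interface AnalyzedTrait<A> {
  state: 'analyzed';
  analysis: A | null;
  analysisDiagnostics: ts.Diagnostic[] | null;
}

interface ResolvedTrait<A, R> {
  state: 'resolved';
  analysis: A | null;
  analysisDiagnostics: ts.Diagnostic[] | null;
  resolution: R | null;
  resolveDiagnostics: ts.Diagnostic[] | null;
}

// Compilation (emit) only happens for a trait that resolved with no diagnostics.
function canCompile<A, R>(trait: ResolvedTrait<A, R>): boolean {
  return trait.analysisDiagnostics === null && trait.resolveDiagnostics === null;
}
```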
PR Close#39923
If the test case has not specified that errors were expected, then any
errors that have occurred should be reported. These errors may have
prevented an output file from being generated, which resulted in
hard-to-debug test failures due to missing files.
PR Close#39862
The Language Service "find references" currently uses the
`ngtypecheck.ts` suffix to determine if a file is a shim file. Instead,
a better API would be to expose a method in the template type checker
that does this verification so that the LS does not have to "know" about
the typecheck suffix. This also fixes an issue (albeit unlikely) whereby a file
in the user's program that _actually_ is named with the `ngtypecheck.ts`
suffix would have been interpreted as a shim file.
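The difference can be sketched roughly as follows (the method name is illustrative of the kind of API described above, not necessarily the exact one):
```
// Before (sketch): the language service guessed from the file name, which
// could misclassify a user file that happens to use the same suffix.
function isShimByNamingConvention(fileName: string): boolean {
  return fileName.endsWith('.ngtypecheck.ts');
}

// After (sketch): the template type checker, which created the shims, is
// asked directly whether it owns the file.
interface TemplateTypeCheckerLike {
  isTrackedTypeCheckFile(fileName: string): boolean;
}

function isShimFile(checker: TemplateTypeCheckerLike, fileName: string): boolean {
  return checker.isTrackedTypeCheckFile(fileName);
}
```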
PR Close#39768
This commit adds "find references" functionality to the Ivy integrated
language service. The basic approach is as follows:
1. Generate shims for all files to ensure we find references in shims
throughout the entire program
2. Determine if the position for the reference request is within a
template.
* Yes, it is in a template: Find which node in the template AST the
position refers to. Then find the position in the shim file for that
template node. Pass the shim file and position in the shim file along
to step 3.
* No, the request for references was made outside a template: Forward
the file and position to step 3.
3. (`getReferencesAtTypescriptPosition`): Call the native TypeScript LS
`getReferencesAtPosition`. Map each reference that is in a shim file
back to a template location; otherwise return it as-is.
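A condensed sketch of that flow, using hypothetical mapping helpers around the plain TypeScript language service:
```
import * as ts from 'typescript';

// Hypothetical helper that maps between template positions and shim positions.
interface ShimMapper {
  toShimPosition(fileName: string, position: number):
      {fileName: string; position: number} | null;
  toTemplateLocation(fileName: string, textSpan: ts.TextSpan):
      {templateFile: string; textSpan: ts.TextSpan} | null;
}

function getReferencesAtTypescriptPosition(
    tsLs: ts.LanguageService, mapper: ShimMapper, fileName: string, position: number) {
  // Step 2: if the request originates inside a template, translate it to the
  // corresponding position in the type-check shim first.
  const target = mapper.toShimPosition(fileName, position) ?? {fileName, position};

  // Step 3: delegate to the TypeScript language service, then map any results
  // that land in shim files back to template locations.
  const refs = tsLs.getReferencesAtPosition(target.fileName, target.position) ?? [];
  return refs.map(ref => mapper.toTemplateLocation(ref.fileName, ref.textSpan) ?? ref);
}
```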
PR Close#39768
There were two issues with the current TCB:
1. The logic that only wrapped the right-hand side of a property write
if it was not already a parenthesized expression was incorrect. A
parenthesized expression could still have a trailing comment, in which
case the span comment would still be ambiguous, as explained by the
comment in the code before `wrapForTypeChecker`.
2. The right-hand side of keyed writes was not wrapped in parentheses at all.
PR Close#39768
In order to map a safe property read's method access in the type check block
directly back to the property in the template source, we need to
include the `SafePropertyRead`'s `nameSpan` with the corresponding
`ts.propertyAccess`.
Note that this is specifically relevant to the Language Service's "find
references" feature. As an example, with something like `{{a?.value}}`,
when calling "find references" on the 'value' we want the text
span of the reference to just be `value` rather than the entire source
`a?.value`.
PR Close#39768
In order to map the pipe's `transform` method in the type check block
directly back to the pipe name in the template source, we need to
include the `BindingPipe`'s `nameSpan` with the `ts.methodAccess` for
the pipe's transform method.
Note that this is specifically relevant to the Language Service's "find
references" feature. As an example, with something like `-2.5 | number:'1.0-0'`,,
when calling "find references" on the 'number' pipe we want the text
span of the reference to just be `number` rather than the entire binding
pipe's source `-2.5 | number:'1.0-0'`.
PR Close#39768
Previously this would have just printed that `false` was not equal to
`true`, which, although true, is not very helpful. This commit adds
details about which special check failed together with the generated
code, for easier debugging.
PR Close#39863
This commit provides the machinery for the new file-based compliance test
approach for i18n tests, and migrates the i18n tests to this new format.
PR Close#39661
This commit implements partial compilation of components, together with
linking the partial declaration into its full AOT output.
This commit does not yet enable accurate source maps into external
templates. This requires additional work to account for escape sequences
which is non-trivial. Inline templates that were represented using a
string or template literal are transplanted into the partial declaration
output, so their source maps should be accurate. Note, however, that
the accuracy of source maps is not currently verified in tests; this is
also left as future work.
The golden files of partial compilation output have been updated to
reflect the generated code for components. Please note that the current
output should not yet be considered stable.
PR Close#39707
In production mode this flag defaults to `true`, but the compliance
tests override this to `false` unless it is provided. As such, the
linker should also adhere to this default as otherwise the compilation
output would not align with the output of the full tests.
There are still tests that exercise explicit values of this flag, as well
as leaving it `undefined`, to verify the behavior of the actual default
value.
PR Close#39707
The linker does not currently support outputting ES5 syntax, so any
compliance tests that request ES5 output cannot be run in partial
compilation mode. This commit marks these tests as pending.
PR Close#39707
This commit adds the `i18nUseExternalIds` option to the linker options,
as the compliance tests exercise compilation results with and without
this flag enabled. We therefore need to configure the linker to take
this option into account, as otherwise the compliance test output would
not be identical.
Additionally, this commit switches away from using spread syntax to set
the default options. The spread approach caused a problem when the
user-provided options object specified a key with an `undefined` value,
which would have prevented the corresponding default option from being applied.
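The pitfall can be sketched like this (option names reduced to the one discussed above): a key that is present but `undefined` wins over the default when spread, whereas an explicit check does not.
```
interface LinkerOptions {
  i18nUseExternalIds: boolean;
}

const DEFAULT_LINKER_OPTIONS: LinkerOptions = {i18nUseExternalIds: true};

// Problematic: `{i18nUseExternalIds: undefined}` overrides the default with
// `undefined`, because the key is present on the user object.
function withSpread(user: Partial<LinkerOptions>): LinkerOptions {
  return {...DEFAULT_LINKER_OPTIONS, ...user};
}

// Safer: only use the user value when it is actually defined.
function withExplicitDefaults(user: Partial<LinkerOptions>): LinkerOptions {
  return {
    i18nUseExternalIds: user.i18nUseExternalIds !== undefined ?
        user.i18nUseExternalIds :
        DEFAULT_LINKER_OPTIONS.i18nUseExternalIds,
  };
}
```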
PR Close#39707
The metadata specification of queries allows for the boolean properties
`first`, `descendants` and `static` to be missing, but the linker did
not account for their omission.
This fix is tested in subsequent commits that implement compilation of
components, at which point this will be covered by the compliance tests.
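A minimal sketch of the kind of handling involved (field names follow the description above; the serialized shape and the defaults shown are placeholders, not necessarily the real ones):
```
// Hypothetical serialized shape: the boolean fields may be omitted entirely.
interface QueryMetadataJson {
  propertyName: string;
  first?: boolean;
  descendants?: boolean;
  static?: boolean;
}

function readQueryMetadata(obj: QueryMetadataJson) {
  return {
    propertyName: obj.propertyName,
    // Fall back to defaults instead of assuming the properties are present.
    first: obj.first ?? false,
    descendants: obj.descendants ?? false,
    static: obj.static ?? false,
  };
}
```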
PR Close#39707
The compilation result of components may have inserted template
functions into the constant pool, which would be inserted into the Babel
AST upon program exit. Babel will then proceed with visiting this newly
inserted subtree, but we have already cleaned up the linker instance
when exiting the program. Any call expressions within the template
functions would then fail to be processed, as a file linker would no
longer be available.
Since the inserted AST subtree is known not to contain yet more partial
declarations, it is safe to skip visiting call expressions when no
file linker is available.
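A hand-written sketch of the guard using Babel's visitor API (the linker shape here is illustrative, not the real linker):
```
import {NodePath, PluginObj} from '@babel/core';
import * as t from '@babel/types';

export function createLinkerPluginSketch(): PluginObj {
  // Illustrative stand-in for the per-file linker instance.
  let fileLinker: {linkCall(path: NodePath<t.CallExpression>): void} | null = null;

  return {
    visitor: {
      Program: {
        enter() {
          fileLinker = {linkCall: () => { /* process partial declarations */ }};
        },
        exit() {
          // Constant-pool statements inserted on exit are still visited by
          // Babel after this point, but the linker is already torn down.
          fileLinker = null;
        },
      },
      CallExpression(path: NodePath<t.CallExpression>) {
        if (fileLinker === null) {
          // The inserted template functions contain no partial declarations,
          // so skipping them is safe.
          return;
        }
        fileLinker.linkCall(path);
      },
    },
  };
}
```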
PR Close#39707
The type checker had to do extensive work in resolving the
`NodePath.get` method call for the `NodePath` that had an intersection
type of `ts.VariableDeclarator&{init:t.Expression}`. The `NodePath.get`
method is typed using a conditional type which became expensive to
compute with this intersection type. As a workaround, the original
`init` property is explicitly omitted which avoids the performance
cliff. This brings down the compile time by 15s.
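In simplified form, the shape of the workaround (the type aliases here are illustrative):
```
import * as t from '@babel/types';

// Before: resolving `NodePath.get` against this intersection type was very
// expensive for the TypeScript type checker.
type InitializedDeclaratorBefore = t.VariableDeclarator & {init: t.Expression};

// After: explicitly omitting the original `init` property describes the same
// runtime shape while avoiding the performance cliff.
type InitializedDeclaratorAfter =
    Omit<t.VariableDeclarator, 'init'> & {init: t.Expression};
```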
PR Close#39707
The JSON schema reference was off-by-one, preventing IDEs from finding
the file and offering suggestions and documentation. Additionally the
name of the golden file was slightly off.
PR Close#39707
If a template declares a reference to a missing target then referring to
that reference from elsewhere in the template would crash the template
type checker, due to a regression introduced in #38618. This commit
fixes the crash by ensuring that the invalid reference will resolve to
a variable of type any.
Fixes#39744
PR Close#39805
When `preserveWhitespaces` is not true, the template parser will
process the parsed AST nodes to remove excess whitespace. Since the
generated `goog.getMsg()` statements rely upon the AST nodes after
this whitespace is removed, the i18n extraction must make a second pass.
Previously this resulted in inaccurate source-spans for the i18n text and
placeholder nodes that were extracted in the second pass.
This commit fixes this by reusing the source-spans from the first pass
when extracting the nodes in the second pass.
Fixes#39671
PR Close#39717
Consumers of the `TemplateTypeChecker` API could be interested in
mapping from a shim location back to the original source location in the
template. One concrete example of this use-case is for the "find
references" action in the Language Service. This will return locations
in the TypeScript shim file, and we will then need to be able to map the
result back to the template.
PR Close#39715
Both `ReferenceSymbol` and `VariableSymbol` have two locations of
interest to an external consumer.
1. The location of the initializers of the local TCB variables allows
consumers to query the TypeScript Language Service for information about
the initialized type of the variable.
2. The location of the local variable itself (i.e. `_t1`) allows
consumers to query the TypeScript LS for references to that variable
from within the template.
PR Close#39715
The 15.x versions of `yargs` relied upon a version of `y18n` that
has a SNYK vulnerability.
This commit updates the overall project, and therefore also the
`localize` and `compiler-cli` packages to use the latest version
of `yargs` that does not depend upon the vulnerable `y18n`
version.
The AIO project was already on the latest `yargs` version and so
does not need upgrading.
Fixes#39743
PR Close#39749
Currently `readConfiguration` relies on the file system to perform the disk
operations needed to determine a project's configuration file and read
it. This poses a challenge for the language service, which would like to
use `readConfiguration` to watch and read configurations dependent on
extended tsconfigs (#39134). Challenges are at least twofold:
1. To test this, the language service would need to provide to the
compiler a mock file system.
2. The language service uses file system utilities primarily through
TypeScript's `Project` abstraction. In general this should correspond
to the underlying file system, but it may differ and it is better to
go through one channel when possible.
This patch alleviates the concern by directly providing to the compiler
a `ParseConfigurationHost` with read-only "file system"-like utilities.
For the language service, this host is derived from the project owned by
the language service.
For more discussion see
https://docs.google.com/document/d/1TrbT-m7bqyYZICmZYHjnJ7NG9Vzt5Rd967h43Qx8jw0/edit?usp=sharing
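A rough sketch of the approach (the host interface and its members are illustrative, not the exact compiler API):
```
// Read-only "file system"-like host that the compiler can be given instead of
// touching the disk directly; the language service derives one from its Project.
interface ConfigurationHost {
  exists(fileName: string): boolean;
  readFile(fileName: string): string;
  resolve(...segments: string[]): string;
}

function readConfigurationSketch(project: string, host: ConfigurationHost): unknown {
  const configFile =
      project.endsWith('.json') ? project : host.resolve(project, 'tsconfig.json');
  if (!host.exists(configFile)) {
    throw new Error(`Could not find configuration file: ${configFile}`);
  }
  // Parsing (including any extended tsconfigs) would also go through the host
  // rather than the real file system.
  return JSON.parse(host.readFile(configFile));
}
```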
PR Close#39619
ngtsc's testing infrastructure uses a mock version of @angular/core, which
allows tests to run without requiring the real version of core to be built.
This commit adds a mock version of @angular/common as well, as the language
service tests are written to test against common.
Only a handful of directives/pipes from common are currently supported.
PR Close#39594
ngtsc has a robust suite of testing utilities, designed for in-memory
testing of a TypeScript compiler. Previously these utilities lived in the
`test` directory for the compiler-cli package.
This commit moves those utilities to an `ngtsc/testing` package, enabling
them to be depended on separately and opening the door for using them from
the upcoming language server testing infrastructure.
As part of this refactoring, the `fake_core` package (a lightweight API
replacement for @angular/core) is expanded to include functionality needed
for Language Service test use cases.
PR Close#39594
Currently when we encounter an implicit method call (e.g. `{{ foo(1) }}`) and we manage to resolve
its receiver to something within the template, we assume that the method is on the receiver itself
so we generate type checking code to reflect it. This assumption is true in most cases, but it
breaks down if the call is on an implicit receiver and the receiver itself is being invoked. E.g.
```
<div *ngFor="let fn of functions">{{ fn(1) }}</div>
```
These changes resolve the issue by generating a regular function call if the method call's receiver
is pointing to `$implicit`.
Fixes#39634.
PR Close#39686
In order to more accurately map from a node in the TCB to a template position,
we need to provide more span information in the TCB. These changes are necessary
for the Language Service to map from a TCB node back to a specific
location in the template for actions like "find references" and
"refactor/rename". After the TS "find references" returns results,
including those in the TCB, we need to map specifically to the matching
key/value spans in the template rather than the entire source span.
This also has the benefit of producing diagnostics which align more
closely with what TypeScript produces.
The following example shows TS code and the diagnostic produced by an invalid assignment to a property:
```
let a: {age: number} = {} as any;
a.age = 'laksjdf';
^^^^^ <-- Type 'string' is not assignable to type 'number'.
```
A corollary to this in a template file would be `[age]="'someString'"`. The diagnostic we currently produce for this is:
```
Type 'number' is not assignable to type 'string'.
1 <app-hello [greeting]="1"></app-hello>
~~~~~~~~~~~~~~
```
Notice that the underlined text includes the entire span.
If we included the keySpan for the assignment to the property,
this diagnostic underline would be more similar to the one produced by TypeScript;
that is, it would only underline “greeting”.
[design/discussion doc](https://docs.google.com/document/d/1FtaHdVL805wKe4E6FxVTnVHl38lICoHIjS2nThtRJ6I/edit?usp=sharing)
PR Close#39665
ngtsc will avoid emitting generated imports that would create an import
cycle in the user's program. The main way such imports can arise is when
a component would ordinarily reference its dependencies in its component
definition `directiveDefs` and `pipeDefs`. This requires adding imports,
which run the risk of creating a cycle.
When ngtsc detects that adding such an import would cause this to occur, it
instead falls back on a strategy called "remote scoping", where a
side-effectful call to `setComponentScope` in the component's NgModule file is
used to patch `directiveDefs` and `pipeDefs` onto the component. Since the
NgModule file already imports all of the component's dependencies (to
declare them in the NgModule), this approach does not risk adding a cycle.
It has several large downsides, however:
1. it breaks under `sideEffects: false` logic in bundlers including the CLI
2. it breaks tree-shaking for the given component and its dependencies
See this doc for further details: https://hackmd.io/Odw80D0pR6yfsOjg_7XCJg?view
In particular, the impact on tree-shaking was exacerbated by the naive logic
ngtsc used to employ here. When this feature was implemented, at the time of
generating the side-effectful `setComponentScope` call, the compiler did not
know which of the component's declared dependencies were actually used in
its template. This meant that unlike the generation of `directiveDefs` in
the component definition itself, `setComponentScope` calls had to list the
_entire_ compilation scope of the component's NgModule, including directives
and pipes which were not actually used in the template. This made the
tree-shaking impact much worse, since if the component's NgModule made use of any
shared NgModules (e.g. `CommonModule`), every declaration therein would
become un-treeshakable.
Today, ngtsc does have the information on which directives/pipes are
actually used in the template, but this was not being used during the remote
scoping operation. This commit modifies remote scoping to take advantage of
the extra context and only list used dependencies in `setComponentScope`
calls, which should ameliorate the tree-shaking impact somewhat.
PR Close#39662