Previously the lexer would break out of consuming a text token if it contained
a `<` character. If the next characters did not indicate an HTML syntax
item, such as a tag or comment, it would then start a new text token. These
consecutive text tokens were then merged together in a post-tokenization
step.
As of the previous commit, interpolation no longer leaks across text tokens,
so the approach described above for handling `<` characters that appear in text
is no longer adequate. This change ensures that the lexer only breaks out of
a text token if the next characters indicate a valid HTML syntax item such as
a tag, comment or CDATA section.
PR Close#42605
When consuming a text token, the lexer tracks whether it is reading characters
from inside an interpolation so that it can identify invalid ICU expressions.
Inside an interpolation there will be no ICU expression, so it is safe to
have unmatched `{` characters, but outside an interpolation this is an error.
Previously, if an interpolation was started by an opening marker (e.g. `{{`)
in a text token but the text came to an end before the closing marker (e.g. `}}`),
the lexer did not clear the internal state that tracked that it was
inside an interpolation. When the next text token was consumed,
the lexer incorrectly thought it was already within an interpolation.
This resulted in invalid ICU expression errors not being reported.
For example, in the following snippet, the first text block has a prematurely
ended interpolation, and the second text block contains an invalid `{` character.
```
<div>{{</div>
<div>{</div>
```
Previously, the lexer would not have identified this as an error. Now there
will be an EOF error that looks like:
```
TS-995002: Unexpected character "EOF"
(Do you have an unescaped "{" in your template? Use "{{ '{' }}") to escape it.)
```
PR Close#42605
This commit updates the parser logic to continue to try to match an end
tag to an unclosed open tag on the stack. Previously, it would only
push an error to the list and stop looking at unclosed elements.
For example, the invalid HTML `<li><div></li>` has an unclosed
element stack of [`li`, `div`] when the closing `li` tag is encountered.
We compare against the most recently unclosed tag, `div`, and see that it is
unexpected. Instead of simply giving up here, we continue to move up the
unclosed tags until we find a match (if there is one).
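The following is a minimal sketch of that recovery idea (names such as
`unclosedStack` are illustrative, not the actual parser code):
```ts
// Walk the stack of unclosed open tags from the most recent towards the root,
// instead of giving up after the first mismatch.
function findMatchingOpenTag(unclosedStack: string[], closeTagName: string): number {
  for (let i = unclosedStack.length - 1; i >= 0; i--) {
    if (unclosedStack[i] === closeTagName) {
      return i;  // e.g. `</li>` matches the `li` sitting below the unexpected `div`
    }
    // A mismatch (such as `div` when closing `li`) is reported as an error,
    // but the search continues up the stack.
  }
  return -1;  // no matching open tag at all
}
```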
PR Close#42554
The compiler's parsing code has logic to recover from incomplete open
tags (e.g. `<div`), but the recovery logic did not handle the case where the
incomplete tag is terminated by an EOF. This commit updates the logic to
allow the EOF character to be interpreted as the end of the tag open,
so that the parser can continue processing. It will then fail to find
the end tag and recover by marking the open tag as incomplete.
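For example (illustrative), a template that ends abruptly in the middle of an
open tag:
```html
<div><span class="title"
```
With this change the EOF terminates the tag open; the parser then fails to find
a matching end tag and recovers by marking the open tag as incomplete, rather
than aborting.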
Part of https://github.com/angular/vscode-ng-language-service/issues/1140
PR Close#41054
If the template parse option `leadingTriviaChars` is configured to
consider whitespace as trivia, any trailing whitespace of an element
would be considered as leading trivia of the subsequent element, such
that its `start` span would start _after_ the whitespace. This means
that the start span cannot be used to mark the end of the current
element, as its trailing whitespace would then be included in its span.
Instead, the full start of the subsequent element should be used.
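As an illustration (offsets made up for this example), consider two sibling
elements separated by whitespace that `leadingTriviaChars` classifies as trivia:
```
<b>one</b>   <i>two</i>
          ^ full start of the <i> span (offset 10, immediately after </b>)
             ^ start of the <i> span (offset 13, the leading whitespace is skipped)
```
Ending the `<b>` element's span at the full start (offset 10) keeps the trailing
whitespace out of its span, whereas ending it at the start (offset 13) would
incorrectly include that whitespace.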
To harden the tests for the Ivy parser, the test utility `parseR3`
has been adjusted to use the same configuration for `leadingTriviaChars`
as would be the case in its production counterpart `parseTemplate`. This
uncovered another bug in offset handling of the interpolation parser,
where the absolute offset was computed from the start source span
(which excludes leading trivia) whereas the interpolation expression
would include the leading trivia. As such, the absolute offset now also
uses the full start span.
Fixes#39148
PR Close#40513
The parser has a list of tag definitions that it uses when parsing the template. Each tag has a
`contentType` which tells the parser what kind of content the tag should contain. The problem is
that the browser has two separate `title` tags (`HTMLTitleElement` and `SVGTitleElement`) and each
of them has to have a different `contentType`, otherwise the parser will throw an error further down
the pipeline.
These changes update the tag definitions so that each tag name can have multiple content types
associated with it and the correct one can be returned based on the element's prefix.
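A minimal sketch of the idea (the names below are illustrative and simplified,
not the actual compiler code):
```ts
// The browser's two `title` elements need different content handling.
enum TagContentType {
  RAW_TEXT,
  ESCAPABLE_RAW_TEXT,
  PARSABLE_DATA,
}

// A tag definition can now map namespace prefixes to content types,
// with a default for the unprefixed case.
interface TagDefinition {
  contentType: TagContentType | {default: TagContentType; [prefix: string]: TagContentType};
}

function getContentType(def: TagDefinition, prefix?: string): TagContentType {
  if (typeof def.contentType === 'object') {
    const overridden = prefix === undefined ? undefined : def.contentType[prefix];
    return overridden ?? def.contentType.default;
  }
  return def.contentType;
}

// e.g. a hypothetical `title` definition:
const title: TagDefinition = {
  contentType: {default: TagContentType.ESCAPABLE_RAW_TEXT, svg: TagContentType.PARSABLE_DATA},
};
getContentType(title, 'svg');  // PARSABLE_DATA for <svg:title>
getContentType(title);         // ESCAPABLE_RAW_TEXT for plain <title>
```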
Fixes#31503.
PR Close#40259
This commit ensures that when leading whitespace is skipped by
the tokenizer, the original start location (before skipping) is captured
in the `fullStart` property of the token's source-span.
PR Close#39486
Let's say we have code like
```html
<div<span>123</span>
```
Currently this gets parsed into a tree with the element tag `div<span`.
This has at least two downsides:
- An incorrect diagnostic that `</span>` doesn't close an element is
emitted.
- A consumer of the parse tree using it for editor services is unable to
provide correct completions for the opening `<span>` tag.
This patch attempts to fix both issues by instead parsing the code into
the same tree that would be parsed for `<div></div><span>123</span>`.
In particular, we do this by optimistically scanning an open tag as
usual, but if we do not notice a terminating '>', we mark the tag as
"incomplete". A parser then emits an error for the incomplete tag and
adds a synthetic (recovered) element node to the tree with the
incomplete open tag's name.
What's the downside of this? For one, a breaking change.
1. The first breaking change is that `<` symbols that are ambiguously text
   or opening tags will be parsed as opening tags instead of text in
   element bodies. Take the code
   ```html
   <p>a<b</p>
   ```
   Clearly we cannot have the best of both worlds, and this patch chooses
   to swap the parsing strategy to support the new feature. Of course, `<`
   can still be inserted as text via the `&lt;` entity.
Part of #38596
PR Close#38681
Previously, the `sourceSpan` and `startSourceSpan` were the same
object, which meant that you had the following situation:
```
element = <div>some content</div>
sourceSpan = <div>
startSourceSpan = <div>
endSourceSpan = </div>
```
This made `sourceSpan` redundant and meant that if you
wanted a span for the whole element including its content
and closing tag, it had to be computed.
Now `sourceSpan` is separated from `startSourceSpan`
resulting in:
```
element = <div>some content</div>
sourceSpan = <div>some content</div>
startSourceSpan = <div>
endSourceSpan = </div>
```
PR Close#38581
Previously, the `startSourceSpan` property could be null, but in reality it is
always well defined - except for a legacy case in the old i18n
extraction/merging code, where the typings for source-spans are already
being undermined.
Making this property non-null simplifies code elsewhere
in the project.
PR Close#38581
Previously the lexer was responsible for deciding whether an "inline"
template should also have its line-endings normalized.
Now this decision is made higher up in the call stack to allow more
flexibility in the parser/lexer.
PR Close#38581
Within an Angular template, when a character entity could not be parsed, previously a generic
"unexpected character" error was thrown. This did not properly express the actual issue,
which is that the offending character makes the whole entity unparsable.
The compiler now instead reports in the error message what string it attempted to parse
and what kind of entity it attempted to parse it as.
For example, for this template:
```
<p>
&#x123p
</p>
```
Before this change:
`Unexpected character "p"`
After this change:
`Unable to parse entity "&#x123p" - hexadecimal character reference entities must end with ";"`
Fixes#26067
PR Close#38319
HTML is very lenient when it comes to closing elements, so Angular's parser has
rules that specify which elements are implicitly closed when closing a tag.
The parser keeps track of the nesting of tag names using a stack and parsing
a closing tag will pop as many elements off the stack as possible, provided
that the elements can be implicitly closed.
For example, consider the following templates:
- `<div><br></div>`, the `<br>` is implicitly closed when parsing `</div>`,
because `<br>` is a void element.
- `<div><p></div>`, the `<p>` is implicitly closed when parsing `</div>`,
as `<p>` is allowed to be closed by the closing of its parent element.
- `<ul><li>A <li>B</ul>`, the first `<li>` is implicitly closed when parsing
the second `<li>`, whereas the second `<li>` would be implicitly closed when
parsing the `</ul>`.
In all the cases above the parsed structure would be correct; however, the source
span of the closing `</div>` would incorrectly be assigned to the element that
is implicitly closed. The problem was that closing an element would associate
the source span with the element at the top of the stack; however, this may not
be the element that is actually being closed if some elements are
implicitly closed.
This commit fixes the issue by assigning the end source span to the element
on the stack that is actually being closed. Any implicitly closed elements that
are popped off the stack will not be assigned an end source span, as the
implicit closing implies that no ending element is present.
Note that there is a difference between self-closed elements such as `<input/>`
and implicitly closed elements such as `<input>`. The former does have an end
source span (identical to its start source span) whereas the latter does not.
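A hedged example of observing the difference, assuming the `HtmlParser` entry
point exported by `@angular/compiler`:
```ts
import {HtmlParser} from '@angular/compiler';

// The <p> is implicitly closed when the </div> is parsed.
const {rootNodes} = new HtmlParser().parse('<div><p>text</div>', 'demo.html');
const div = rootNodes[0] as any;  // <div> element node (typed loosely for brevity)
const p = div.children[0];        // the implicitly closed <p>

// After this fix, the </div> span is assigned to the <div> that is actually
// being closed, while the implicitly closed <p> has no end source span.
console.log(div.endSourceSpan !== null);  // true
console.log(p.endSourceSpan === null);    // true
```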
Fixes#36118
Resolves FW-2004
PR Close#38126
The HTML parser already normalizes line endings (converting `\r\n` to `\n`)
for most text in templates, but it was not doing so for the expressions of ICU expansions.
In ViewEngine, backticked literal strings used to define inline templates
were already normalized by the TypeScript parser.
In Ivy we parse the raw text of the source file directly, so the line
endings need to be manually normalized.
This change ensures that inline templates have the line endings of ICU
expressions normalized correctly, matching the ViewEngine behaviour.
For external templates, defined in HTML files, the ViewEngine behaviour was
different, since TypeScript was not involved in normalizing the line endings.
Specifically, ICU expansion "expressions" were not being normalized.
This is a problem because it means that i18n message ids can differ between
machines that are set up with different line ending handling,
or if the developer moves a template from inline to external or vice versa.
The goal is always to normalize line endings, whether inline or external.
But this would be a breaking change since it would change i18n message ids
that have been previously computed. Therefore this commit aligns the Ivy
template parsing to have the same "buggy" behavior for external templates.
There is now a compiler option `i18nNormalizeLineEndingsInICUs`, which
if set to `true` will ensure the correct non-buggy behavior. For the time
being this option defaults to `false` to ensure backward compatibility while
allowing opt-in to the desired behavior. This option's default will be
flipped in a future breaking change release.
Further, when this option is set to `false`, any ICU expression tokens,
which have not been normalized, are added to the `ParseResult` from the
`HtmlParser.parse()` method. In the future, this collection of tokens could
be used to diagnose and encourage developers to migrate their i18n message
ids. See FW-2106.
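For example, a project that wants the corrected behaviour right away can opt in
via its `angularCompilerOptions` (illustrative `tsconfig.json` fragment):
```
{
  "angularCompilerOptions": {
    "i18nNormalizeLineEndingsInICUs": true
  }
}
```
Note that, as described above, opting in can change i18n message ids that were
previously computed.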
Closes#36725
PR Close#36741
Previously, an expansion case could only start with an alphanumeric character.
This commit fixes this by allowing an expansion case to start with any character
except `}`.
The [ICU spec](http://userguide.icu-project.org/formatparse/messages) is pretty vague:
> Use a "select" argument to select sub-messages via a fixed set of keywords.
It does not specify what can be a "keyword" but from looking at the surrounding syntax it
appears that it can indeed be any string that does not contain a `}` character.
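For example (illustrative), a select case whose keyword starts with a
non-alphanumeric character is now accepted:
```html
{itemType, select, #hashtag {a hashtag} @mention {a mention} other {plain text}}
```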
Closes#31586
PR Close#36123
The major one that affects the Angular repo is the removal of the `bootstrap` attribute in `nodejs_binary`, `nodejs_test` and `jasmine_node_test` in favor of using `templated_args --node_options=--require=/path/to/script`. The side-effect of this is that the bootstrap script does not get the `require.resolve` patches that come with explicitly loading the target's `_loader.js` file.
PR Close#34736
The major one that affects the Angular repo is the removal of the `bootstrap` attribute in `nodejs_binary`, `nodejs_test` and `jasmine_node_test` in favor of using `templated_args --node_options=--require=/path/to/script`. The side-effect of this is that the bootstrap script does not get the `require.resolve` patches that come with explicitly loading the target's `_loader.js` file.
PR Close#34589
This is a breaking change in nodejs rules 0.40.0, part of the API review & cleanup for the 1.0 release. Their APIs are identical, as `ts_web_test` was just `karma_web_test` without the `config_file` attribute.
PR Close#33802
Similar to interpolation, we do not want to completely remove whitespace
nodes that are siblings of an expansion.
For example, the following template
```html
<div>
<strong>items left</strong> {count, plural, =1 {item} other {items}}
</div>
```
was being collapsed to
```html
<div><strong>items left</strong>{count, plural, =1 {item} other {items}}</div>
```
which results in the text looking like
```
items left4
```
instead it should be collapsed to
```html
<div><strong>items left</strong> {count, plural, =1 {item} other {items}}</div>
```
which results in the text looking like
```
items left 4
```
---
**Analysis of the code and manual testing have shown that this does not cause
the generated ids to change, so there is no breaking change here.**
PR Close#31962
Leading trivia, such as whitespace or comments, is
confusing for developers looking at source-mapped
templates, since they expect the source-map segment
to start after the trivia.
This commit adds support for skipping leading trivia characters to the lexer,
and the template parser then makes use of it.
PR Close#30095
This PR aligns the markup language lexer with the previous behaviour in version 7.x:
https://stackblitz.com/edit/angular-iancj2
While this behaviour is not perfect (we should be giving users an error message
here about invalid HTML instead of assuming a text node), this is probably the best we
can do without a more substantial re-write of the lexing / parsing infrastructure.
This PR just fixes#29231 and restores the VE behaviour - a more elaborate fix will
be done in a separate PR as it requires non-trivial rewrites.
PR Close#29328
BREAKING CHANGE:
Certain elements (like `<tr>` or `<col>`) are required by the HTML specification to have parent elements of a certain type
(e.g. `<tr>` can only be inside `<tbody>` / `<thead>`). Before this change the Angular template parser was auto-correcting
"invalid" HTML using the following rules:
- `<tr>` would be wrapped in `<tbody>` if not inside `<tbody>`, `<tfoot>` or `<thead>`;
- `<col>` would be wrapped in `<colgroup>` if not inside `<colgroup>`.
This mechanism of automatic wrapping / auto-correcting was problematic for several reasons:
- it is non-obvious and arbitrary (e.g. there are more HTML elements that have rules for their parent type);
- it is incorrect for cases where `<tr>` / `<col>` are at the root of a component's content, e.g.:
```html
<projecting-tr-inside-tbody>
<tr>...</tr>
</projecting-tr-inside-tbody>
```
In the above example the `<projecting-tr-inside-tbody>` component could be "surprised" to see additional
`<tbody>` elements inserted by Angular's HTML parser.
PR Close#29219
Previously the start of a character indicated by an escape sequence
was being incorrectly computed by the lexer, which caused tokens
to include the start of the escaped character sequence in the
preceding token. In particular this affected the name extracted
from opening tags if the name was terminated by an escape sequence.
For example, `<t\n>` would have the name `t\` rather than `t`.
This fix refactors the lexer to use a "cursor" object to iterate over
the characters in the template source. There are two cursor implementations,
one expects a simple string, the other expects a string that contains
JavaScript escape sequences that need to be unescaped.
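A minimal sketch of the cursor idea (illustrative names, not the actual lexer
classes):
```ts
// Both cursors expose the same iteration API to the lexer.
interface CharacterCursor {
  peek(): number;            // character code at the current position (-1 at the end)
  advance(): void;           // move to the next character
  clone(): CharacterCursor;  // remember a position, e.g. for token spans
}

// One implementation walks a plain string one character at a time...
class PlainCharacterCursor implements CharacterCursor {
  constructor(private input: string, private pos = 0) {}
  peek(): number {
    return this.pos < this.input.length ? this.input.charCodeAt(this.pos) : -1;
  }
  advance(): void {
    this.pos++;
  }
  clone(): CharacterCursor {
    return new PlainCharacterCursor(this.input, this.pos);
  }
}
// ...while a second one decodes JavaScript escape sequences (e.g. `\n`, `\u0041`)
// as it advances, so that `<t\n>` yields the tag name `t` rather than `t\`.
```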
PR Close#28978
The parts of a token are supposed to be an array of non-null strings,
but we were using `null` for tags that had no prefix. This has been
fixed to use the empty string in such cases, which allows the `null !`
hack to be removed.
PR Close#28978
When tokenizing markup (e.g. HTML) element attributes
can have quoted or unquoted values (e.g. `a=b` or `a="b"`).
The `ATTR_VALUE` tokens were capturing the quotes, which
was inconsistent and also affected source-mapping.
Now the tokenizer captures additional `ATTR_QUOTE` tokens,
which the HTML related parsers understand and factor into their
token parsing.
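Roughly (token names from this change; the exact token shapes are illustrative):
```
a="b"  ->  ATTR_NAME("a"), ATTR_QUOTE("\""), ATTR_VALUE("b"), ATTR_QUOTE("\"")
a=b    ->  ATTR_NAME("a"), ATTR_VALUE("b")
```
so the `ATTR_VALUE` token now contains only `b` in both cases, which keeps the
source mapping of the value consistent.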
PR Close#28055
In order to support source mapping of templates, we need
to be able to tokenize the template in its original context.
When the template is defined inline as a JavaScript string
in a TS/JS source file, the tokenizer must be able to handle
string escape sequences, such as `\n` and `\"` as they
appear in the original source file.
This commit teaches the lexer how to unescape these
sequences, but only when the `escapedString` option is
set to true. Otherwise there is no change to the tokenizing
behaviour.
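For instance (an illustrative component; the `escapedString` option itself is
internal to the tokenizer), the inline template below contains the two-character
escape sequence `\n` as it appears in the TypeScript source file:
```ts
import {Component} from '@angular/core';

@Component({
  selector: 'greeting',
  // In the .ts file this string contains a backslash followed by `n`;
  // with `escapedString` enabled the tokenizer unescapes it to a real
  // newline while keeping positions relative to the original source file.
  template: 'Hello,\n<b>{{name}}</b>',
})
export class GreetingComponent {
  name = 'world';
}
```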
PR Close#28055
The lexer that does the tokenizing can now process only a part of the source
string, by passing a `range` property in the `options` argument. The
locations of the nodes that are tokenized will now take into account the
position of the span in the context of the original source string.
This `range` option is, in turn, exposed from the template parser as well.
Being able to process parts of files helps to enable SourceMap support
when compiling inline component templates.
PR Close#28055
This commit consolidates the options that can modify the
parsing of text (e.g. HTML, Angular templates, CSS, i18n)
into an AST for further processing into a single `options`
hash.
This makes the code cleaner and more readable, but also
enables us to support further parsing options without
triggering wide-ranging changes to code that should not
be affected by these new options. Specifically, it will let
us pass information about the placement of a template
that is being parsed in its containing file, which is essential
for accurate SourceMap processing.
PR Close#28055
Most of the specs in these tests are not relevant to Ivy:
//packages/compiler/test:test
//packages/compiler/test:test_web_chromium-local
However, a few of them test pieces of the compiler infrastructure that are used in
Ivy, so new BUILD.bazel files have been created to separate those from the
disabled targets above:
//packages/compiler/test/css_parser:css_parser
//packages/compiler/test/css_parser:css_parser_web
//packages/compiler/test/expression_parser:expression_parser
//packages/compiler/test/expression_parser:expression_parser_web
//packages/compiler/test/ml_parser:ml_parser
//packages/compiler/test/ml_parser:ml_parser_web
//packages/compiler/test/selector:selector
//packages/compiler/test/selector:selector_web
PR Close#27301
BREAKING CHANGE:
The `<template>` tag was deprecated in Angular v4 to avoid collisions (i.e. when
using Web Components).
This commit removes support for `<template>`. `<ng-template>` should be used
instead.
BEFORE:
```html
<!-- html template -->
<template>some template content</template>
```
```
# tsconfig.json
{
  # ...
  "angularCompilerOptions": {
    # ...
    # This option is no longer supported and will have no effect
    "enableLegacyTemplate": [true|false]
  }
}
```
AFTER:
```html
<!-- html template -->
<ng-template>some template content</ng-template>
```
PR Close#22783