Updated PEP 649 to v2.

Larry Hastings 2021-04-12 06:27:13 -07:00
parent 4134c46262
commit c11c646746
1 changed file with 442 additions and 142 deletions


@ -5,7 +5,7 @@ Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 11-Jan-2021
Post-History: 11-Jan-2021
Post-History: 11-Jan-2021, 11-Apr-2021
Abstract
@ -14,37 +14,69 @@ Abstract
As of Python 3.9, Python supports two different behaviors
for annotations:
* original Python semantics, in which annotations are evaluated
at the time they are bound, and
* original or "stock" Python semantics, in which annotations
are evaluated at the time they are bound, and
* PEP 563 semantics, currently enabled per-module by
``from __future__ import annotations``, in which annotations
are converted back into strings and must be parsed by ``eval()``
to be used.
are converted back into strings and must be reparsed and
executed by ``eval()`` to be used.
Original Python semantics created a circular references problem
for static typing analysis. PEP 563 solved that problem, but
its novel semantics introduced new problems.
for static typing analysis. PEP 563 solved that problem--but
its novel semantics introduced new problems, including its
restriction that annotations can only reference names at
module-level scope.
This PEP proposes a third way that embodies the best of both
previous approaches. It solves the same circular reference
problems solved by PEP 563, while preserving Python's original
straightforward runtime semantics for annotations.
problems solved by PEP 563, while otherwise preserving Python's
original annotation semantics, including allowing annotations
to refer to local and class variables.
In this new approach, the code to generate the annotations
dict is written to its own callable, and ``__annotations__``
is a "data descriptor" which calls the callable once and
preserves the result.
dict is written to its own function which computes and returns
the annotations dict. Then, ``__annotations__`` is a "data
descriptor" which calls this annotation function once and
retains the result. This delays the evaluation of annotations
expressions until the annotations are examined, at which point
all circular references have likely been resolved. And if
the annotations are never examined, the function is never
called and the annotations are never computed.
Annotations defined using this PEP's semantics have the same
visibility into the symbol table as annotations under "stock"
semantics--any name visible to an annotation in Python 3.9
is visible to an annotation under this PEP. In addition,
annotations under this PEP can refer to names defined *after*
the annotation is defined, as long as the name is defined in
a scope visible to the annotation. Specifically, when this PEP
is active:
* An annotation can refer to a local variable defined in the
current function scope.
* An annotation can refer to a local variable defined in an
enclosing function scope.
* An annotation can refer to a class variable defined in the
current class scope.
* An annotation can refer to a global variable.
And in all four of these cases, the variable referenced by
the annotation needn't be defined at the time the annotation
is defined--it can be defined afterwards. The only restriction
is that the name or variable be defined before the annotation
is *evaluated.*
If accepted, these new semantics for annotations would initially
be gated behind ``from __future__ import co_annotations``. However,
these semantics would eventually be promoted to be the default behavior.
Thus this PEP would *supersede* PEP 563, and PEP 563's behavior would
be deprecated and eventually removed.
be gated behind ``from __future__ import co_annotations``.
However, these semantics would eventually be promoted to be
Python's default behavior. Thus this PEP would *supersede*
PEP 563, and PEP 563's behavior would be deprecated and
eventually removed.
Overview
========
.. note:: The code presented in this section is highly simplified
.. note:: The code presented in this section is simplified
for clarity. The intention is to communicate the high-level
concepts involved without getting lost in the details.
The actual details are often quite different. See the
@ -128,11 +160,13 @@ like this::
The important change is that the code constructing the
annotations dict now lives in a function—here, called
``foo_annotations__fn()``. But this function isn't called
``foo_annotations_fn()``. But this function isn't called
until we ask for the value of ``foo.__annotations__``,
and we don't do that until *after* the definition of ``MyType``.
So this code also runs successfully, and ``foo_y_type`` now
has the correct value, the class ``MyType``.
has the correct value--the class ``MyType``--even though
``MyType`` wasn't defined until *after* the annotation was
defined.
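As a rough, hand-wired sketch of this mechanism (the name
``foo_annotations_fn`` and the manual attribute assignments below are
illustrative only; in reality the compiler generates the annotation
function and a data descriptor performs the call and caching)::

    def foo(x=3, y=None): ...

    def foo_annotations_fn():
        # The generated annotation function: it evaluates the annotation
        # expressions and returns the dict.  Nothing runs until it's called.
        return {'x': int, 'y': MyType, 'return': float}

    foo.__co_annotations__ = foo_annotations_fn   # stored, but not called

    class MyType: ...                             # defined *after* foo

    # Simulate the first read of foo.__annotations__ -- MyType now exists:
    foo.__annotations__ = foo.__co_annotations__()
    foo_y_type = foo.__annotations__['y']         # <class '__main__.MyType'>

Under this PEP the last two steps happen automatically, and only when
``__annotations__`` is first read.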
Motivation
@ -154,13 +188,18 @@ them to their actual Python values. This has several drawbacks:
to CPython was complicated, and this complicated code would
need to be reimplemented independently by every other Python
implementation.
* It requires that all annotations be evaluated at module-level
scope. Annotations under PEP 563 can no longer refer to
* class variables,
* local variables in the current function, or
* local variables in enclosing functions.
* It requires a code change every time existing code uses an
annotation, to handle converting the stringized
annotation back into a useful value.
* ``eval()`` is slow.
* ``eval()`` isn't always available; it's sometimes removed
from Python for space reasons.
* In order to evaluate the annotations stored with a class,
* In order to evaluate the annotations on a class,
it requires obtaining a reference to that class's globals,
which PEP 563 suggests should be done by looking up that class
by name in ``sys.modules``—another surprising requirement for
@ -168,7 +207,7 @@ them to their actual Python values. This has several drawbacks:
* It adds an ongoing maintenance burden to Python implementations.
Every time the language adds a new feature available in expressions,
the implementation's stringizing code must be updated in
tandem to support decompiling it.
tandem in order to support decompiling it.
This PEP also solves the forward reference problem outlined in
PEP 563 while avoiding the problems listed above:
@ -176,32 +215,37 @@ PEP 563 while avoiding the problems listed above:
* Python implementations would generate annotations as code
objects. This is simpler than stringizing, and is something
Python implementations are already quite good at. This means:
- alternate implementations would need to write less code
to implement this feature, and
- the implementation would be simpler overall, which should
reduce its ongoing maintenance cost.
* Existing annotations would not need to be changed to only
use global scope. Actually, annotations would become much
easier to use, as they would now also handle forward
references.
* Code examining annotations at runtime would no longer need
to use ``eval()`` or anything else—it would automatically
get the correct values. This is easier, almost certainly
faster, and removes the dependency on ``eval()``.
see the correct values. This is easier, faster, and
removes the dependency on ``eval()``.
Backwards Compatibility
=======================
PEP 563 changed the semantics of annotations. When its semantics
are active, annotations must assume they will be evaluated in
*module-level* scope. They may no longer refer directly
to local variables or class attributes. This PEP retains that
semantic change, also requiring that annotations be evaluated in
*module-level* scope. Thus, code changed so its annotations are
compatible with PEP 563 should *already* be compatible with this
aspect of this PEP and would not need further change. Modules
still using stock semantics would have to be revised so their
annotations evaluate properly in module-level scope, in the same
way they would have to be to achieve compatibility with PEP 563.
to local variables or class attributes.
This PEP removes that restriction; annotations may refer to globals,
local variables inside functions, local variables defined in enclosing
functions, and class members in the current class. In addition,
annotations may refer to any of these that haven't been defined yet
at the time the annotation is defined, as long as the not-yet-defined
name is created normally (in such a way that it is known to the symbol
table for the relevant block, or is a global or class variable found
using normal name resolution). Thus, this PEP demonstrates *improved*
backwards compatibility over PEP 563.
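For example, under the proposed semantics (a sketch; the class and
names below are invented), a method annotation may refer both to a
class variable defined later in the class body and to the class's own
name, which only exists after the class statement finishes::

    from __future__ import co_annotations

    class Node:
        # Neither "Payload" nor "Node" exists yet when this def executes.
        def link(self, other: Node, data: Payload): ...
        Payload = dict

    # By the time the annotations are examined, both names resolve:
    print(Node.link.__annotations__)
    # expected to print something like:
    # {'other': <class 'Node'>, 'data': <class 'dict'>}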
PEP 563 also requires using ``eval()`` or ``typing.get_type_hints()``
to examine annotations. Code updated to work with PEP 563 that calls
@ -210,19 +254,14 @@ to examine annotations. Code updated to work with PEP 563 that calls
continue to work unchanged, though future use of that function
would become optional in most cases.
Because this PEP makes the same backwards-compatible change
to annotation scoping as PEP 563, this PEP will be initially gated
with a per-module ``from __future__ import co_annotations``
before it eventually becomes the default behavior.
Because this PEP makes semantic changes to how annotations are
evaluated, this PEP will be initially gated with a per-module
``from __future__ import co_annotations`` before it eventually
becomes the default behavior.
Apart from these two changes already discussed:
* the evaluation of values in annotation dicts will be
delayed until the ``__annotations__`` attribute is evaluated, and
* annotations are now evaluated in module-level scope,
this PEP preserves nearly all existing behavior of annotations
dicts. Specifically:
Apart from the delay in evaluating values stored in annotations
dicts, this PEP preserves nearly all existing behavior of
annotations dicts. Specifically:
* Annotations dicts are mutable, and any changes to them are
preserved.
@ -251,7 +290,7 @@ regression test suite. They are:
code simply creates a local ``__annotations__`` dict, then sets
mappings in it as needed. It's also possible for user code
to directly modify this dict, though this doesn't seem like it's
an intentional feature. Although it'd be possible to support
an intentional feature. Although it would be possible to support
this after a fashion when this PEP was active, the semantics
would likely be surprising and wouldn't make anyone happy.
@ -261,10 +300,38 @@ declare that both are at the very least unsupported, and their
use results in undefined behavior. It might be worth making a
small effort to explicitly prohibit them with compile-time checks.
There's one more idiom that's actually somewhat common when
dealing with class annotations, and which will become
more problematic when this PEP is active: code often accesses
class annotations via ``cls.__dict__.get("__annotations__", {})``
In addition, there are a few operators that would no longer be
valid for use in annotations, because their side effects would
affect the *annotation function* instead of the
class/function/module the annotation was nominally defined in:
* ``:=`` (aka the "walrus operator"),
* ``yield`` and ``yield from``, and
* ``await``.
Use of any of these operators in an annotation will result in a
compile-time error.
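For illustration, a parenthesized assignment expression is currently
legal in an annotation under stock semantics--evaluating the annotation
assigns a name in the enclosing scope as a side effect. Under this PEP
that side effect would land in the hidden annotation function instead,
so the compiler rejects it (a sketch; the names are invented)::

    # Legal today under stock semantics: defining resize() binds
    # DEFAULT_WIDTH at module scope as a side effect of evaluating
    # the annotation.
    def resize(w: (DEFAULT_WIDTH := 640)): ...

    print(DEFAULT_WIDTH)   # 640

    # Under "from __future__ import co_annotations", the walrus operator
    # (like yield, yield from, and await) in an annotation would instead
    # be a compile-time error.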
Since delaying the evaluation of annotations until they are
evaluated changes the semantics of the language, it's observable
from within the language. Therefore it's possible to write code
that behaves differently based on whether annotations are
evaluated at binding time or at access time, e.g.
mytype = str
def foo(a:mytype): pass
mytype = int
print(foo.__annotations__['a'])
This will print ``<class 'str'>`` with stock semantics
and ``<class 'int'>`` when this PEP is active. Since
this is poor programming style to begin with, it seems
acceptable that this PEP changes its behavior.
Finally, there's a standard idiom that's actually somewhat common
when accessing class annotations, and which will become more
problematic when this PEP is active: code often accesses class
annotations via ``cls.__dict__.get("__annotations__", {})``
rather than simply ``cls.__annotations__``. It's due to a flaw
in the original design of annotations themselves. This topic
will be examined in a separate discussion; the outcome of
@ -293,8 +360,8 @@ module-level scope:
This led to a short discussion about extending lambda-ized
annotations for methods to be able to refer to class-level
definitions, by maintaining a reference to the class-level scope.
This idea, too, was quickly rejected.
definitions, by maintaining a reference to the class-level
scope. This idea, too, was quickly rejected.
PEP 563 summarizes the above discussion here:
@ -315,13 +382,8 @@ assumption had apparently been abandoned. And it looks like
"implicit lambda expressions" were never reconsidered in this
new light.
PEP 563 semantics have shipped in three major Python releases.
These semantics are now widely used in organizations depending
on static type analysis. Evaluating annotations at module-level
scope is clearly acceptable to all interested parties. Therefore,
delayed evaluation of annotations with code using the same scoping
rules is obviously also completely viable.
In any case, annotations are still able to refer to class-level
definitions under this PEP, rendering the objection moot.
.. _Implementation:
@ -335,7 +397,8 @@ There's a prototype implementation of this PEP, here:
As of this writing, all features described in this PEP are
implemented, and there are some rudimentary tests in the
test suite. There are still some broken tests, and the
repo is many months behind.
``co_annotations`` repo is many months behind the
CPython repo.
from __future__ import co_annotations
@ -356,7 +419,9 @@ implement this PEP is much the same for all three with only minor
variations.
With this PEP, each of these types adds a new attribute,
``__co_annotations__``, with the following semantics:
``__co_annotations__``. ``__co_annotations__`` is a function:
it takes no arguments, and must return either ``None`` or a dict
(or subclass of dict). It adds the following semantics:
* ``__co_annotations__`` is always set, and may contain either
``None`` or a callable.
@ -372,13 +437,13 @@ With this PEP, each of these types adds a new attribute,
Internally, ``__co_annotations__`` is a "data descriptor",
where functions are called whenever user code gets, sets,
or deletes the attribute. In all three cases, the object
has a separate internal place to store the current value
has separate internal storage for the current value
of the ``__co_annotations__`` attribute.
``__annotations__`` is also reimplemented as a data descriptor,
with its own separate internal storage for its internal value.
The code implementing the "get" for ``__annotations__`` works
something like this::
``__annotations__`` is also implemented as a data descriptor, with its own
separate internal storage for its internal value. The code
implementing the "get" for ``__annotations__`` works something
like this::
if (the internal value is set)
return the internal annotations dict
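A rough pure-Python sketch of this descriptor behavior (simplified;
``_annotations`` stands in for the internal storage, and the real
implementation is in C and also handles "set" and "delete")::

    class AnnotationsDescriptor:
        # Simplified stand-in for the "get" logic described above,
        # roughly following the function-object behavior.
        def __get__(self, obj, objtype=None):
            if obj._annotations is not None:     # the internal value is set
                return obj._annotations
            fn = obj.__co_annotations__          # None or a callable
            if fn is None:
                obj._annotations = {}            # functions synthesize an
                                                 # empty dict (see the quirk
                                                 # discussed later); classes
                                                 # and modules differ
            else:
                obj._annotations = fn()          # called at most once
            return obj._annotations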
@ -397,38 +462,38 @@ Unbound code objects
When Python code defines one of these three objects with
annotations, the Python compiler generates a separate code
object which builds and returns the appropriate annotations
dict. The "annotation code object" is then stored *unbound*
as the internal value of ``__co_annotations__``; it is then
bound on demand when the user asks for ``__annotations__``.
dict. Wherever possible, the "annotation code object" is
then stored *unbound* as the internal value of
``__co_annotations__``; it is then bound on demand when
the user asks for ``__annotations__``.
This is an important optimization, for both speed and
memory consumption. Python processes rarely examine
annotations at runtime. Therefore, pre-binding these
code objects to function objects would be a waste of
resources in nearly all cases.
This is a useful optimization for both speed and memory
consumption. Python processes rarely examine annotations
at runtime. Therefore, pre-binding these code objects to
function objects would usually be a waste of resources.
Note that user code isn't permitted to see these unbound code
objects. If the user gets the value of ``__co_annotations__``,
and the internal value of ``__co_annotations__`` is an unbound
code object, it is bound, and the resulting function object is
stored as the new value of ``__co_annotations__``.
When is this optimization not possible?
* When an annotation function contains references to
free variables, in the current function or in an
outer function.
* When an annotation function is defined on a method
(a function defined inside a class) and the annotations
possibly refer directly to class variables.
Note that user code isn't permitted to directly access these
unbound code objects. If the user "gets" the value of
``__co_annotations__``, and the internal value of
``__co_annotations__`` is an unbound code object,
it immediately binds the code object, and the resulting
function object is stored as the new value of
``__co_annotations__`` and returned.
(However, these unbound code objects *are* stored in the
``.pyc`` file. So a determined user could examine them
should that be necessary for some reason.)
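For readers unfamiliar with the distinction, here is what "binding" a
code object to a function object looks like in today's Python (ordinary
Python machinery, not this PEP's C implementation)::

    import types

    def template():
        return {'a': int}

    code = template.__code__                      # an unbound code object
    bound = types.FunctionType(code, globals())   # bind it to a globals dict
    print(bound())                                # {'a': <class 'int'>}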
The annotations function
------------------------
Annotations functions take no arguments and
must return either None or a dict (or subclass of dict).
The bytecode generated for annotations code objects
always uses the ``BUILD_CONST_KEY_MAP`` opcode to build the
dict. Stock and PEP 563 semantics only use this bytecode
for function annotations; for class and module annotations,
they generate a longer and slightly-less-efficient stanza
of bytecode.
Also, when generating the bytecode for an annotations code
object, all ``LOAD_*`` opcodes are forced to be ``LOAD_GLOBAL``.
Function Annotations
@ -436,13 +501,13 @@ Function Annotations
When compiling a function, the CPython bytecode compiler
visits the annotations for the function all in one place,
starting with ``compiler_visit_annotations()``. If there
are any annotations, they create the scope for the annotations
function on demand, and ``compiler_visit_annotations()``
assembles it.
starting with ``compiler_visit_annotations()`` in ``compile.c``.
If there are any annotations, they create the scope for
the annotations function on demand, and
``compiler_visit_annotations()`` assembles it.
The code object is passed in in place of the
annotations dict for the ``MAKE_FUNCTION`` bytecode.
The code object is passed in in place of the annotations dict
for the ``MAKE_FUNCTION`` bytecode instruction.
``MAKE_FUNCTION`` supports a new bit in its oparg
bitfield, ``0x10``, which tells it to expect a
``co_annotations`` code object on the stack.
@ -453,12 +518,11 @@ When binding an unbound annotation code object, a function will
use its own ``__globals__`` as the new function's globals.
One quirk of Python: you can't actually remove the annotations
from a function object.
If you delete the ``__annotations__`` attribute of a function,
then get its ``__annotations__`` member,
from a function object. If you delete the ``__annotations__``
attribute of a function, then get its ``__annotations__`` member,
it will create an empty dict and use that as its
``__annotations__``. Naturally the implementation of this
PEP maintains this quirk.
``__annotations__``. The implementation of this PEP maintains
this quirk for backwards compatibility.
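This quirk is observable in today's CPython::

    def f(x: int): ...

    print(f.__annotations__)    # {'x': <class 'int'>}
    del f.__annotations__
    print(f.__annotations__)    # {} -- a fresh empty dict appears on access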
Class Annotations
@ -510,8 +574,105 @@ The main difference is, a module uses its own dict as the
``__globals__`` when binding the function.
If you delete the ``__annotations__`` attribute of a module,
then get its ``__annotations__`` member,
the module will raise ``AttributeError``.
then get its ``__annotations__`` member, the module will
raise ``AttributeError``.
Annotations With Closures
-------------------------
It's possible to write annotations that refer to
free variables, and even free variables that have yet
to be defined. For example:
from __future__ import co_annotations
def outer():
def middle():
def inner(a:mytype, b:mytype2): pass
mytype = str
return inner
mytype2 = int
return middle()
fn = outer()
print(fn.__annotations__)
At the time ``fn`` is set, ``inner.__co_annotations__()``
hasn't been run. So it has to retain a reference to
the *future* definitions of ``mytype`` and ``mytype2`` if
it is to correctly evaluate its annotations.
If an annotation function refers to a local variable
from the current function scope, or a free variable
from an enclosing function scope--if, in CPython, the
annotation function code object contains one or more
``LOAD_DEREF`` opcodes--then the annotation code object
is bound at definition time with references to these
variables. ``LOAD_DEREF`` instructions require the annotation
function to be bound with special run-time information
(in CPython, a ``freevars`` array). Rather than store
that separately and use that to later lazy-bind the
function object, the current implementation simply
early-binds the function object.
Note that, since the annotation function ``inner.__co_annotations__()``
is defined while parsing ``outer()``, from Python's perspective
the annotation function is a "nested function". So "local
variable inside the 'current' function" and "free variable
from an enclosing function" are, from the perspective of
the annotation function, the same thing.
Annotations That Refer To Class Variables
-----------------------------------------
It's possible to write annotations that refer to
class variables, and even class variables that haven't
yet been defined. For example:
from __future__ import co_annotations
class C:
def method(a:mytype): pass
mytype = str
print(C.method.__annotations__)
Internally, annotation functions are defined as
a new type of "block" in CPython's symbol table
called an ``AnnotationBlock``. An ``AnnotationBlock``
is almost identical to a ``FunctionBlock``. It differs
in that it's permitted to see names from an enclosing
class scope. (Again: annotation functions are functions,
and they're defined *inside* the same scope as
the thing they're being defined on. So in the above
example, the annotation function for ``C.method()``
is defined inside ``C``.)
If it's possible that an annotation function refers
to class variables--if all these conditions are true:
* The annotation function is being defined inside
a class scope.
* The generated code for the annotation function
has at least one ``LOAD_NAME`` instruction.
Then the annotation function is bound at the time
it's set on the class/function, and this binding
includes a reference to the class dict. The class
dict is pushed on the stack, and the ``MAKE_FUNCTION``
bytecode instruction takes a new second bitfield (0x20)
indicating that it should consume that stack argument
and store it as ``__locals__`` on the newly created
function object.
Then, at the time the function is executed, the
``f_locals`` field of the frame object is set to
the function's ``__locals__``, if set. This permits
``LOAD_NAME`` opcodes to work normally, which means
the code generated for annotation functions is nearly
identical to that generated for conventional Python
functions.
Interactive REPL Shell
@ -532,19 +693,23 @@ But it gets complicated quickly, and for a nearly-non-existent
use case.)
Local Annotations Inside Functions
----------------------------------
Annotations On Local Variables Inside Functions
-----------------------------------------------
Python supports syntax for local variable annotations inside
functions. However, these annotations have no runtime effect.
Thus this PEP doesn't need to do anything to support them.
functions. However, these annotations have no runtime
effect--they're discarded at compile-time. Therefore, this
PEP doesn't need to do anything to support them, the same
as stock semantics and PEP 563.
Performance
-----------
Performance with this PEP should be favorable. In general,
resources are only consumed on demand—"you only pay for what you use".
Performance Comparison
----------------------
Performance with this PEP should be favorable, when compared with either
stock behavior or PEP 563. In general, resources are only consumed
on demand—"you only pay for what you use".
There are three scenarios to consider:
@ -561,43 +726,167 @@ generated for it. This requires no runtime processor time and
consumes no memory.
When annotations are defined but not referenced, the runtime cost
of Python with this PEP should be slightly faster than either
original Python semantics or PEP 563 semantics. With those, the
annotations dicts are built but never examined; with this PEP,
the annotations dicts won't even be built. All that happens at
runtime is the loading of a single constant (a simple code
object) which is then set as an attribute on an object. Since
the annotations are never referenced, the code object is never
bound to a function, the code to create the dict is never
executed, and the dict is never constructed.
of Python with this PEP should be roughly equal to or slightly better
than PEP 563 semantics, and slightly better than "stock" Python
semantics. The specifics depend on the object being annotated:
* With stock semantics, the annotations dict is always built, and
set as an attribute of the object being annotated.
* In PEP 563 semantics, for function objects, a single constant
(a tuple) is set as an attribute of the function. For class and
module objects, the annotations dict is always built and set as
an attribute of the class or module.
* With this PEP, a single object is set as an attribute of the
object being annotated. Most often, this object is a constant
(a code object). In cases where the annotation refers to local
variables or class variables, the code object will be bound to
a function object, and the function object is set as the attribute
of the object being annotated.
When annotations are both defined and referenced, code using
this PEP should be much faster than code using PEP 563 semantics,
and roughly the same as original Python semantics. PEP 563
semantics requires invoking ``eval()`` for every value inside
an annotations dict, which is much slower. And, as already
mentioned, this PEP generates more efficient bytecode for class
and module annotations than either stock or PEP 563 semantics.
and equivalent to or slightly improved over original Python
semantics. PEP 563 semantics requires invoking ``eval()`` for
every value inside an annotations dict, which is enormously slow.
And, as already mentioned, this PEP generates measurably more
efficient bytecode for class and module annotations than stock
semantics; for function annotations, this PEP and stock semantics
should be roughly equivalent.
Memory use should also be comparable in all three scenarios across
all three semantic contexts. In the first and third scenarios,
memory usage should be roughly equivalent in all cases.
In the second scenario, when annotations are defined but not
referenced, using this PEP's semantics will mean the
function/class/module will store one unused code object; with
the other two semantics, they'll store one unused dictionary.
function/class/module will store one unused code object (possibly
bound to an unused function object); with the other two semantics,
they'll store one unused dictionary (or constant tuple).
Bytecode Comparison
-------------------
The bytecode generated for annotations functions with
this PEP uses the efficient ``BUILD_CONST_KEY_MAP`` opcode
to build the dict for all annotatable objects:
functions, classes, and modules.
Stock semantics also uses ``BUILD_CONST_KEY_MAP`` bytecode
for function annotations. PEP 563 has an even more efficient
method for building annotations dicts on functions, leveraging
the fact that its annotations dicts only contain strings for
both keys and values. At compile-time it constructs a tuple
containing pairs of keys and values, then
at runtime it converts that tuple into a dict on demand.
This is a faster technique than either stock semantics
or this PEP can employ, because in those two cases
annotations dicts can contain Python values of any type.
Of course, this performance win is negated if the
annotations are examined, due to the overhead of ``eval()``.
For class and module annotations, both stock semantics
and PEP 563 generate a longer and slightly-less-efficient
stanza of bytecode, creating the dict and setting the
annotations individually.
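You can inspect the stock and PEP 563 bytecode yourself in a current
CPython; the exact opcodes vary by version, and neither listing
reflects this PEP's output::

    import dis

    stock = "def f(a: int, b: str) -> float: pass"
    pep563 = "from __future__ import annotations\n" + stock

    print("-- stock semantics --")
    dis.dis(compile(stock, "<stock>", "exec"))
    print("-- PEP 563 semantics --")
    dis.dis(compile(pep563, "<pep563>", "exec"))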
For Future Discussion
=====================
__globals__
-----------
Circular Imports
----------------
There is one unfortunately-common scenario where PEP 563
currently provides a better experience, and it has to do
with large code bases, with circular dependencies and
imports, that examine their annotations at run-time.
PEP 563 permitted defining *and examining* invalid
expressions as annotations. Its implementation requires
annotations to be legal Python expressions, which it then
converts into strings at compile-time. But legal Python
expressions may not be computable at runtime, if for
example the expression references a name that isn't defined.
This is a problem for stringized annotations if they're
evaluated, e.g. with ``typing.get_type_hints()``. But
any stringized annotation may be examined harmlessly at
any time--as long as you don't evaluate it, and only
examine it as a string.
Some large organizations have code bases that unfortunately
have circular dependency problems with their annotations--class
A has methods annotated with class B, but class B has methods
annotated with class A--that can be difficult to resolve.
Since PEP 563 stringizes their annotations, it allows them
to leave these circular dependencies in place, and they can
sidestep the circular import problem by never importing the
module that defines the types used in the annotations. Their
annotations can no longer be evaluated, but this appears not
to be a concern in practice. They can then examine the
stringized form of the annotations at runtime and this seems
to be sufficient for their needs.
This PEP allows for many of the same behaviors.
Annotations must be legal Python expressions, which
are compiled into a function at compile-time.
And if the code never examines an annotation, it won't
have any runtime effect, so here too annotations can
harmlessly refer to undefined names. (It's exactly
like defining a function that refers to undefined
names--then never calling that function. Until you
call the function, nothing bad will happen.)
But examining an annotation when this PEP is active
means evaluating it, which means the names evaluated
in that expression must be defined. An undefined name
will throw a ``NameError`` in an annotation function,
just as it would with a stringized annotation passed
in to ``typing.get_type_hints()``, and just like any
other context in Python where an expression is evaluated.
In discussions we have yet to find a solution to this
problem that makes all the participants in the
conversation happy. There are various avenues to explore
here:
* One workaround is to continue to stringize one's
annotations, either by hand or done automatically
by the Python compiler (as it does today with
``from __future__ import annotations``). This might
mean preserving Python's current stringizing annotations
going forward, although leaving it turned off by default,
only available by explicit request (though likely with
a different mechanism than
``from __future__ import annotations``).
* Another possible workaround involves importing
the circularly-dependent modules separately, then
externally adding ("monkey-patching") their dependencies
to each other after the modules are loaded. As long
as the modules don't examine their annotations until
after they are completely loaded, this should work fine
and be maintainable with a minimum of effort (a rough sketch
of this approach appears after this list).
* A third and more radical approach would be to change the
semantics of annotations so that they don't raise a
``NameError`` when an unknown name is evaluated,
but instead create some sort of proxy "reference" object.
* Of course, even if we do deprecate PEP 563, it will be
several releases before the functionality is removed,
giving us several years in which to research and innovate
new solutions for this problem.
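A rough sketch of the monkey-patching workaround mentioned above
(module and attribute names are invented; the important point is that
neither module imports the other, and no annotations are examined
until after the patching)::

    # glue.py -- hypothetical top-level module
    import a_mod    # annotates with "B" but never imports b_mod
    import b_mod    # annotates with "A" but never imports a_mod

    a_mod.B = b_mod.B    # inject the names the annotations refer to
    b_mod.A = a_mod.A

    # Only now is it safe to evaluate either module's annotations:
    print(a_mod.A.method.__annotations__)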
In any case, the participants of the discussion agree that
this PEP should still move forward, even as this issue remains
currently unresolved [1]_.
.. [1] https://github.com/larryhastings/co_annotations/issues/1
cls.__globals__ and fn.__locals__
---------------------------------
Is it permissible to add the ``__globals__`` reference to class
objects as proposed here? It's not clear why this hasn't already
been done; PEP 563 could have made use of class globals, but instead
makes do with looking up classes inside ``sys.modules``. Yet Python
made do with looking up classes inside ``sys.modules``. Python
seems strangely allergic to adding a ``__globals__`` reference to
class objects.
@ -632,6 +921,11 @@ anyway, and not make it visible to the user (via this new
__globals__ attribute). There's possibly already a good place to
put it anyway--``ht_module``.
Similarly, this PEP adds one new dunder member to functions,
classes, and modules (``__co_annotations__``), and a second new
dunder member to functions (``__locals__``). This might be
considered excessive.
Bikeshedding the name
---------------------
@ -649,12 +943,19 @@ perhaps the name of the attribute and the name of the
Acknowledgements
================
Thanks to Barry Warsaw, Eric V. Smith, and Mark Shannon
for feedback and encouragement. Thanks in particular to
Mark Shannon for two key suggestions—build the entire
annotations dict inside a single code object, and only
bind it to a function on demand—that quickly became
among the best aspects of this proposal.
Thanks to Barry Warsaw, Eric V. Smith, Mark Shannon,
and Guido van Rossum for feedback and encouragement.
Thanks in particular to Mark Shannon for two key
suggestions—build the entire annotations dict inside
a single code object, and only bind it to a function
on demand—that quickly became among the best aspects
of this proposal. Also, thanks in particular to Guido
van Rossum for suggesting that ``__co_annotations__``
functions should duplicate the name visibility rules of
annotations under "stock" semantics--this resulted in
a sizeable improvement to the second draft. Finally,
special thanks to Jelle Zijlstra, who contributed not
just feedback--but code!
Copyright
@ -664,7 +965,6 @@ This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
..
Local Variables:
mode: indented-text