PEP: 646
Title: Variadic Generics
Author: Mark Mendoza <mendoza.mark.a@gmail.com>,
Matthew Rahtz <mrahtz@google.com>,
Pradeep Kumar Srinivasan <gohanpra@gmail.com>,
Vincent Siles <vsiles@fb.com>
Sponsor: Guido van Rossum <guido@python.org>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 16-Sep-2020
Python-Version: 3.10
Post-History: 07-Oct-2020, 23-Dec-2020, 29-Dec-2020
Abstract
========
PEP 484 introduced ``TypeVar``, enabling creation of generics parameterised
with a single type. In this PEP, we introduce ``TypeVarTuple``, enabling parameterisation
with an *arbitrary* number of types - that is, a *variadic* type variable,
enabling *variadic* generics. This enables a wide variety of use cases.
In particular, it allows the type of array-like structures
in numerical computing libraries such as NumPy and TensorFlow to be
parameterised with the array *shape*, enabling static type checkers
to catch shape-related bugs in code that uses these libraries.
Motivation
==========
Variadic generics have long been a requested feature, for a myriad of
use cases [#typing193]_. One particular use case - a use case with potentially
large impact, and the main case this PEP targets - concerns typing in
numerical libraries.
In the context of numerical computation with libraries such as NumPy and TensorFlow,
the *shape* of variables is often just as important as the variable *type*.
For example, consider the following function which converts a batch [#batch]_
of videos to grayscale:
::
def to_gray(videos: Array): ...
From the signature alone, it is not obvious what shape of array [#array]_
we should pass for the ``videos`` argument. Possibilities include, for
example,
batch × time × height × width × channels
and
time × batch × channels × height × width. [#timebatch]_
This is important for three reasons:
* **Documentation**. Without the required shape being clear in the signature,
the user must hunt in the docstring or the code in question to determine
what the input/output shape requirements are.
* **Catching shape bugs before runtime**. Ideally, use of incorrect shapes
should be an error we can catch ahead of time using static analysis.
(This is particularly important for machine learning code, where iteration
times can be slow.)
* **Preventing subtle shape bugs**. In the worst case, use of the wrong shape
will result in the program appearing to run fine, but with a subtle bug
that can take days to track down. (See `this exercise`_ in a popular machine learning
tutorial for a particularly pernicious example.)
Ideally, we should have some way of making shape requirements explicit in
type signatures. Multiple proposals [#numeric-stack]_ [#typing-ideas]_
[#syntax-proposal]_ have suggested the use of the standard generics syntax for
this purpose. We would write:
::
def to_gray(videos: Array[Time, Batch, Height, Width, Channels]): ...
However, note that arrays can be of arbitrary rank - ``Array`` as used above is
generic in an arbitrary number of axes. One way around this would be to use a different
``Array`` class for each rank...
::
Axis1 = TypeVar('Axis1')
Axis2 = TypeVar('Axis2')
class Array1(Generic[Axis1]): ...
class Array2(Generic[Axis1, Axis2]): ...
...but this would be cumbersome, both for users (who would have to sprinkle 1s and 2s
and so on throughout their code) and for the authors of array libraries (who would have to duplicate implementations throughout multiple classes).
Variadic generics are necessary for an ``Array`` that is generic in an arbitrary
number of axes to be cleanly defined as a single class.
Summary Examples
================
Cutting right to the chase, this PEP allows an ``Array`` class that is generic
in its shape (and datatype) to be defined using a newly-introduced
arbitrary-length type variable, ``TypeVarTuple``, as follows:
::

    from typing import TypeVar, TypeVarTuple

    DType = TypeVar('DType')
    Shape = TypeVarTuple('Shape')

    class Array(Generic[DType, *Shape]):

        def __abs__(self) -> Array[DType, *Shape]: ...

        def __add__(self, other: Array[DType, *Shape]) -> Array[DType, *Shape]: ...
Such an ``Array`` can be used to support a number of different kinds of
shape annotations. For example, we can add labels describing the
semantic meaning of each axis:
::
from typing import NewType
Height = NewType('Height', int)
Width = NewType('Width', int)
x: Array[float, Height, Width] = Array()
We could also add annotations describing the actual size of each axis:
::
from typing import Literal as L
x: Array[float, L[480], L[640]] = Array()
For consistency, we use semantic axis annotations as the basis of the examples
in this PEP, but this PEP is agnostic about which of these two (or possibly other)
ways of using ``Array`` is preferable; that decision is left to library authors.
(Note also that for the rest of this PEP, for conciseness of example, we use
a simpler version of ``Array`` which is generic only in the shape - *not* the
data type.)
Specification
=============
In order to support the above use cases, we introduce ``TypeVarTuple``. This serves as a placeholder not for a single type but for an *arbitrary* number of types, behaving like a number of ``TypeVar`` instances packed in a ``Tuple``.
In addition, we introduce a new use for the star operator: to 'unpack'
``TypeVarTuple`` instances, in order to access the type variables
contained in the tuple.
Type Variable Tuples
--------------------
In the same way that a normal type variable is a stand-in for a single type,
a type variable *tuple* is a stand-in for an arbitrary number of types (zero or
more) in a flat ordered list.
Type variable tuples are created with:
::
from typing import TypeVarTuple
Ts = TypeVarTuple('Ts')
Type variable tuples behave like a number of individual type variables packed in a
``Tuple``. To understand this, consider the following example:
::
Shape = TypeVarTuple('Shape')
class Array(Generic[*Shape]): ...
Height = NewType('Height', int)
Width = NewType('Width', int)
x: Array[Height, Width] = Array()
The ``Shape`` type variable tuple here behaves like ``Tuple[T1, T2]``,
where ``T1`` and ``T2`` are type variables. To use these type variables
as type parameters of ``Array``, we must *unpack* the type variable tuple using
the star operator: ``*Shape``. The signature of ``Array`` then behaves
as if we had simply written ``class Array(Generic[T1, T2]): ...``.
In contrast to ``Generic[T1, T2]``, however, ``Generic[*Shape]`` allows
us to parameterise the class with an *arbitrary* number of type parameters.
That is, in addition to being able to define rank-2 arrays such as
``Array[Height, Width]``, we could also define rank-3 arrays, rank-4 arrays,
and so on:
::
Time = NewType('Time', int)
Batch = NewType('Batch', int)
y: Array[Batch, Height, Width] = Array()
z: Array[Time, Batch, Height, Width] = Array()
Type variable tuples can be used anywhere a normal ``TypeVar`` can.
This includes class definitions, as shown above, as well as function
signatures and variable annotations:
::

    class Array(Generic[*Shape]):

        def __init__(self, shape: Tuple[*Shape]):
            self._shape: Tuple[*Shape] = shape

        def get_shape(self) -> Tuple[*Shape]:
            return self._shape

    shape = (Height(480), Width(640))
    x: Array[Height, Width] = Array(shape)
    y = abs(x)  # Inferred type is Array[Height, Width]
    z = x + x   # ... is Array[Height, Width]
Type Variable Tuples Must Always be Unpacked
''''''''''''''''''''''''''''''''''''''''''''
Note that in the previous example, the ``shape`` argument to ``__init__``
was annotated as ``Tuple[*Shape]``. Why is this necessary - if ``Shape``
behaves like ``Tuple[T1, T2, ...]``, couldn't we have annotated the ``shape``
argument as ``Shape`` directly?
This is, in fact, deliberately not possible: type variable tuples must
*always* be used unpacked (that is, prefixed by the star operator). This is
for two reasons:
* To avoid potential confusion about whether to use a type variable tuple
in a packed or unpacked form ("Hmm, should I write '``-> Shape``',
or '``-> Tuple[Shape]``', or '``-> Tuple[*Shape]``'...?")
* To improve readability: the star also functions as an explicit visual
indicator that the type variable tuple is not a normal type variable.
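
For example (a sketch; the function name here is illustrative only):

::

    Shape = TypeVarTuple('Shape')

    # NOT valid: the type variable tuple is used packed
    def get_shape(x: Array[*Shape]) -> Shape: ...

    # Valid: the type variable tuple is used unpacked, inside a Tuple
    def get_shape(x: Array[*Shape]) -> Tuple[*Shape]: ...
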
``Unpack`` for Backwards Compatibility
''''''''''''''''''''''''''''''''''''''
Note that the use of the star operator in this context requires a grammar change,
and is therefore available only in new versions of Python. To enable use of type
variable tuples in older versions of Python, we introduce the ``Unpack`` type
operator that can be used in place of the star operator:
::
# Unpacking using the star operator in new versions of Python
class Array(Generic[*Shape]): ...
# Unpacking using ``Unpack`` in older versions of Python
class Array(Generic[Unpack[Shape]]): ...
Variance, Type Constraints and Type Bounds: Not (Yet) Supported
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
To keep this PEP minimal, ``TypeVarTuple`` does not yet support specification of:
* Variance (e.g. ``TypeVar('T', covariant=True)``)
* Type constraints (``TypeVar('T', int, float)``)
* Type bounds (``TypeVar('T', bound=ParentClass)``)
We leave the decision of how these arguments should behave to a future PEP, when variadic generics have been tested in the field. As of this PEP, type variable tuples are
invariant.
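
That is, only the plain single-argument form of the constructor is available;
the following sketch illustrates what this PEP does and does not allow:

::

    Ts = TypeVarTuple('Ts')                   # Supported
    Ts = TypeVarTuple('Ts', int, float)       # NOT supported: no type constraints
    Ts = TypeVarTuple('Ts', covariant=True)   # NOT supported: no variance
    Ts = TypeVarTuple('Ts', bound=Sequence)   # NOT supported: no type bounds
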
Behaviour when Type Parameters are not Specified
''''''''''''''''''''''''''''''''''''''''''''''''
When a generic class parameterised by a type variable tuple is used without
any type parameters, it behaves as if its type parameters are '``Any, ...``'
(an arbitrary number of ``Any``):
::
def takes_any_array(arr: Array): ...
x: Array[Height, Width]
takes_any_array(x) # Valid
y: Array[Time, Height, Width]
takes_any_array(y) # Also valid
This enables gradual typing: existing functions accepting, for example,
a plain TensorFlow ``Tensor`` will still be valid even if ``Tensor`` is made
generic and calling code passes a ``Tensor[Height, Width]``.
This also works in the opposite direction:
::
def takes_specific_array(arr: Array[Height, Width]): ...
z: Array
takes_specific_array(z)
This way, even if libraries are updated to use types like ``Array[Height, Width]``,
users of those libraries won't be forced to also apply type annotations to
all of their code; users still have a choice about what parts of their code
to type and which parts to not.
Type Variable Tuples Must Have Known Length
'''''''''''''''''''''''''''''''''''''''''''
Type variable tuples may not be bound to a type with unknown length.
That is:
::
def foo(x: Tuple[*Ts]): ...
x: Tuple[float, ...]
foo(x) # NOT valid; Ts would be bound to ``Tuple[float, ...]``
If this is confusing - didn't we say that type variable tuples are a stand-in
for an *arbitrary* number of types? - note the difference between the
length of the type variable tuple *itself*, and the length of the type it is
*bound* to. Type variable tuples themselves can be of arbitrary length -
that is, they can be bound to ``Tuple[int]``, ``Tuple[int, int]``, and
so on - but the types they are bound to must be of known length -
that is, ``Tuple[int, int]``, but not ``Tuple[int, ...]``.
Note that, as a result of this rule, omitting the type parameter list is the
*only* way of instantiating a generic type with an arbitrary number of
type parameters. (We plan to introduce a more deliberate syntax for this
case in a future PEP.) For example, an unparameterised ``Array`` may
*behave* like ``Array[Any, ...]``, but it cannot be instantiated using
``Array[Any, ...]``, because this would bind its type variable tuple to ``Tuple[Any, ...]``:
::
x: Array # Valid
y: Array[int, ...] # Error
z: Array[Any, ...] # Error
Type Variable Tuple Equality
''''''''''''''''''''''''''''
If the same ``TypeVarTuple`` instance is used in multiple places in a signature
or class, a valid type inference might be to bind the ``TypeVarTuple`` to
a ``Tuple`` of a ``Union`` of types:
::
def foo(arg1: Tuple[*Ts], arg2: Tuple[*Ts]): ...
a = (0,)
b = ('0',)
foo(a, b) # Can Ts be bound to Tuple[int | str]?
We do *not* allow this; type unions may *not* appear within the ``Tuple``.
If a type variable tuple appears in multiple places in a signature,
the types must match exactly (the list of type parameters must be the same
length, and the type parameters themselves must be identical):
::

    def pointwise_multiply(
        x: Array[*Shape],
        y: Array[*Shape]
    ) -> Array[*Shape]: ...

    x: Array[Height]
    y: Array[Width]
    z: Array[Height, Width]

    pointwise_multiply(x, x)  # Valid
    pointwise_multiply(x, y)  # Error
    pointwise_multiply(x, z)  # Error
Multiple Type Variable Tuples: Not Allowed
''''''''''''''''''''''''''''''''''''''''''
As of this PEP, only a single type variable tuple may appear in a type parameter list:
::
class Array(Generic[*Ts1, *Ts2]): ... # Error
Type Concatenation
------------------
Type variable tuples don't have to be alone; normal types can be
prefixed and/or suffixed:
::

    Shape = TypeVarTuple('Shape')
    Batch = NewType('Batch', int)
    Channels = NewType('Channels', int)

    def add_batch_axis(x: Array[*Shape]) -> Array[Batch, *Shape]: ...
    def del_batch_axis(x: Array[Batch, *Shape]) -> Array[*Shape]: ...
    def add_batch_channels(
        x: Array[*Shape]
    ) -> Array[Batch, *Shape, Channels]: ...

    a: Array[Height, Width]
    b = add_batch_axis(a)      # Inferred type is Array[Batch, Height, Width]
    c = del_batch_axis(b)      # Array[Height, Width]
    d = add_batch_channels(a)  # Array[Batch, Height, Width, Channels]
Normal ``TypeVar`` instances can also be prefixed and/or suffixed:
::

    T = TypeVar('T')
    Ts = TypeVarTuple('Ts')

    def prefix_tuple(
        x: T,
        y: Tuple[*Ts]
    ) -> Tuple[T, *Ts]: ...

    z = prefix_tuple(x=0, y=(True, 'a'))
    # Inferred type of z is Tuple[int, bool, str]
``*args`` as a Type Variable Tuple
----------------------------------
PEP 484 states that when a type annotation is provided for ``*args``, every argument
must be of the type annotated. That is, if we specify ``*args`` to be type ``int``,
then *all* arguments must be of type ``int``. This limits our ability to specify
the type signatures of functions that take heterogeneous argument types.
If ``*args`` is annotated as a type variable tuple, however, the types of the
individual arguments become the types in the type variable tuple:
::
Ts = TypeVarTuple('Ts')
def args_to_tuple(*args: *Ts) -> Tuple[*Ts]: ...
args_to_tuple(1, 'a') # Inferred type is Tuple[int, str]
If no arguments are passed, the type variable tuple behaves like an
empty tuple, ``Tuple[()]``.
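
For example:

::

    args_to_tuple()  # Inferred type is Tuple[()]
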
Note that, in keeping with the rule that type variable tuples must always
be used unpacked, annotating ``*args`` as being a plain type variable tuple
instance is *not* allowed:
::
def foo(*args: Ts): ... # NOT valid
``*args`` is the only case where an argument can be annotated as ``*Ts`` directly;
other arguments should use ``*Ts`` to parameterise something else, e.g. ``Tuple[*Ts]``.
If ``*args`` itself is annotated as ``Tuple[*Ts]``, the old behaviour still applies:
all arguments must be a ``Tuple`` parameterised with the same types.
::
def foo(*args: Tuple[*Ts]): ...
foo((0,), (1,)) # Valid
foo((0,), (1, 2)) # Error
foo((0,), ('1',)) # Error
Following `Type Variable Tuples Must Have Known Length`_, note
that the following should *not* type-check as valid (even though it is, of
course, valid at runtime):
::

    def foo(*args: *Ts): ...

    def bar(x: Tuple[int, ...]):
        foo(*x)  # NOT valid
Finally, note that a type variable tuple may *not* be used as the type of
``**kwargs``. (We do not yet know of a use case for this feature, so we prefer
to leave the ground fresh for a potential future PEP.)
::
# NOT valid
def foo(**kwargs: *Ts): ...
Type Variable Tuples with ``Callable``
--------------------------------------
Type variable tuples can also be used in the arguments section of a
``Callable``:
::

    class Process:
        def __init__(
            self,
            target: Callable[[*Ts], Any],
            args: Tuple[*Ts]
        ): ...

    def func(arg1: int, arg2: str): ...

    Process(target=func, args=(0, 'foo'))  # Valid
    Process(target=func, args=('foo', 0))  # Error
Other types and normal type variables can also be prefixed/suffixed
to the type variable tuple:
::
T = TypeVar('T')
def foo(f: Callable[[int, *Ts, T], Tuple[T, *Ts]]): ...
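
For instance, given the signature above, a call might be inferred as follows
(the ``callback`` function here is purely illustrative):

::

    def callback(x: int, y: str, z: bool) -> Tuple[bool, str]: ...

    foo(callback)  # Valid: Ts is bound to Tuple[str], and T to bool
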
Aliases
-------
Generic aliases can be created using a type variable tuple in
a similar way to regular type variables:
::
IntTuple = Tuple[int, *Ts]
NamedArray = Tuple[str, Array[*Ts]]
IntTuple[float, bool] # Equivalent to Tuple[int, float, bool]
NamedArray[Height] # Equivalent to Tuple[str, Array[Height]]
As this example shows, all type parameters passed to the alias are
bound to the type variable tuple.
Importantly for our original ``Array`` example (see `Summary Examples`_), this
allows us to define convenience aliases for arrays of a fixed shape
or datatype:
::

    Shape = TypeVarTuple('Shape')
    DType = TypeVar('DType')

    class Array(Generic[DType, *Shape]): ...

    # E.g. Float32Array[Height, Width, Channels]
    Float32Array = Array[np.float32, *Shape]

    # E.g. Array1D[np.uint8]
    Array1D = Array[DType, Any]
If an explicitly empty type parameter list is given, the type variable
tuple in the alias is set empty:
::
IntTuple[()] # Equivalent to Tuple[int]
NamedArray[()] # Equivalent to Tuple[str, Array[()]]
If the type parameter list is omitted entirely, the alias is
compatible with arbitrary type parameters:
::

    def takes_float_array_of_any_shape(x: Float32Array): ...

    x: Float32Array[Height, Width] = Array()
    takes_float_array_of_any_shape(x)  # Valid

    def takes_float_array_with_specific_shape(
        y: Float32Array[Height, Width]
    ): ...

    y: Float32Array = Array()
    takes_float_array_with_specific_shape(y)  # Valid
Normal ``TypeVar`` instances can also be used in such aliases:
::
T = TypeVar('T')
Foo = Tuple[T, *Ts]
# T bound to str, Ts to Tuple[int]
Foo[str, int]
# T bound to float, Ts to Tuple[()]
Foo[float]
# T bound to Any, Ts to an arbitrary number of Any
Foo
Overloads for Accessing Individual Types
----------------------------------------
For situations where we require access to each individual type in the type variable tuple,
overloads can be used with individual ``TypeVar`` instances in place of the type variable tuple:
::

    Shape = TypeVarTuple('Shape')
    Axis1 = TypeVar('Axis1')
    Axis2 = TypeVar('Axis2')
    Axis3 = TypeVar('Axis3')

    class Array(Generic[*Shape]):

        @overload
        def transpose(
            self: Array[Axis1, Axis2]
        ) -> Array[Axis2, Axis1]: ...

        @overload
        def transpose(
            self: Array[Axis1, Axis2, Axis3]
        ) -> Array[Axis3, Axis2, Axis1]: ...
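
With these overloads, the result of transposing rank-2 and rank-3 arrays is
inferred as expected:

::

    x: Array[Height, Width]
    x.transpose()  # Inferred type is Array[Width, Height]

    y: Array[Time, Height, Width]
    y.transpose()  # Inferred type is Array[Width, Height, Time]
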
(For array shape operations in particular, having to specify
overloads for each possible rank is, of course, a rather cumbersome
solution. However, it's the best we can do without additional type
manipulation mechanisms. We plan to introduce these in a future PEP.)
Rationale and Rejected Ideas
============================
Shape Arithmetic
----------------
Considering the use case of array shapes in particular, note that as of
this PEP, it is not yet possible to describe arithmetic transformations
of array dimensions - for example,
``def repeat_each_element(x: Array[N]) -> Array[2*N]``. We consider
this out-of-scope for the current PEP, but plan to propose additional
mechanisms that *will* enable this in a future PEP.
Supporting Variadicity Through Aliases
--------------------------------------
As noted in the introduction, it *is* possible to avoid variadic generics
by simply defining aliases for each possible number of type parameters:
::
class Array1(Generic[Axis1]): ...
class Array2(Generic[Axis1, Axis2]): ...
However, this seems somewhat clumsy - it requires users to unnecessarily
pepper their code with 1s, 2s, and so on for each rank necessary.
Construction of ``TypeVarTuple``
--------------------------------
``TypeVarTuple`` began as ``ListVariadic``, based on its naming in
an early implementation in Pyre.
We then changed this to ``TypeVar(list=True)``, on the basis that a)
it better emphasises the similarity to ``TypeVar``, and b) the meaning
of 'list' is more easily understood than the jargon of 'variadic'.
Once we'd decided that a variadic type variable should behave like a ``Tuple``,
we also considered ``TypeVar(bound=Tuple)``, which is similarly intuitive
and accomplishes most of what we wanted without requiring any new arguments to
``TypeVar``. However, we realised this may constrain us in the future, if
for example we want type bounds or variance to function slightly differently
for variadic type variables than what the semantics of ``TypeVar`` might
otherwise imply. Also, we may later wish to support arguments that should not be supported by regular type variables (such as ``arbitrary_len`` [#arbitrary_len]_).
We therefore settled on ``TypeVarTuple``.
Behaviour when Type Parameters are not Specified
------------------------------------------------
In order to support gradual typing, this PEP states that *both*
of the following examples should type-check correctly:
::
def takes_any_array(x: Array): ...
x: Array[Height, Width]
takes_any_array(x)
def takes_specific_array(y: Array[Height, Width]): ...
y: Array
takes_specific_array(y)
Note that this is in contrast to the behaviour of the only currently-existing
variadic type in Python, ``Tuple``:
::
def takes_any_tuple(x: Tuple): ...
x: Tuple[int, str]
takes_any_tuple(x) # Valid
def takes_specific_tuple(y: Tuple[int, str]): ...
y: Tuple
takes_specific_tuple(y) # Error
The rules for ``Tuple`` were deliberately chosen such that the latter case
is an error: it was thought to be more likely that the programmer has made a
mistake than that the function expects a specific kind of ``Tuple`` but the
specific kind of ``Tuple`` passed is unknown to the type checker. Additionally,
``Tuple`` is something of a special case, in that it is used to represent
immutable sequences. That is, if an object's type is inferred to be an
unparameterised ``Tuple``, it is not necessarily because of incomplete typing.
In contrast, if an object's type is inferred to be an unparameterised ``Array``,
it is much more likely that the user has simply not yet fully annotated their
code, or that the signature of a shape-manipulating library function cannot yet
be expressed using the typing system and therefore returning a plain ``Array``
is the only option. We rarely deal with arrays of truly arbitrary shape;
in certain cases, *some* parts of the shape will be arbitrary - for example,
when dealing with sequences, the first two parts of the shape are often
'batch' and 'time' - but we plan to support these cases explicitly in a
future PEP with a syntax such as ``Array[Batch, Time, ...]``.
We therefore made the decision to have variadic generics *other* than
``Tuple`` behave differently, in order to give the user more flexibility
in how much of their code they wish to annotate, and to enable compatibility
between old unannotated code and new versions of libraries which do use
these type annotations.
Alternatives
============
It should be noted that the approach outlined in this PEP to solve the
issue of shape checking in numerical libraries is *not* the only approach
possible. Examples of lighter-weight alternatives based on *runtime* checking include
ShapeGuard [#shapeguard]_, tsanley [#tsanley]_, and PyContracts [#pycontracts]_.
While these existing approaches improve significantly on the default
situation of shape checking only being possible through lengthy and verbose
assert statements, none of them enable *static* analysis of shape correctness.
As mentioned in `Motivation`_, this is particularly desirable for
machine learning applications where, due to library and infrastructure complexity,
even relatively simple programs must suffer long startup times; iterating
by running the program until it crashes, as is necessary with these
existing runtime-based approaches, can be a tedious and frustrating
experience.
Our hope with this PEP is to begin to codify generic type annotations as
an official, language-supported way of dealing with shape correctness.
With something of a standard in place, in the long run, this will
hopefully enable a thriving ecosystem of tools for analysing and verifying
shape properties of numerical computing programs.
Backwards Compatibility
=======================
In order to use the star operator for unpacking of ``TypeVarTuple`` instances,
we would need to make two grammar changes:
1. Star expressions must be made valid in at least index operations.
For example, ``Tuple[*Ts]`` and ``Tuple[T1, *Ts, T2]`` would both
be valid. (This PEP does not allow multiple unpacked ``TypeVarTuple``
instances to appear in a single parameter list, so ``Tuple[*Ts1, *Ts2]``
would be a runtime error. Also note that star expressions would *not*
be valid in slice expressions - e.g. ``Tuple[*Ts:*Ts]`` is
nonsensical and should remain invalid.)
2. We would need to make '``*args: *Ts``' valid in function definitions.
In both cases, at runtime the star operator would call ``Ts.__iter__()``.
This would, in turn, return an instance of a helper class, e.g.
``UnpackedTypeVarTuple``, whose ``repr`` would be ``*Ts``.
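
A minimal sketch of how this could look at runtime follows; the names and
structure here are purely illustrative and not part of this PEP's specification:

::

    class UnpackedTypeVarTuple:
        def __init__(self, name):
            self._name = name
        def __repr__(self):
            return '*' + self._name

    class TypeVarTuple:
        def __init__(self, name):
            self.name = name
        def __iter__(self):
            # Star-unpacking (e.g. in Tuple[*Ts]) calls __iter__; yielding a
            # single helper object splices an object whose repr is '*Ts'
            # into the parameter list.
            yield UnpackedTypeVarTuple(self.name)
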
If these grammar changes are considered too burdensome, we could instead
simply use ``Unpack`` - though in that case, it would be worth first
considering whether a better alternative spelling exists.
The ``Unpack`` version of the PEP should be back-portable to previous
versions of Python.
Gradual typing is enabled by the fact that unparameterised variadic classes
are compatible with an arbitrary number of type parameters. This means
that if existing classes are made generic, a) all existing (unparameterised)
uses of the class will still work, and b) parameterised and unparameterised
versions of the class can be used together (relevant if, for example, library
code is updated to use parameters while user code is not, or vice-versa).
Reference Implementation
========================
Two reference implementations of type-checking functionality exist:
one in Pyre, as of v0.9.0, and one in Pyright, as of v1.1.108.
A preliminary implementation of the ``Unpack`` version of the PEP in CPython
is available in `cpython/23527`_. A preliminary version of the version
using the star operator, based on an early implementation of PEP 637,
is also available at `mrahtz/cpython/pep637+646`_.
Appendix A: Shape Typing Use Cases
==================================
To give this PEP additional context for those particularly interested in the
array typing use case, in this appendix we expand on the different ways
this PEP can be used for specifying shape-based subtypes.
Use Case 1: Specifying Shape Values
-----------------------------------
The simplest way to parameterise array types is using ``Literal``
type parameters - e.g. ``Array[Literal[64], Literal[64]]``.
We can attach names to each parameter using normal type variables:
::

    K = TypeVar('K')
    N = TypeVar('N')

    def matrix_vector_multiply(x: Array[K, N], y: Array[N]) -> Array[K]: ...

    a: Array[Literal[64], Literal[32]]
    b: Array[Literal[32]]

    matrix_vector_multiply(a, b)
    # Result is Array[Literal[64]]
Note that such names have a purely local scope. That is, the name
``K`` is bound to ``Literal[64]`` only within ``matrix_vector_multiply``. To put it another
way, there's no relationship between the value of ``K`` in different
signatures. This is important: it would be inconvenient if every axis named ``K``
were constrained to have the same value throughout the entire program.
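
For example, continuing the example above, a second call is free to bind
``K`` and ``N`` to entirely different sizes:

::

    c: Array[Literal[128], Literal[16]]
    d: Array[Literal[16]]

    matrix_vector_multiply(c, d)
    # Result is Array[Literal[128]]; this binding of K is unrelated to the
    # binding of K in the previous call
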
The disadvantage of this approach is that we have no ability to enforce shape semantics across
different calls. For example, we can't address the problem mentioned in `Motivation`_: if
one function returns an array with leading dimensions 'Time × Batch', and another function
takes the same array assuming leading dimensions 'Batch × Time', we have no way of detecting this.
The main advantage is that in some cases, axis sizes really are what we care about. This is true
not only for simple linear algebra operations such as the matrix manipulations above, but also for more
complicated transformations such as convolutional layers in neural networks, where it would be of
great utility to the programmer to be able to inspect the array size after each layer using
static analysis. To aid this, in the future we would like to explore possibilities for additional
type operators that enable arithmetic on array shapes - for example:
::
def repeat_each_element(x: Array[N]) -> Array[Mul[2, N]]: ...
Such arithmetic type operators would only make sense if names such as ``N`` refer to axis size.
Use Case 2: Specifying Shape Semantics
--------------------------------------
A second approach (the one that most of the examples in this PEP are based around)
is to forgo annotation with actual axis size, and instead annotate axis *type*.
This would enable us to solve the problem of enforcing shape properties across calls.
For example:
::
# lib.py
class Batch: pass
class Time: pass
def make_array() -> Array[Batch, Time]: ...
# user.py
from lib import Batch, Time
# `Batch` and `Time` have the same identity as in `lib`,
# so must take array as produced by `lib.make_array`
def use_array(x: Array[Batch, Time]): ...
Note that in this case, names are *global* (to the extent that we use the
same ``Batch`` type in different places). However, because names refer only
to axis *types*, this doesn't constrain the *value* of certain axes to be
the same throughout (that is, this doesn't constrain all axes named ``Height``
to have a value of, say, 480 throughout).
The argument *for* this approach is that in many cases, axis *type* is the more
important thing to verify; we care more about which axis is which than what the
specific size of each axis is.
It also does not preclude cases where we wish to describe shape transformations
without knowing the type ahead of time. For example, we can still write:
::

    K = TypeVar('K')
    N = TypeVar('N')

    def matrix_vector_multiply(x: Array[K, N], y: Array[N]) -> Array[K]: ...

We can then use this with:

::

    class Batch: pass
    class Values: pass

    batch_of_values: Array[Batch, Values]
    value_weights: Array[Values]

    matrix_vector_multiply(batch_of_values, value_weights)
    # Result is Array[Batch]
The disadvantages are the inverse of the advantages from use case 1.
In particular, this approach does not lend itself well to arithmetic
on axis types: ``Mul[2, Batch]`` would be as meaningless as ``2 * int``.
Discussion
----------
Note that use cases 1 and 2 are mutually exclusive in user code. Users
can verify size or semantic type but not both.
As of this PEP, we are agnostic about which approach will provide most benefit.
Since the features introduced in this PEP are compatible with both approaches, however,
we leave the door open.
Why Not Both?
-------------
Consider the following 'normal' code:
::
def f(x: int): ...
Note that we have symbols for both the value of the thing (``x``) and the type of
the thing (``int``). Why can't we do the same with axes? For example, with an imaginary
syntax, we could write:
::

    def f(array: Array[TimeValue: TimeType]): ...

This would allow us to access the axis size (say, 32) through the symbol ``TimeValue``
*and* the type through the symbol ``TimeType``.
This might even be possible using existing syntax, through a second level of parameterisation:

::

    def f(array: Array[TimeValue[TimeType]]): ...

However, we leave exploration of this approach to the future.
Appendix B: Shaped Types vs Named Axes
======================================
An issue related to those addressed by this PEP concerns
axis *selection*. For example, if we have an image stored in an array of shape 64×64×3,
we might wish to convert to black-and-white by computing the mean over the third
axis, ``mean(image, axis=2)``. Unfortunately, the simple typo ``axis=1`` is
difficult to spot and will produce a result that means something completely different
(all while likely allowing the program to keep on running, resulting in a bug
that is serious but silent).
In response, some libraries have implemented so-called 'named tensors' (in this context,
'tensor' is synonymous with 'array'), in which axes are selected not by index but by
label - e.g. ``mean(image, axis='channels')``.
A question we are often asked about this PEP is: why not just use named tensors?
The answer is that we consider the named tensors approach insufficient, for two main reasons:
* **Static checking** of shape correctness is not possible. As mentioned in `Motivation`_,
this is a highly desirable feature in machine learning code where iteration times
are slow by default.
* **Interface documentation** is still not possible with this approach. If a function should
*only* be willing to take array arguments that have image-like shapes, this cannot be stipulated
with named tensors.
Additionally, there's the issue of **poor uptake**. At the time of writing, named tensors
have only been implemented in a small number of numerical computing libraries. Possible explanations for this
include difficulty of implementation (the whole API must be modified to allow selection by axis name
instead of index), and lack of usefulness due to the fact that axis ordering conventions are often
strong enough that axis names provide little benefit (e.g. when working with images, 3D tensors are
basically *always* height × width × channels). However, ultimately we are still uncertain
why this is the case.
Can the named tensors approach be combined with the approach we advocate for in
this PEP? We're not sure. One area of overlap is that in some contexts, we could do, say:
::

    Image: Array[Height, Width, Channels]
    im: Image
    mean(im, axis=Image.axes.index(Channels))
Ideally, we might write something like ``im: Array[Height=64, Width=64, Channels=3]`` -
but this won't be possible in the short term, due to the rejection of PEP 637.
In any case, our attitude towards this is mostly "Wait and see what happens before
taking any further steps".
Footnotes
==========
.. [#batch] 'Batch' is machine learning parlance for 'a number of'.
.. [#array] We use the term 'array' to refer to a matrix with an arbitrary
number of dimensions. In NumPy, the corresponding class is the ``ndarray``;
in TensorFlow, the ``Tensor``; and so on.
.. [#timebatch] If the shape begins with 'batch × time', then
``videos_batch[0][1]`` would select the second frame of the first video. If the
shape begins with 'time × batch', then ``videos_batch[1][0]`` would select the
same frame.
Acknowledgements
================
Thank you to **Alfonso Castaño**, **Antoine Pitrou**, **Bas v.B.**, **David Foster**, **Dimitris Vardoulakis**, **Eric Traut**, **Guido van Rossum**, **Jia Chen**,
**Lucio Fernandez-Arjona**, **Nikita Sobolev**, **Peilonrayz**, **Rebecca Chen**,
**Sergei Lebedev**, and **Vladimir Mikulik** for helpful feedback and suggestions on
drafts of this PEP.
Thank you especially to **Lucio** for suggesting the star syntax (which has made multiple aspects of this proposal much more concise and intuitive), and to **Stephan Hoyer** for his kind `endorsement`_ of the PEP on the python-dev mailing list.
Resources
=========
Discussions on variadic generics in Python started in 2016 with Issue 193
on the python/typing GitHub repository [#typing193]_.
Inspired by this discussion, **Ivan Levkivskyi** made a concrete proposal
at PyCon 2019, summarised in notes on 'Type system improvements' [#type-improvements]_
and 'Static typing of Python numeric stack' [#numeric-stack]_.
Expanding on these ideas, **Mark Mendoza** and **Vincent Siles** gave a presentation on
'Variadic Type Variables for Decorators and Tensors' [#variadic-type-variables]_ at the 2019 Python
Typing Summit.
References
==========
.. [#typing193] Python typing issue #193:
https://github.com/python/typing/issues/193
.. [#type-improvements] Ivan Levkivskyi, 'Type system improvements', PyCon 2019:
https://paper.dropbox.com/doc/Type-system-improvements-HHOkniMG9WcCgS0LzXZAe
.. [#numeric-stack] Ivan Levkivskyi, 'Static typing of Python numeric stack', PyCon 2019:
https://paper.dropbox.com/doc/Static-typing-of-Python-numeric-stack-summary-6ZQzTkgN6e0oXko8fEWwN
.. [#typing-ideas] Stephan Hoyer, 'Ideas for array shape typing in Python':
https://docs.google.com/document/d/1vpMse4c6DrWH5rq2tQSx3qwP_m_0lyn-Ij4WHqQqRHY/edit
.. [#variadic-type-variables] Mark Mendoza, 'Variadic Type Variables for Decorators and Tensors', Python Typing Summit 2019:
https://github.com/facebook/pyre-check/blob/ae85c0c6e99e3bbfc92ec55104bfdc5b9b3097b2/docs/Variadic_Type_Variables_for_Decorators_and_Tensors.pdf
.. [#syntax-proposal] Matthew Rahtz et al., 'Shape annotation syntax proposal':
https://docs.google.com/document/d/1But-hjet8-djv519HEKvBN6Ik2lW3yu0ojZo6pG9osY/edit
.. [#arbitrary_len] Discussion on Python typing-sig mailing list:
https://mail.python.org/archives/list/typing-sig@python.org/thread/SQVTQYWIOI4TIO7NNBTFFWFMSMS2TA4J/
.. [#tsanley] tsanley: https://github.com/ofnote/tsanley
.. [#pycontracts] PyContracts: https://github.com/AndreaCensi/contracts
.. [#shapeguard] ShapeGuard: https://github.com/Qwlouse/shapeguard
.. _cpython/23527: https://github.com/python/cpython/pull/24527
.. _mrahtz/cpython/pep637+646: https://github.com/mrahtz/cpython/tree/pep637%2B646
.. _this exercise: https://spinningup.openai.com/en/latest/spinningup/exercise2_2_soln.html
.. _endorsement: https://mail.python.org/archives/list/python-dev@python.org/message/UDM7Y6HLHQBKXQEBIBD5ZLB5XNPDZDXV/
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End: