Merge branch 'master' of github.com:python/peps

This commit is contained in:
Antoine Pitrou 2017-09-08 23:56:35 +02:00
commit 38177aa3c7
4 changed files with 519 additions and 63 deletions


@@ -1,5 +1,5 @@
PEP: 549
Title: Instance Descriptors
Version: $Revision$
Last-Modified: $Date$
Author: larry@hastings.org (Larry Hastings)
@@ -33,24 +33,37 @@ for now. If in the future you do need to run code, you
can change it to a "property", and happily the API doesn't
change.
But consider this second bit of best-practice Python API design:
if you're writing a singleton, don't write a class, just build
your code directly into a module. Don't make your users
instantiate a singleton class, don't make your users have to
dereference through a singleton object stored in a module,
just have module-level functions and module-level data.
Unfortunately these two best practices are in opposition.
The problem is that properties aren't supported on modules.
Modules are instances of a single generic ``module`` type,
and it's not feasible to modify or subclass this type to add
a property to one's module. This means that programmers
facing this API design decision, where the data-like member
is a singleton stored in a module, must preemptively add
ugly "getters" and "setters" for the data.
Adding support for module properties in pure Python has recently
become *possible*;
as of Python 3.5, Python permits assigning to the ``__class__``
attribute of module objects, specifically for this purpose. Here's
an example of using this functionality to add a property to a module::
    import sys, types

    class _MyModuleType(types.ModuleType):
        @property
        def prop(self):
            ...

    sys.modules[__name__].__class__ = _MyModuleType
This works, and is supported behavior, but it's clumsy and obscure.
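For illustration, the same ``__class__`` swap can be exercised on a synthetic module object without touching a real module file (a minimal sketch; the module name ``example_mod`` and the ``answer`` property are invented for the demo):

```python
import types

class _ModuleWithProp(types.ModuleType):
    # The property lives on the module's *type*, so ordinary
    # descriptor lookup fires when the attribute is read.
    @property
    def answer(self):
        return 42

# Stand-in module; real code would instead assign to
# sys.modules[__name__].__class__ from inside the module itself.
mod = types.ModuleType("example_mod")
mod.__class__ = _ModuleWithProp

print(mod.answer)  # 42
```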
This PEP proposes a per-type opt-in extension to the descriptor
protocol specifically designed to enable properties in modules.
@@ -84,8 +97,10 @@ Our implementation faces two challenges:
Both challenges can be solved with the same approach: we define a new
"fast subclass" flag that means "This object is a descriptor, and it
should be honored directly when this object is looked up as an
attribute of an instance". So far this flag is only set on two
types: ``property`` and ``collections.abc.InstanceDescriptor``.
The latter is an abstract base class, whose only purpose is
to allow user classes to inherit this "fast subclass" flag.
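To see why a new flag is needed at all, note that under today's rules a descriptor stored on an *instance* is simply returned, never invoked (a sketch of the status quo, not of the proposed mechanism):

```python
class Plain:
    pass

obj = Plain()
obj.prop = property(lambda self: 42)

# Instance lookup finds the property object itself; the descriptor
# protocol only fires for attributes found on the type.
print(isinstance(obj.prop, property))  # True
```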
Prototype
=========
@@ -99,6 +114,8 @@ Acknowledgements
Armin Rigo essentially proposed this mechanism when presented
with the idea of "module properties", and educated the author
both on the complexities of the problem and the proper solution.
Nathaniel J. Smith pointed out the 3.5 extension about assigning
to ``__class__`` on module objects, and provided the example.
References
==========


@@ -61,19 +61,26 @@ address here that can make pycs non-deterministic.)
Specification
=============
The pyc header currently consists of 3 32-bit words. We will expand it to 4. The
first word will continue to be the magic number, versioning the bytecode and pyc
format. The second word, conceptually the new word, will be a bit field. The
interpretation of the rest of the header and invalidation behavior of the pyc
depends on the contents of the bit field.
If the bit field is 0, the pyc is a traditional timestamp-based pyc. I.e., the
third and fourth words will be the timestamp and file size respectively, and
invalidation will be done by comparing the metadata of the source file with that
in the header.
If the lowest bit of the bit field is set, the pyc is a hash-based pyc. We call
the second lowest bit the ``check_source`` flag. Following the bit field is a
64-bit hash of the source file. The hash will be a SipHash_, with a hardcoded
key, of the contents of the source file. Another fast hash like MD5 or BLAKE2_ would
also work. We choose SipHash because Python already has a builtin implementation
of it from :pep:`456`, although an interface that allows picking the SipHash key
must be exposed to Python. Security of the hash is not a concern, though we pass
over red-flag hashes like MD5 to ease auditing of Python in controlled
environments.
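To make the layout concrete, here is a sketch of a reader for the proposed 4-word header (the helper names and the placeholder magic number are invented; the flag positions follow the text above):

```python
import struct

MAGIC = b'\x42\x0d\x0d\x0a'  # placeholder, not a real version magic

def make_hash_pyc_header(source_hash, check_source=False):
    # word 1: magic; word 2: bit field; words 3-4: 64-bit source hash
    flags = 0b01 | (0b10 if check_source else 0)
    return MAGIC + struct.pack('<I', flags) + struct.pack('<Q', source_hash)

def parse_header(data):
    flags, = struct.unpack('<I', data[4:8])
    if flags & 0b01:  # lowest bit set: hash-based pyc
        source_hash, = struct.unpack('<Q', data[8:16])
        return {'hash_based': True,
                'check_source': bool(flags & 0b10),
                'source_hash': source_hash}
    # bit field 0: timestamp-based; words 3-4 are mtime and size
    mtime, size = struct.unpack('<II', data[8:16])
    return {'hash_based': False, 'mtime': mtime, 'size': size}

hdr = parse_header(make_hash_pyc_header(0xDEADBEEF, check_source=True))
print(hdr['check_source'])  # True
```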
When Python encounters a hash-based pyc, its behavior depends on the setting of
the ``check_source`` flag. If the ``check_source`` flag is set, Python will


@@ -14,8 +14,9 @@ Abstract
This proposal introduces the stdlib ``interpreters`` module. It exposes
the basic functionality of subinterpreters that already exists in the
C-API. Each subinterpreter runs with its own state (see
``Interpreter Isolation`` below). The module will be "provisional", as
described by PEP 411.
Rationale
@@ -38,6 +39,60 @@ new area for Python so there is relative uncertainty about the best
tools to provide as companions to subinterpreters. Thus we minimize
the functionality we add in the proposal as much as possible.
Concerns
--------
* "subinterpreters are not worth the trouble"
Some have argued that subinterpreters do not add sufficient benefit
to justify making them an official part of Python. Adding features
to the language (or stdlib) has a cost in increasing the size of
the language. So it must pay for itself. In this case, subinterpreters
provide a novel concurrency model focused on isolated threads of
execution. Furthermore, they present an opportunity for changes in
CPython that will allow simultaneous use of multiple CPU cores (currently
prevented by the GIL).
Alternatives to subinterpreters include threading, async, and
multiprocessing. Threading is limited by the GIL and async isn't
the right solution for every problem (nor for every person).
Multiprocessing is likewise valuable in some but not all situations.
Direct IPC (rather than via the multiprocessing module) provides
similar benefits but with the same caveat.
Notably, subinterpreters are not intended as a replacement for any of
the above. Certainly they overlap in some areas, but the benefits of
subinterpreters include isolation and (potentially) performance. In
particular, subinterpreters provide a direct route to an alternate
concurrency model (e.g. CSP) which has found success elsewhere and
will appeal to some Python users. That is the core value that the
``interpreters`` module will provide.
* stdlib support for subinterpreters adds extra burden
on C extension authors
In the ``Interpreter Isolation`` section below we identify ways in
which isolation in CPython's subinterpreters is incomplete. Most
notable is extension modules that use C globals to store internal
state. PEP 3121 and PEP 489 provide a solution for most of the
problem, but one still remains. [petr-c-ext]_ Until that is resolved,
C extension authors will face extra difficulty in supporting
subinterpreters.
Consequently, projects that publish extension modules may face an
increased maintenance burden as their users start using subinterpreters,
where their modules may break. This situation is limited to modules
that use C globals (or use libraries that use C globals) to store
internal state.
Ultimately this comes down to a question of how often it will be a
problem in practice: how many projects would be affected, how often
their users will be affected, what the additional maintenance burden
will be for projects, and what the overall benefit of subinterpreters
is to offset those costs. The position of this PEP is that the actual
extra maintenance burden will be small and well below the threshold at
which subinterpreters are worth it.
Proposal
========
@@ -67,26 +122,127 @@ The module provides the following functions:
interpreter will be created in the current thread and will remain
idle until something is run in it.
The module also provides the following classes:
``Interpreter(id)``::

    id:

        The interpreter's ID (read-only).

    is_running():

        Return whether or not the interpreter is currently
        executing code.

    destroy():

        Finalize and destroy the interpreter. If called on a running
        interpreter, RuntimeError will be raised.

    run(code):

        Run the provided Python code in the interpreter, in the
        current OS thread. If the interpreter is already running
        then raise RuntimeError.

        Supported code: source text.

    get_fifo(name):

        Return the FIFO object with the given name that is associated
        with this interpreter. If no such FIFO exists then raise
        KeyError. The FIFO will be either a "FIFOReader" or a
        "FIFOWriter", depending on how "add_fifo()" was called.

    list_fifos():

        Return a list of all fifos associated with the interpreter.

    add_fifo(name=None, *, recv=True):

        Create a new FIFO associated with this interpreter and return
        the opposite end of the FIFO. For example, if "recv" is True
        then a "FIFOReader" is associated with this interpreter and a
        "FIFOWriter" is returned. The returned FIFO is also associated
        with the interpreter in which "add_fifo()" was called.

        The FIFO's name is set to the provided value. If no name is
        provided then a dynamically generated one is used. If a FIFO
        with the given name is already associated with this interpreter
        or with the one in which "add_fifo()" was called then raise
        KeyError.

    remove_fifo(name):

        Drop the association between the named FIFO and this
        interpreter. If the named FIFO is not found then raise
        KeyError.
``FIFOReader(name)``::

    The receiving end of a FIFO. An interpreter may use this to receive
    objects from another interpreter. At first only bytes and None will
    be supported.

    name:

        The FIFO's name.

    __next__():

        Return the next bytes object from the pipe. If none have been
        pushed on then block.

    pop(*, block=True):

        Return the next bytes object from the pipe. If none have been
        pushed on and "block" is True (the default) then block.
        Otherwise return None.
``FIFOWriter(name)``::

    The sending end of a FIFO. An interpreter may use this to send
    objects to another interpreter. At first only bytes and None will
    be supported.

    name:

        The FIFO's name.

    push(object, *, block=True):

        Add the object to the FIFO. If "block" is true then block
        until the object is popped off. If the FIFO does not support
        the object's type then TypeError is raised.
About FIFOs
-----------
Subinterpreters are inherently isolated (with caveats explained below),
in contrast to threads. This enables a different concurrency model than
currently exists in Python. CSP (Communicating Sequential Processes),
upon which Go's concurrency is based, is one example of this model.
A key component of this approach to concurrency is message passing. So
providing a message/object passing mechanism alongside ``Interpreter``
is a fundamental requirement. This proposal includes a basic mechanism
upon which more complex machinery may be built. That basic mechanism
draws inspiration from pipes, queues, and CSP's channels.
The key challenge here is that sharing objects between interpreters
faces complexity due in part to CPython's current memory model.
Furthermore, in this class of concurrency, the ideal is that objects
only exist in one interpreter at a time. However, this is not practical
for Python so we initially constrain supported objects to ``bytes`` and
``None``. There are a number of strategies we may pursue in the future
to expand supported objects and object sharing strategies.
Note that the complexity of object sharing increases as subinterpreters
become more isolated, e.g. after GIL removal. So the mechanism for
message passing needs to be carefully considered. Keeping the API
minimal and initially restricting the supported types helps us avoid
further exposing any underlying complexity to Python users.
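Since the ``interpreters`` module does not exist yet, the message-passing shape can only be approximated; the following analogue uses threads and ``queue.Queue`` purely to illustrate the FIFO pattern (it demonstrates none of the isolation the PEP actually provides):

```python
import queue
import threading

# Two one-directional queues play the role of a FIFO pair between
# "interpreters" (really just threads). Only bytes travel through,
# mirroring the initially supported types.
to_worker = queue.Queue()
from_worker = queue.Queue()

def worker():
    msg = to_worker.get()          # like FIFOReader.pop(block=True)
    from_worker.put(msg.upper())   # like FIFOWriter.push(...)

t = threading.Thread(target=worker)
t.start()
to_worker.put(b"ping")
t.join()
reply = from_worker.get()
print(reply)  # b'PING'
```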
Deferred Functionality
@@ -97,28 +253,6 @@ functionality has been left out for future consideration. Note that
this is not a judgement against any of said capability, but rather a
deferment. That said, each is arguably valid.
Interpreter.call()
------------------
@@ -133,10 +267,48 @@ interpreters via queues. The minimal solution (running a source string)
is sufficient for us to get the feature out where it can be explored.
Interpreter Isolation
=====================
CPython's interpreters are intended to be strictly isolated from each
other. Each interpreter has its own copy of all modules, classes,
functions, and variables. The same applies to state in C, including in
extension modules. The CPython C-API docs explain more. [c-api]_
However, there are ways in which interpreters share some state. First
of all, some process-global state remains shared, like file descriptors.
There are no plans to change this.
Second, some isolation is faulty due to bugs or implementations that did
not take subinterpreters into account. This includes things like
at-exit handlers and extension modules that rely on C globals. In these
cases bugs should be opened (some are already).
Finally, some potential isolation is missing due to the current design
of CPython. This includes the GIL and memory management. Improvements
are currently going on to address gaps in this area.
Provisional Status
==================
The new ``interpreters`` module will be added with "provisional" status
(see PEP 411). This allows Python users to experiment with the feature
and provide feedback while still allowing us to adjust to that feedback.
The module will be provisional in Python 3.7 and we will make a decision
before the 3.8 release whether to keep it provisional, graduate it, or
remove it.
References
==========
.. [c-api]
https://docs.python.org/3/c-api/init.html#bugs-and-caveats
.. [petr-c-ext]
https://mail.python.org/pipermail/import-sig/2016-June/001062.html
https://mail.python.org/pipermail/python-ideas/2016-April/039748.html
Copyright

pep-0558.rst Normal file

@@ -0,0 +1,260 @@
PEP: 558
Title: Defined semantics for locals()
Author: Nick Coghlan <ncoghlan@gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 2017-09-08
Python-Version: 3.7
Post-History: 2017-09-08
Abstract
========
The semantics of the ``locals()`` builtin have historically been underspecified
and hence implementation dependent.
This PEP proposes formally standardising on the behaviour of the CPython 3.6
reference implementation for most execution scopes, with some adjustments to the
behaviour at function scopes to make it more predictable and independent of the
presence or absence of tracing functions.
Rationale
=========
While the precise semantics of the ``locals()`` builtin are nominally undefined,
in practice, many Python programs depend on it behaving exactly as it behaves in
CPython (at least when no tracing functions are installed).
Other implementations such as PyPy are currently replicating that behaviour,
up to and including replication of local variable mutation bugs that
can arise when a trace hook is installed [1]_.
Proposal
========
The expected semantics of the ``locals()`` builtin change based on the current
execution scope. For this purpose, the defined scopes of execution are:
* module scope: top-level module code, as well as any other code executed using
``exec()`` or ``eval()`` with a single namespace
* class scope: code in the body of a ``class`` statement, as well as any other
code executed using ``exec()`` or ``eval()`` with separate local and global
namespaces
* function scope: code in the body of a ``def`` or ``async def`` statement
Module scope
------------
At module scope, as well as when using ``exec()`` or ``eval()`` with a
single namespace, ``locals()`` must return the same object as ``globals()``,
which must be the actual execution namespace (available as
``inspect.currentframe().f_locals`` in implementations that provide access
to frame objects).
Variable assignments during subsequent code execution in the same scope must
dynamically change the contents of the returned mapping, and changes to the
returned mapping must change the values bound to local variable names in the
execution environment.
This part of the proposal does not require any changes to the reference
implementation - it is standardisation of the current behaviour.
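This behaviour is easy to check with ``exec()`` and a single namespace (a quick demonstration of the current CPython behaviour the proposal standardises):

```python
ns = {}
exec("""
x = 1
snapshot = locals()   # at "module scope", locals() is globals()
x = 2                 # later assignments are visible through it
""", ns)

print(ns["snapshot"] is ns)   # True: the actual execution namespace
print(ns["snapshot"]["x"])    # 2
```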
Class scope
-----------
At class scope, as well as when using ``exec()`` or ``eval()`` with separate
global and local namespaces, ``locals()`` must return the specified local
namespace (which may be supplied by the metaclass ``__prepare__`` method
in the case of classes). As for module scope, this must be a direct reference
to the actual execution namespace (available as
``inspect.currentframe().f_locals`` in implementations that provide access
to frame objects).
Variable assignments during subsequent code execution in the same scope must
change the contents of the returned mapping, and changes to the returned mapping
must change the values bound to local variable names in the
execution environment.
For classes, this mapping will *not* be used as the actual class namespace
underlying the defined class (the class creation process will copy the contents
to a fresh dictionary that is only accessible by going through the class
machinery).
This part of the proposal does not require any changes to the reference
implementation - it is standardisation of the current behaviour.
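Both halves of this behaviour can be demonstrated directly (current CPython behaviour, which this part of the proposal standardises):

```python
captured = {}

class C:
    x = 1
    captured['ns'] = locals()   # the live class-body namespace
    x = 2

# The mapping tracked the later assignment in the class body:
print(captured['ns']['x'])   # 2

# But the class machinery copied it; mutating the captured mapping
# does not reach into the created class:
captured['ns']['x'] = 99
print(C.x)                   # 2
```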
Function scope
--------------
At function scope, interpreter implementations are granted significant freedom
to optimise local variable access, and hence are NOT required to permit
arbitrary modification of local and nonlocal variable bindings through the
mapping returned from ``locals()``.
Instead, ``locals()`` is expected to return a mutable *snapshot* of the
function's local variables and any referenced nonlocal cells with the following
semantics:
* each call to ``locals()`` returns the *same* mapping
* each call to ``locals()`` updates the mapping with the current
state of the local variables and any referenced nonlocal cells
* changes to the returned mapping are *never* written back to the
local variable bindings or the nonlocal cell references
* in implementations that provide access to frame objects, the return value
from ``locals()`` is *not* a direct reference to
``inspect.currentframe().f_locals``
For interpreters that provide access to frame objects, the ``frame.f_locals``
attribute at function scope is expected to be a write-through proxy that
immediately updates the local variables and referenced nonlocal cell bindings.
Additional entries may also be added to ``frame.f_locals`` and will be
accessible through both ``frame.f_locals`` and ``locals()`` from inside the
frame, but will not be accessible by name from within the function (as any
names which don't appear as local or nonlocal variables at compile time will
only be looked up in the module globals and process builtins, not in the
function locals).
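The snapshot semantics above match what CPython already does at function scope today, which can be verified directly:

```python
def demo():
    x = 1
    snap = locals()
    snap['x'] = 99        # mutate the snapshot...
    assert x == 1         # ...the real local is untouched

    x = 2
    snap2 = locals()      # same mapping, refreshed in place
    assert snap2 is snap
    assert snap['x'] == 2
    return True

print(demo())  # True
```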
Open Questions
==============
How much compatibility is enough compatibility?
-----------------------------------------------
As discussed below, the proposed design aims to keep almost all current code
working, *except* code that relies on being able to mutate function local
variable bindings and nonlocal cell references via the ``locals()`` builtin
when a trace hook is installed.
This is considered reasonable: if a trace hook is installed, that indicates
the use of an interpreter implementation that provides access to frame objects,
and hence ``frame.f_locals`` can be used as a more portable and future-proof
alternative.
If some other existing behaviours are deemed optional (e.g. allowing
``locals()`` to return a fresh object each time), then that may allow for
some simplification of the update implementation.
Design Discussion
=================
Making ``locals()`` return a shared snapshot at function scope
--------------------------------------------------------------
The ``locals()`` builtin is a required part of the language, and in the
reference implementation it has historically returned a mutable mapping with
the following characteristics:
* each call to ``locals()`` returns the *same* mapping
* each call to ``locals()`` updates the mapping with the current
state of the local variables and any referenced nonlocal cells
* changes to the returned mapping *usually* aren't written back to the
local variable bindings or the nonlocal cell references, but write backs
can be triggered by doing one of the following:
* installing a Python level trace hook (write backs then happen whenever
the trace hook is called)
* running a function level wildcard import (requires bytecode injection in Py3)
* running an ``exec`` statement in the function's scope (Py2 only, since
``exec`` became an ordinary builtin in Python 3)
The current proposal aims to retain the first two properties (to maintain
backwards compatibility with as much code as possible) while still
eliminating the ability to dynamically alter local and nonlocal variable
bindings through the mapping returned by ``locals()``.
Making ``frame.f_locals`` a write-through proxy at function scope
-----------------------------------------------------------------
While frame objects and related APIs are an explicitly optional feature of
Python implementations, there are nevertheless a lot of debuggers and other
introspection tools that expect them to behave in certain ways, including the
ability to update the bindings of local variables and nonlocal cell references,
as well as being able to store custom keys in the locals namespace for
arbitrary frames and retrieve those values later.
CPython currently supports this by copying the local variable bindings and
nonlocal cell references to ``frame.f_locals`` before calling a trace function,
and then copying them back after the function returns.
Unfortunately, as documented in [1]_, this approach leads to intrinsic race
conditions when a trace function writes to a closure variable via
``frame.f_locals``.
Switching to immediate updates of the frame state via ``frame.f_locals`` means
that the behaviour of trace functions should be more predictable, even in the
presence of multi-threaded access.
Historical semantics at function scope
--------------------------------------
The current semantics of mutating ``locals()`` and ``frame.f_locals`` in CPython
are rather quirky due to historical implementation details:
* actual execution uses the fast locals array for local variable bindings and
cell references for nonlocal variables
* there's a ``PyFrame_FastToLocals`` operation that populates the frame's
``f_locals`` attribute based on the current state of the fast locals array
and any referenced cells. This exists for three reasons:
* allowing trace functions to read the state of local variables
* allowing traceback processors to read the state of local variables
* allowing ``locals()`` to read the state of local variables
* a direct reference to ``frame.f_locals`` is returned from ``locals()``, so if
you hand out multiple concurrent references, then all those references will be
to the exact same dictionary
* the two common calls to the reverse operation, ``PyFrame_LocalsToFast``, were
removed in the migration to Python 3: ``exec`` is no longer a statement (and
hence can no longer affect function local namespaces), and the compiler now
disallows the use of ``from module import *`` operations at function scope
* however, two obscure calling paths remain: ``PyFrame_LocalsToFast`` is called
as part of returning from a trace function (which allows debuggers to make
changes to the local variable state), and you can also still inject the
``IMPORT_STAR`` opcode when creating a function directly from a code object
rather than via the compiler
This proposal deliberately *doesn't* formalise these semantics as is, since they
only make sense in terms of the historical evolution of the language and the
reference implementation, rather than being deliberately designed.
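One of the removed write-back paths is easy to observe in Python 3: ``exec()`` in a function body updates only the locals snapshot, never the fast-locals array:

```python
def demo():
    x = 1
    exec("x = 2")   # writes into the locals snapshot only
    return x        # the fast local was never updated

print(demo())  # 1
```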
Implementation
==============
The reference implementation update is TBD - when available, it will be linked
from [2]_.
References
==========
.. [1] Broken local variable assignment given threads + trace hook + closure
(https://bugs.python.org/issue30744)
.. [2] Clarify the required behaviour of ``locals()``
(https://bugs.python.org/issue17960)
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End: