PEP: 554
Title: Multiple Interpreters in the Stdlib
Author: Eric Snow <ericsnowcurrently@gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 2017-09-05
Python-Version: 3.7
Post-History:


Abstract
========

CPython has supported subinterpreters, with increasing levels of
support, since version 1.5. The feature has been available via the
C-API. [c-api]_ Subinterpreters operate in
`relative isolation from one another <Interpreter Isolation_>`_, which
provides the basis for an
`alternative concurrency model <Concurrency_>`_.

This proposal introduces the stdlib ``interpreters`` module. The module
will be `provisional <Provisional Status_>`_. It exposes the basic
functionality of subinterpreters already provided by the C-API.


Proposal
========

The ``interpreters`` module will be added to the stdlib. It will
provide a high-level interface to subinterpreters and wrap the low-level
``_interpreters`` module. The proposed API is inspired by the
``threading`` module. See the `Examples`_ section for concrete usage
and use cases.

API for interpreters
--------------------

The module provides the following functions:

``list_all()``::

   Return a list of all existing interpreters.

``get_current()``::

   Return the currently running interpreter.

``create()``::

   Initialize a new Python interpreter and return it. The
   interpreter will be created in the current thread and will remain
   idle until something is run in it. The interpreter may be used
   in any thread and will run in whichever thread calls
   ``interp.run()``.

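As an illustration only, here is a minimal sketch of how these
functions fit together, assuming the proposed module is importable as
``interpreters``::

   # A sketch of basic discovery and creation (hypothetical usage of
   # the proposed API, not additional specification).
   import interpreters

   main = interpreters.get_current()    # the interpreter this code runs in
   interp = interpreters.create()       # a new, idle interpreter

   # list_all() includes both the main interpreter and the new one.
   ids = {i.id for i in interpreters.list_all()}
   assert main.id in ids and interp.id in ids

   interp.destroy()                     # finalize it once no longer needed
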
The module also provides the following class:

``Interpreter(id)``::

   id:

      The interpreter's ID (read-only).

   is_running():

      Return whether or not the interpreter is currently executing code.
      Calling this on the current interpreter will always return True.

   destroy():

      Finalize and destroy the interpreter.

      This may not be called on an already running interpreter. Doing
      so results in a RuntimeError.

   run(source_str, /, **shared):

      Run the provided Python source code in the interpreter. Any
      keyword arguments are added to the interpreter's execution
      namespace. If any of the values are not supported for sharing
      between interpreters then RuntimeError is raised. Currently
      only channels (see "create_channel()" below) are supported.

      This may not be called on an already running interpreter. Doing
      so results in a RuntimeError.

      A "run()" call is quite similar to any other function call. Once
      it completes, the code that called "run()" continues executing
      (in the original interpreter). Likewise, if there is any uncaught
      exception, it propagates into the code where "run()" was called.

      The big difference is that "run()" executes the code in an
      entirely different interpreter, with entirely separate state.
      The state of the current interpreter in the current OS thread
      is swapped out with the state of the target interpreter (the one
      that will execute the code). When the target finishes executing,
      the original interpreter gets swapped back in and its execution
      resumes.

      So calling "run()" will effectively cause the current Python
      thread to pause. Sometimes you won't want that pause, in which
      case you should make the "run()" call in another thread. To do
      so, add a function that calls "run()" and then run that function
      in a normal "threading.Thread".

      Note that the interpreter's state is never reset, neither before
      "run()" executes the code nor after. Thus the interpreter
      state is preserved between calls to "run()". This includes
      "sys.modules", the "builtins" module, and the internal state
      of C extension modules.

      Also note that "run()" executes in the namespace of the "__main__"
      module, just like scripts, the REPL, "-m", and "-c". Just as
      the interpreter's state is never reset, the "__main__" module
      is never reset. You can imagine concatenating the code from each
      "run()" call into one long script. This is the same as how the
      REPL operates.

      Supported code: source text.

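To make the "one long script" analogy concrete, here is a small sketch
(hypothetical usage of the API described above) of how state
accumulates in the interpreter's ``__main__`` namespace across
``run()`` calls::

   # Each run() call sees the names left behind by earlier calls.
   import interpreters

   interp = interpreters.create()
   interp.run('answer = 42')            # binds "answer" in __main__
   interp.run('print(answer)')          # prints 42; the binding persisted
   interp.run('import json')            # "json" stays in sys.modules
   interp.run('print(json.dumps({"answer": answer}))')
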
API for sharing data
--------------------

The mechanism for passing objects between interpreters is through
channels. A channel is a simplex FIFO similar to a pipe. The main
difference is that channels can be associated with zero or more
interpreters on either end. Unlike queues, which are also many-to-many,
channels have no buffer.

``create_channel()``::

   Create a new channel and return (recv, send), the RecvChannel and
   SendChannel corresponding to the ends of the channel. The channel
   is not closed and destroyed (i.e. garbage-collected) until the number
   of associated interpreters drops to 0.

   An interpreter gets associated with a channel by calling its "send()"
   or "recv()" method. That association gets dropped by calling
   "close()" on the channel.

   Both ends of the channel are supported "shared" objects (i.e. they
   may be safely shared by different interpreters). Thus they may be
   passed as keyword arguments to "Interpreter.run()".

``list_all_channels()``::

   Return a list of all open (RecvChannel, SendChannel) pairs.

``RecvChannel(id)``::

   The receiving end of a channel. An interpreter may use this to
   receive objects from another interpreter. At first only bytes will
   be supported.

   id:

      The channel's unique ID.

   interpreters:

      The list of associated interpreters (those that have called
      the "recv()" method).

   __next__():

      Return the next object from the channel. If none have been sent
      then wait until the next send.

   recv():

      Return the next object from the channel. If none have been sent
      then wait until the next send. If the channel has been closed
      then EOFError is raised.

   recv_nowait(default=None):

      Return the next object from the channel. If none have been sent
      then return the default. If the channel has been closed
      then EOFError is raised.

   close():

      No longer associate the current interpreter with the channel (on
      the receiving end). This is a noop if the interpreter isn't
      already associated. Once an interpreter is no longer associated
      with the channel, subsequent (or current) send() and recv() calls
      from that interpreter will raise EOFError.

      Once the number of associated interpreters on both ends drops to 0,
      the channel is actually marked as closed. The Python runtime
      will garbage collect all closed channels. Note that "close()" is
      automatically called when the channel is no longer used in the
      current interpreter.

      This operation is idempotent. Return True if the current
      interpreter was still associated with the receiving end of the
      channel and False otherwise.


``SendChannel(id)``::

   The sending end of a channel. An interpreter may use this to send
   objects to another interpreter. At first only bytes will be
   supported.

   id:

      The channel's unique ID.

   interpreters:

      The list of associated interpreters (those that have called
      the "send()" method).

   send(obj):

      Send the object to the receiving end of the channel. Wait until
      the object is received. If the channel does not support the
      object then TypeError is raised. Currently only bytes are
      supported. If the channel has been closed then EOFError is
      raised.

   send_nowait(obj):

      Send the object to the receiving end of the channel. If the
      object is received then return True. Otherwise return False.
      If the channel does not support the object then TypeError is
      raised. If the channel has been closed then EOFError is raised.

   close():

      No longer associate the current interpreter with the channel (on
      the sending end). This is a noop if the interpreter isn't already
      associated. Once an interpreter is no longer associated with the
      channel, subsequent (or current) send() and recv() calls from that
      interpreter will raise EOFError.

      Once the number of associated interpreters on both ends drops to 0,
      the channel is actually marked as closed. The Python runtime
      will garbage collect all closed channels. Note that "close()" is
      automatically called when the channel is no longer used in the
      current interpreter.

      This operation is idempotent. Return True if the current
      interpreter was still associated with the sending end of the
      channel and False otherwise.

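As a small illustration of the non-blocking and close semantics
described above (hypothetical usage of the proposed API, not
additional specification)::

   r, s = interpreters.create_channel()

   # Nothing has been sent yet, so recv_nowait() returns the default
   # rather than blocking the way recv() would.
   print(r.recv_nowait(default=None))   # None
   print(r.recv_nowait(default=b''))    # b''

   # close() drops the current interpreter's association (if any) and
   # is idempotent.
   r.close()
   print(r.close())                     # False: already dropped
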
Examples
========

Run isolated code
-----------------

::

   interp = interpreters.create()
   print('before')
   interp.run('print("during")')
   print('after')

Run in a thread
---------------

::

   interp = interpreters.create()
   def run():
       interp.run('print("during")')
   t = threading.Thread(target=run)
   print('before')
   t.start()
   print('after')

Pre-populate an interpreter
---------------------------

::

   interp = interpreters.create()
   interp.run("""if True:
       import some_lib
       import an_expensive_module
       some_lib.set_up()
       """)
   wait_for_request()
   interp.run("""if True:
       some_lib.handle_request()
       """)

Handling an exception
---------------------

::

   interp = interpreters.create()
   try:
       interp.run("""if True:
           raise KeyError
           """)
   except KeyError:
       print("got the error from the subinterpreter")

Synchronize using a channel
---------------------------

::

   interp = interpreters.create()
   r, s = interpreters.create_channel()
   def run():
       interp.run("""if True:
           reader.recv()
           print("during")
           reader.close()
           """,
           reader=r)
   t = threading.Thread(target=run)
   print('before')
   t.start()
   print('after')
   s.send(b'')
   s.close()

Sharing a file descriptor
-------------------------

::

   interp = interpreters.create()
   r1, s1 = interpreters.create_channel()
   r2, s2 = interpreters.create_channel()
   def run():
       interp.run("""if True:
           import os
           fd = int.from_bytes(
                   reader.recv(), 'big')
           for line in os.fdopen(fd):
               print(line)
           writer.send(b'')
           """,
           reader=r1, writer=s2)
   t = threading.Thread(target=run)
   t.start()
   with open('spamspamspam') as infile:
       fd = infile.fileno().to_bytes(1, 'big')
       s1.send(fd)
       r2.recv()

Passing objects via pickle
--------------------------

::

   interp = interpreters.create()
   r, s = interpreters.create_channel()
   interp.run("""if True:
       import pickle
       """,
       reader=r)
   def run():
       interp.run("""if True:
           data = reader.recv()
           while data:
               obj = pickle.loads(data)
               do_something(obj)
               data = reader.recv()
           reader.close()
           """,
           reader=r)
   t = threading.Thread(target=run)
   t.start()
   for obj in input:
       data = pickle.dumps(obj)
       s.send(data)
   s.send(b'')

Rationale
=========

Running code in multiple interpreters provides a useful level of
isolation within the same process. This can be leveraged in a number
of ways. Furthermore, subinterpreters provide a well-defined framework
in which such isolation may be extended.

CPython has supported subinterpreters, with increasing levels of
support, since version 1.5. While the feature has the potential
to be a powerful tool, subinterpreters have suffered from neglect
because they are not available directly from Python. Exposing the
existing functionality in the stdlib will help reverse the situation.

This proposal is focused on enabling the fundamental capability of
multiple isolated interpreters in the same Python process. This is a
new area for Python so there is relative uncertainty about the best
tools to provide as companions to subinterpreters. Thus we minimize
the functionality we add in the proposal as much as possible.

Concerns
--------

* "subinterpreters are not worth the trouble"

  Some have argued that subinterpreters do not add sufficient benefit
  to justify making them an official part of Python. Adding features
  to the language (or stdlib) has a cost in increasing the size of
  the language. So it must pay for itself. In this case, subinterpreters
  provide a novel concurrency model focused on isolated threads of
  execution. Furthermore, they present an opportunity for changes in
  CPython that will allow simultaneous use of multiple CPU cores
  (currently prevented by the GIL).

  Alternatives to subinterpreters include threading, async, and
  multiprocessing. Threading is limited by the GIL and async isn't
  the right solution for every problem (nor for every person).
  Multiprocessing is likewise valuable in some but not all situations.
  Direct IPC (rather than via the multiprocessing module) provides
  similar benefits but with the same caveat.

  Notably, subinterpreters are not intended as a replacement for any of
  the above. Certainly they overlap in some areas, but the benefits of
  subinterpreters include isolation and (potentially) performance. In
  particular, subinterpreters provide a direct route to an alternate
  concurrency model (e.g. CSP) which has found success elsewhere and
  will appeal to some Python users. That is the core value that the
  ``interpreters`` module will provide.

* "stdlib support for subinterpreters adds extra burden
  on C extension authors"

  In the `Interpreter Isolation`_ section below we identify ways in
  which isolation in CPython's subinterpreters is incomplete. Most
  notable is extension modules that use C globals to store internal
  state. PEP 3121 and PEP 489 provide a solution for most of the
  problem, but one still remains. [petr-c-ext]_ Until that is resolved,
  C extension authors will face extra difficulty supporting
  subinterpreters.

  Consequently, projects that publish extension modules may face an
  increased maintenance burden as their users start using subinterpreters,
  where their modules may break. This situation is limited to modules
  that use C globals (or use libraries that use C globals) to store
  internal state.

  Ultimately this comes down to a question of how often it will be a
  problem in practice: how many projects would be affected, how often
  their users will be affected, what the additional maintenance burden
  will be for projects, and what the overall benefit of subinterpreters
  is to offset those costs. The position of this PEP is that the actual
  extra maintenance burden will be small and well below the threshold at
  which subinterpreters are worth it.

About Subinterpreters
=====================

Shared data
-----------

Subinterpreters are inherently isolated (with caveats explained below),
in contrast to threads. This enables `a different concurrency model
<Concurrency_>`_ than is currently readily available in Python.
`Communicating Sequential Processes`_ (CSP) is the prime example.

A key component of this approach to concurrency is message passing. So
providing a message/object passing mechanism alongside ``Interpreter``
is a fundamental requirement. This proposal includes a basic mechanism
upon which more complex machinery may be built. That basic mechanism
draws inspiration from pipes, queues, and CSP's channels. [fifo]_

The key challenge here is that sharing objects between interpreters
faces complexity due in part to CPython's current memory model.
Furthermore, in this class of concurrency, the ideal is that objects
only exist in one interpreter at a time. However, this is not practical
for Python so we initially constrain supported objects to ``bytes``.
There are a number of strategies we may pursue in the future to expand
supported objects and object sharing strategies.

Note that the complexity of object sharing increases as subinterpreters
become more isolated, e.g. after GIL removal. So the mechanism for
message passing needs to be carefully considered. Keeping the API
minimal and initially restricting the supported types helps us avoid
further exposing any underlying complexity to Python users.

To make this work, the mutable shared state will be managed by the
Python runtime, not by any of the interpreters. Initially we will
support only one type of object for shared state: the channels provided
by ``create_channel()``. Channels, in turn, will carefully manage
passing objects between interpreters.

Interpreter Isolation
---------------------

CPython's interpreters are intended to be strictly isolated from each
other. Each interpreter has its own copy of all modules, classes,
functions, and variables. The same applies to state in C, including in
extension modules. The CPython C-API docs explain more. [caveats]_

However, there are ways in which interpreters share some state. First
of all, some process-global state remains shared:

* file descriptors
* builtin types (e.g. dict, bytes)
* singletons (e.g. None)
* underlying static module data (e.g. functions) for
  builtin/extension/frozen modules

There are no plans to change this.

Second, some isolation is faulty due to bugs or implementations that did
not take subinterpreters into account. This includes things like
extension modules that rely on C globals. [cryptography]_ In these
cases bugs should be opened (some are already):

* readline module hook functions (http://bugs.python.org/issue4202)
* memory leaks on re-init (http://bugs.python.org/issue21387)

Finally, some potential isolation is missing due to the current design
of CPython. Improvements are currently going on to address gaps in this
area:

* interpreters share the GIL
* interpreters share memory management (e.g. allocators, gc)
* GC is not run per-interpreter [global-gc]_
* at-exit handlers are not run per-interpreter [global-atexit]_
* extensions using the ``PyGILState_*`` API are incompatible [gilstate]_

Concurrency
-----------

Concurrency is a challenging area of software development. Decades of
research and practice have led to a wide variety of concurrency models,
each with different goals. Most center on correctness and usability.

One class of concurrency models focuses on isolated threads of
execution that interoperate through some message passing scheme. A
notable example is `Communicating Sequential Processes`_ (CSP), upon
which Go's concurrency is based. The isolation inherent to
subinterpreters makes them well-suited to this approach.

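As a rough illustration of that style, the following sketch wires the
main interpreter to one worker subinterpreter with a pair of channels.
It is hypothetical usage of the proposed API only; the "work" done by
the worker is a placeholder::

   import threading
   import interpreters

   interp = interpreters.create()
   tasks_r, tasks_s = interpreters.create_channel()      # main -> worker
   results_r, results_s = interpreters.create_channel()  # worker -> main

   def worker():
       interp.run("""if True:
           data = tasks.recv()
           while data:
               results.send(data.upper())    # stand-in for real work
               data = tasks.recv()
           tasks.close()
           results.close()
           """,
           tasks=tasks_r, results=results_s)

   t = threading.Thread(target=worker)
   t.start()

   for msg in (b'spam', b'eggs'):
       tasks_s.send(msg)             # rendezvous: blocks until received
       print(results_r.recv())       # b'SPAM', then b'EGGS'
   tasks_s.send(b'')                 # empty message: tell the worker to stop
   t.join()
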
Existing Usage
--------------

Subinterpreters are not a widely used feature. In fact, the only
documented case of widespread usage is
`mod_wsgi <https://github.com/GrahamDumpleton/mod_wsgi>`_. On the one
hand, this case provides confidence that existing subinterpreter support
is relatively stable. On the other hand, there isn't much of a sample
size from which to judge the utility of the feature.


Provisional Status
==================

The new ``interpreters`` module will be added with "provisional" status
(see PEP 411). This allows Python users to experiment with the feature
and provide feedback while still allowing us to adjust to that feedback.
The module will be provisional in Python 3.7 and we will make a decision
before the 3.8 release whether to keep it provisional, graduate it, or
remove it.


Alternate Python Implementations
================================

TBD

Open Questions
==============

Leaking exceptions across interpreters
--------------------------------------

As currently proposed, uncaught exceptions from ``run()`` propagate
to the frame that called it. However, this means that exception
objects are leaking across the inter-interpreter boundary. Likewise,
the frames in the traceback potentially leak.

While that might not be a problem currently, it would be a problem once
interpreters get better isolation relative to memory management (which
is necessary to stop sharing the GIL between interpreters). So the
semantics of how the exceptions propagate need to be resolved.

Initial support for buffers in channels
---------------------------------------

An alternative to support for bytes in channels is support for
read-only buffers (the PEP 3118 kind). Then ``recv()`` would return
a memoryview to expose the buffer in a zero-copy way. This is similar
to what ``multiprocessing.Connection`` supports. [mp-conn]_

Switching to such an approach would help resolve questions of how
passing bytes through channels will work once we isolate memory
management in interpreters.


Deferred Functionality
======================

In the interest of keeping this proposal minimal, the following
functionality has been left out for future consideration. Note that
this is not a judgement against any of said capability, but rather a
deferment. That said, each is arguably valid.

Interpreter.call()
------------------

It would be convenient to run existing functions in subinterpreters
directly. ``Interpreter.run()`` could be adjusted to support this or
a ``call()`` method could be added::

   Interpreter.call(f, *args, **kwargs)

This suffers from the same problem as sharing objects between
interpreters via queues. The minimal solution (running a source string)
is sufficient for us to get the feature out where it can be explored.

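In the meantime, something resembling ``call()`` can be approximated on
top of ``run()``, pickle, and a pair of channels, in the spirit of the
"Passing objects via pickle" example above. The helper name
``call_in`` is made up for illustration; this is a hedged sketch, not
proposed API::

   import pickle
   import threading
   import interpreters

   def call_in(interp, func):
       # Run a picklable zero-argument callable in the subinterpreter
       # and return its (pickled) result.
       func_r, func_s = interpreters.create_channel()
       result_r, result_s = interpreters.create_channel()

       def target():
           interp.run("""if True:
               import pickle
               func = pickle.loads(reader.recv())
               writer.send(pickle.dumps(func()))
               """,
               reader=func_r, writer=result_s)

       t = threading.Thread(target=target)
       t.start()
       func_s.send(pickle.dumps(func))      # rendezvous with recv() above
       result = pickle.loads(result_r.recv())
       t.join()
       return result

Note that ``func`` must be picklable, and therefore importable in the
subinterpreter, which hints at the object-sharing problem mentioned
above.
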
timeout arg to send() and recv()
--------------------------------

Typically, functions that may block also accept a ``timeout`` argument.
We can add one to ``send()`` and ``recv()`` later if needed.

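If a timeout is needed before then, it can be approximated with
``recv_nowait()`` and a deadline. A hedged sketch (the helper name
``recv_timeout`` is illustrative only)::

   import time

   def recv_timeout(r, timeout):
       # Poll recv_nowait() until an object arrives or the deadline passes.
       missing = object()
       deadline = time.monotonic() + timeout
       while time.monotonic() < deadline:
           obj = r.recv_nowait(default=missing)
           if obj is not missing:
               return obj
           time.sleep(0.001)      # avoid a busy loop
       raise TimeoutError
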
get_main()
----------

CPython has a concept of a "main" interpreter. This is the initial
interpreter created during CPython's runtime initialization. It may
be useful to identify the main interpreter. For instance, the main
interpreter should not be destroyed. However, for the basic
functionality of a high-level API a ``get_main()`` function is not
necessary. Furthermore, there is no requirement that a Python
implementation have a concept of a main interpreter. So until there's
a clear need we'll leave ``get_main()`` out.

Interpreter.run_in_thread()
---------------------------

This method would make a ``run()`` call for you in a thread. Doing this
using only ``threading.Thread`` and ``run()`` is relatively trivial so
we've left it out.

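For reference, the omitted method amounts to roughly the following
hedged sketch of the trivial workaround (not proposed API)::

   import threading

   def run_in_thread(interp, source, **shared):
       # Make the run() call in a new thread so the caller isn't paused.
       t = threading.Thread(target=interp.run, args=(source,), kwargs=shared)
       t.start()
       return t      # the caller may join() it later
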
Synchronization Primitives
--------------------------

The ``threading`` module provides a number of synchronization primitives
for coordinating concurrent operations. This is especially necessary
due to the shared-state nature of threading. In contrast,
subinterpreters do not share state. Data sharing is restricted to
channels, which do away with the need for explicit synchronization. If
any sort of opt-in shared state support is added to subinterpreters in
the future, that same effort can introduce synchronization primitives
to meet that need.

CSP Library
-----------

A ``csp`` module would not be a large step away from the functionality
provided by this PEP. However, adding such a module is outside the
minimalist goals of this proposal.

Syntactic Support
-----------------

The ``Go`` language provides a concurrency model based on CSP, so
it's similar to the concurrency model that subinterpreters support.
``Go`` provides syntactic support, as well as several builtin concurrency
primitives, to make concurrency a first-class feature. Conceivably,
similar syntactic (and builtin) support could be added to Python using
subinterpreters. However, that is *way* outside the scope of this PEP!

Multiprocessing
---------------

The ``multiprocessing`` module could support subinterpreters in the same
way it supports threads and processes. In fact, the module's
maintainer, Davin Potts, has indicated this is a reasonable feature
request. However, it is outside the narrow scope of this PEP.

C-extension opt-in/opt-out
--------------------------

By using the ``PyModuleDef_Slot`` introduced by PEP 489, we could easily
add a mechanism by which C-extension modules could opt out of support
for subinterpreters. Then the import machinery, when operating in
a subinterpreter, would need to check the module for support. It would
raise an ImportError if unsupported.

Alternately we could support opting in to subinterpreter support.
However, that would probably exclude many more modules (unnecessarily)
than the opt-out approach.

The scope of adding the ModuleDef slot and fixing up the import
machinery is non-trivial, but could be worth it. It all depends on
how many extension modules break under subinterpreters. Given the
relatively few cases we know of through mod_wsgi, we can leave this
for later.

Poisoning channels
------------------

CSP has the concept of poisoning a channel. Once a channel has been
poisoned, any ``send()`` or ``recv()`` call on it will raise a special
exception, effectively ending execution in the interpreter that tried
to use the poisoned channel.

This could be accomplished by adding a ``poison()`` method to both ends
of the channel. The ``close()`` method could work if it had a ``force``
option to force the channel closed. Regardless, these semantics are
relatively specialized and can wait.

Sending channels over channels
------------------------------

Some advanced usage of subinterpreters could take advantage of the
ability to send channels over channels, in addition to bytes. Given
that channels will already be multi-interpreter safe, supporting them
in ``RecvChannel.recv()`` wouldn't be a big change. However, this can
wait until the basic functionality has been ironed out.

Resetting __main__
------------------

As proposed, every call to ``Interpreter.run()`` will execute in the
namespace of the interpreter's existing ``__main__`` module. This means
that data persists there between ``run()`` calls. Sometimes this isn't
desirable and you want to execute in a fresh ``__main__``. Also,
you don't necessarily want to leak objects there that you aren't using
any more.

Solutions include:

* a ``create()`` arg to indicate resetting ``__main__`` after each
  ``run`` call
* an ``Interpreter.reset_main`` flag to support opting in or out
  after the fact
* an ``Interpreter.reset_main()`` method to opt in when desired

This isn't a critical feature initially. It can wait until later
if desirable. In the meantime, ``__main__`` can be cleared manually,
as sketched below.

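A hedged sketch of that manual workaround, given an ``Interpreter``
instance ``interp`` and using only the proposed ``run()`` semantics::

   # Illustration only: since run() executes in the interpreter's
   # __main__ namespace, a run() call can clear that namespace itself.
   interp.run("""if True:
       for _name in list(globals()):
           if not _name.startswith('_'):
               del globals()[_name]
       del _name
       """)
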
Support passing ints in channels
--------------------------------

Passing ints around should be fine and ultimately is probably
desirable. However, we can get by with serializing them as bytes
for now. The goal is a minimal API for the sake of basic
functionality at first.

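The serialization in question is the same trick used in the
"Sharing a file descriptor" example above; a minimal sketch, with
``r, s = interpreters.create_channel()`` as in the earlier examples::

   # In the sending interpreter:
   s.send((42).to_bytes(8, 'big'))

   # In the receiving interpreter:
   value = int.from_bytes(r.recv(), 'big')    # 42
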
File descriptors and sockets in channels
----------------------------------------

Given that file descriptors and sockets are process-global resources,
support for passing them through channels is a reasonable idea. They
would be a good candidate for the first effort at expanding the types
that channels support. They aren't strictly necessary for the initial
API.


Rejected Ideas
==============

Explicit channel association
----------------------------

Interpreters are implicitly associated with channels upon ``recv()`` and
``send()`` calls. They are de-associated with ``close()`` calls. The
alternative would be explicit methods. It would be either
``add_channel()`` and ``remove_channel()`` methods on ``Interpreter``
objects or something similar on channel objects.

In practice, this level of management shouldn't be necessary for users.
So adding more explicit support would only add clutter to the API.

Use pipes instead of channels
-----------------------------

A pipe would be a simplex FIFO between exactly two interpreters. For
most use cases this would be sufficient. It could potentially simplify
the implementation as well. However, it isn't a big step to supporting
a many-to-many simplex FIFO via channels. Also, with pipes the API
ends up being slightly more complicated, requiring naming the pipes.

Use queues instead of channels
------------------------------

The main difference between queues and channels is that queues support
buffering. This would complicate the blocking semantics of ``recv()``
and ``send()``. Also, queues can be built on top of channels.

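To support the last point, here is a hedged sketch of how buffering
might be layered over a channel's rendezvous-style ``send()`` with a
feeder thread. The class and names are illustrative only, not proposed
API::

   import queue
   import threading

   class BufferedSender:
       def __init__(self, send_channel):
           self._send = send_channel
           self._buffer = queue.Queue()
           threading.Thread(target=self._feed, daemon=True).start()

       def put(self, data):
           self._buffer.put(data)      # returns immediately

       def _feed(self):
           while True:
               # Blocks until a receiver takes each object, but only in
               # this background thread, so put() never blocks.
               self._send.send(self._buffer.get())

The receiving side can keep using ``recv()`` unchanged.
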
"enumerate"
|
||
-----------
|
||
|
||
The ``list_all()`` function provides the list of all interpreters.
|
||
In the threading module, which partly inspired the proposed API, the
|
||
function is called ``enumerate()``. The name is different here to
|
||
avoid confusing Python users that are not already familiar with the
|
||
threading API. For them "enumerate" is rather unclear, whereas
|
||
"list_all" is clear.
|
||
|
||
|
||
References
|
||
==========
|
||
|
||
.. [c-api]
|
||
https://docs.python.org/3/c-api/init.html#sub-interpreter-support
|
||
|
||
.. _Communicating Sequential Processes:
|
||
|
||
.. [CSP]
|
||
https://en.wikipedia.org/wiki/Communicating_sequential_processes
|
||
https://github.com/futurecore/python-csp
|
||
|
||
.. [fifo]
|
||
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Pipe
|
||
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue
|
||
https://docs.python.org/3/library/queue.html#module-queue
|
||
http://stackless.readthedocs.io/en/2.7-slp/library/stackless/channels.html
|
||
https://golang.org/doc/effective_go.html#sharing
|
||
http://www.jtolds.com/writing/2016/03/go-channels-are-bad-and-you-should-feel-bad/
|
||
|
||
.. [caveats]
|
||
https://docs.python.org/3/c-api/init.html#bugs-and-caveats
|
||
|
||
.. [petr-c-ext]
|
||
https://mail.python.org/pipermail/import-sig/2016-June/001062.html
|
||
https://mail.python.org/pipermail/python-ideas/2016-April/039748.html
|
||
|
||
.. [cryptography]
|
||
https://github.com/pyca/cryptography/issues/2299
|
||
|
||
.. [global-gc]
|
||
http://bugs.python.org/issue24554
|
||
|
||
.. [gilstate]
|
||
https://bugs.python.org/issue10915
|
||
http://bugs.python.org/issue15751
|
||
|
||
.. [global-atexit]
|
||
https://bugs.python.org/issue6531
|
||
|
||
.. [mp-conn]
|
||
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Connection
|
||
|
||
|
||
Copyright
|
||
=========
|
||
|
||
This document has been placed in the public domain.
|
||
|
||
|
||
|
||
..
|
||
Local Variables:
|
||
mode: indented-text
|
||
indent-tabs-mode: nil
|
||
sentence-end-double-space: t
|
||
fill-column: 70
|
||
coding: utf-8
|
||
End:
|