diff --git a/pep-0554.rst b/pep-0554.rst index 6ba0952ed..c4a7e205e 100644 --- a/pep-0554.rst +++ b/pep-0554.rst @@ -16,62 +16,54 @@ Abstract CPython has supported multiple interpreters in the same process (AKA "subinterpreters") since version 1.5 (1997). The feature has been -available via the C-API. [c-api]_ Subinterpreters operate in +available via the C-API. [c-api]_ Multiple interpreters operate in `relative isolation from one another `_, which facilitates novel alternative approaches to `concurrency `_. -This proposal introduces the stdlib ``interpreters`` module. The module -will be `provisional `_. It exposes the basic -functionality of subinterpreters already provided by the C-API, along -with new (basic) functionality for sharing data between interpreters. +This proposal introduces the stdlib ``interpreters`` module. It exposes +the basic functionality of multiple interpreters already provided by the +C-API, along with a *very* basic way to communicate +(i.e. pass data between interpreters). A Disclaimer about the GIL ========================== -To avoid any confusion up front: This PEP is unrelated to any efforts -to stop sharing the GIL between subinterpreters. At most this proposal -will allow users to take advantage of any results of work on the GIL. -The position here is that exposing subinterpreters to Python code is -worth doing, even if they still share the GIL. +To avoid any confusion up front: This PEP is meant to be independent +of any efforts to stop sharing the GIL between interpreters (:pep:`684`). +At most this proposal will allow users to take advantage of any +GIL-related work. + +The author's position here is that exposing multiple interpreters +to Python code is worth doing, even if they still share the GIL. +Conversations with past steering councils indicates they do not +necessarily agree. Proposal ======== -The ``interpreters`` module will be added to the stdlib. To help -authors of extension modules, a new page will be added to the -`Extending Python `_ docs. More information on both -is found in the immediately following sections. - The "interpreters" Module ------------------------- -The ``interpreters`` module will -provide a high-level interface to subinterpreters and wrap a new -low-level ``_interpreters`` (in the same way as the ``threading`` -module). See the `Examples`_ section for concrete usage and use cases. +The ``interpreters`` module will provide a high-level interface +to the multiple interpreter functionality, and wrap a new low-level +``_interpreters`` (in the same way as the ``threading`` module). +See the `Examples`_ section for concrete usage and use cases. -Along with exposing the existing (in CPython) subinterpreter support, -the module will also provide a mechanism for sharing data between -interpreters. This mechanism centers around "channels", which are -similar to queues and pipes. +Along with exposing the existing (in CPython) multiple interpreter +support, the module will also support a very basic mechanism for +passing data between interpreters. That involves setting simple objects +in the ``__main__`` module of a target subinterpreter. If one end of +an ``os.pipe()`` is passed this way then that pipe can be used to send +bytes between the two interpreters. Note that *objects* are not shared between interpreters since they are tied to the interpreter in which they were created. Instead, the -objects' *data* is passed between interpreters. See the `Shared data`_ -section for more details about sharing between interpreters. 
- -At first only the following types will be supported for sharing: - -* None -* bytes -* str -* int -* :pep:`554` channels - -Support for other basic types (e.g. bool, float, Ellipsis) will be added later. +objects' *data* is passed between interpreters. See the `Shared Data`_ +and `API For Sharing Data`_ sections for more details about +sharing/communicating between interpreters. API summary for interpreters module ----------------------------------- @@ -82,36 +74,37 @@ the `"interpreters" Module API`_ section below. For creating and using interpreters: -+---------------------------------------------+----------------------------------------------+ -| signature | description | -+=============================================+==============================================+ -| ``list_all() -> [Interpreter]`` | Get all existing interpreters. | -+---------------------------------------------+----------------------------------------------+ -| ``get_current() -> Interpreter`` | Get the currently running interpreter. | -+---------------------------------------------+----------------------------------------------+ -| ``get_main() -> Interpreter`` | Get the main interpreter. | -+---------------------------------------------+----------------------------------------------+ -| ``create(*, isolated=True) -> Interpreter`` | Initialize a new (idle) Python interpreter. | -+---------------------------------------------+----------------------------------------------+ ++----------------------------------+----------------------------------------------+ +| signature | description | ++==================================+==============================================+ +| ``list_all() -> [Interpreter]`` | Get all existing interpreters. | ++----------------------------------+----------------------------------------------+ +| ``get_current() -> Interpreter`` | Get the currently running interpreter. | ++----------------------------------+----------------------------------------------+ +| ``get_main() -> Interpreter`` | Get the main interpreter. | ++----------------------------------+----------------------------------------------+ +| ``create() -> Interpreter`` | Initialize a new (idle) Python interpreter. | ++----------------------------------+----------------------------------------------+ | -+----------------------------------------+-----------------------------------------------------+ -| signature | description | -+========================================+=====================================================+ -| ``class Interpreter(id)`` | A single interpreter. | -+----------------------------------------+-----------------------------------------------------+ -| ``.id`` | The interpreter's ID (read-only). | -+----------------------------------------+-----------------------------------------------------+ -| ``.isolated`` | The interpreter's mode (read-only). | -+----------------------------------------+-----------------------------------------------------+ -| ``.is_running() -> bool`` | Is the interpreter currently executing code? | -+----------------------------------------+-----------------------------------------------------+ -| ``.close()`` | Finalize and destroy the interpreter. | -+----------------------------------------+-----------------------------------------------------+ -| ``.run(src_str, /, *, channels=None)`` | | Run the given source code in the interpreter. | -| | | (This blocks the current thread until done.) 
| -+----------------------------------------+-----------------------------------------------------+ ++---------------------------------------------------+---------------------------------------------------+ +| signature | description | ++===================================================+===================================================+ +| ``class Interpreter`` | A single interpreter. | ++---------------------------------------------------+---------------------------------------------------+ +| ``.id`` | The interpreter's ID (read-only). | ++---------------------------------------------------+---------------------------------------------------+ +| ``.is_running() -> bool`` | Is the interpreter currently executing code? | ++---------------------------------------------------+---------------------------------------------------+ +| ``.close()`` | Finalize and destroy the interpreter. | ++---------------------------------------------------+---------------------------------------------------+ +| ``.run(src_str, /, *, shared=None) -> Status`` | | Run the given source code in the interpreter | +| | | (in its own thread). | ++---------------------------------------------------+---------------------------------------------------+ + +.. XXX Support blocking interp.run() until the interpreter + finishes its current work. | @@ -121,6 +114,28 @@ For creating and using interpreters: | ``RunFailedError`` | ``RuntimeError`` | Interpreter.run() resulted in an uncaught exception. | +--------------------+------------------+------------------------------------------------------+ +.. XXX Add "InterpreterAlreadyRunningError"? + +Asynchronous results: + ++--------------------------------------------------+---------------------------------------------------+ +| signature | description | ++==================================================+===================================================+ +| ``class Status`` | Tracks if a request is complete. | ++--------------------------------------------------+---------------------------------------------------+ +| ``.wait(timeout=None)`` | Block until the requested work is done. | ++--------------------------------------------------+---------------------------------------------------+ +| ``.done() -> bool`` | Has the requested work completed (or failed)? | ++--------------------------------------------------+---------------------------------------------------+ +| ``.exception() -> Exception | None`` | Return any exception from the requested work. | ++--------------------------------------------------+---------------------------------------------------+ + ++--------------------------+------------------------+------------------------------------------------+ +| exception | base | description | ++==========================+========================+================================================+ +| ``NotFinishedError`` | ``Exception`` | The request has not completed yet. | ++--------------------------+------------------------+------------------------------------------------+ + For sharing data between interpreters: +---------------------------------------------------------+--------------------------------------------+ @@ -129,75 +144,27 @@ For sharing data between interpreters: | ``is_shareable(obj) -> Bool`` | | Can the object's data be shared | | | | between interpreters? 
| +---------------------------------------------------------+--------------------------------------------+ -| ``create_channel() -> (RecvChannel, SendChannel)`` | | Create a new channel for passing | -| | | data between interpreters. | -+---------------------------------------------------------+--------------------------------------------+ -| ``list_all_channels() -> [(RecvChannel, SendChannel)]`` | Get all open channels. | -+---------------------------------------------------------+--------------------------------------------+ - -| - -+------------------------------------------+-----------------------------------------------+ -| signature | description | -+==========================================+===============================================+ -| ``class RecvChannel(id)`` | The receiving end of a channel. | -+------------------------------------------+-----------------------------------------------+ -| ``.id`` | The channel's unique ID. | -+------------------------------------------+-----------------------------------------------+ -| ``.recv() -> object`` | | Get the next object from the channel, | -| | | and wait if none have been sent. | -+------------------------------------------+-----------------------------------------------+ -| ``.recv_nowait(default=None) -> object`` | | Like recv(), but return the default | -| | | instead of waiting. | -+------------------------------------------+-----------------------------------------------+ - -| - -+------------------------------+--------------------------------------------------+ -| signature | description | -+==============================+==================================================+ -| ``class SendChannel(id)`` | The sending end of a channel. | -+------------------------------+--------------------------------------------------+ -| ``.id`` | The channel's unique ID. | -+------------------------------+--------------------------------------------------+ -| ``.send(obj)`` | | Send the object (i.e. its data) to the | -| | | receiving end of the channel and wait. | -+------------------------------+--------------------------------------------------+ -| ``.send_nowait(obj)`` | | Like send(), but return False if not received. | -+------------------------------+--------------------------------------------------+ - -| - -+--------------------------+------------------------+------------------------------------------------+ -| exception | base | description | -+==========================+========================+================================================+ -| ``ChannelError`` | ``Exception`` | The base class for channel-related exceptions. | -+--------------------------+------------------------+------------------------------------------------+ -| ``ChannelNotFoundError`` | ``ChannelError`` | The identified channel was not found. | -+--------------------------+------------------------+------------------------------------------------+ -| ``ChannelEmptyError`` | ``ChannelError`` | The channel was unexpectedly empty. | -+--------------------------+------------------------+------------------------------------------------+ -| ``ChannelNotEmptyError`` | ``ChannelError`` | The channel was unexpectedly not empty. | -+--------------------------+------------------------+------------------------------------------------+ -| ``NotReceivedError`` | ``ChannelError`` | Nothing was waiting to receive a sent object. 
| -+--------------------------+------------------------+------------------------------------------------+ Help for Extension Module Maintainers ------------------------------------- -Many extension modules do not support use in subinterpreters yet. The -maintainers and users of such extension modules will both benefit when -they are updated to support subinterpreters. In the meantime users may -become confused by failures when using subinterpreters, which could +In practice, an extension that implements multi-phase init (:pep:`489`) +is considered isolated and thus compatible with multiple interpreters. +Otherwise it is "incompatible". + +Many extension modules are still incompatible. The maintainers and +users of such extension modules will both benefit when they are updated +to support multiple interpreters. In the meantime, users may become +confused by failures when using multiple interpreters, which could negatively impact extension maintainers. See `Concerns`_ below. To mitigate that impact and accelerate compatibility, we will do the following: * be clear that extension modules are *not* required to support use in - subinterpreters -* raise ``ImportError`` when an incompatible (no :pep:`489` support) module - is imported in a subinterpreter + multiple interpreters +* raise ``ImportError`` when an incompatible module is imported + in a subinterpreter * provide resources (e.g. docs) to help maintainers reach compatibility * reach out to the maintainers of Cython and of the most used extension modules (on PyPI) to get feedback and possibly provide assistance @@ -213,20 +180,7 @@ Run isolated code interp = interpreters.create() print('before') - interp.run('print("during")') - print('after') - -Run in a thread ---------------- - -:: - - interp = interpreters.create() - def run(): - interp.run('print("during")') - t = threading.Thread(target=run) - print('before') - t.start() + interp.run('print("during")').wait() print('after') Pre-populate an interpreter @@ -235,12 +189,13 @@ Pre-populate an interpreter :: interp = interpreters.create() - interp.run(tw.dedent(""" + st = interp.run(tw.dedent(""" import some_lib import an_expensive_module some_lib.set_up() """)) wait_for_request() + st.wait() interp.run(tw.dedent(""" some_lib.handle_request() """)) @@ -254,7 +209,7 @@ Handling an exception try: interp.run(tw.dedent(""" raise KeyError - """)) + """)).wait() except interpreters.RunFailedError as exc: print(f"got the error from the subinterpreter: {exc}") @@ -268,7 +223,7 @@ Re-raising an exception try: interp.run(tw.dedent(""" raise KeyError - """)) + """)).wait() except interpreters.RunFailedError as exc: raise exc.__cause__ except KeyError: @@ -276,27 +231,25 @@ Re-raising an exception Note that this pattern is a candidate for later improvement. 
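Checking the status of running code
------------------------------------

The preceding examples block via ``Status.wait()``.  As a rough sketch
using only the ``Status`` methods summarized above, a caller could also
check on the work without blocking indefinitely::

    interp = interpreters.create()
    st = interp.run(tw.dedent("""
        import time
        time.sleep(1)
        """))
    # run() returns immediately; the code runs in the interpreter's
    # own thread, so we can keep working here in the meantime.
    print('still running:', not st.done())
    st.wait(timeout=5)
    if st.exception() is None:
        print('finished cleanly')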
-Synchronize using a channel ---------------------------- +Synchronize using an OS pipe +---------------------------- :: interp = interpreters.create() - r, s = interpreters.create_channel() - def run(): - interp.run(tw.dedent(""" - reader.recv() + r, s = os.pipe() + print('before') + interp.run(tw.dedent(""" + import os + os.read(reader, 1) print("during") """), shared=dict( reader=r, ), ) - t = threading.Thread(target=run) - print('before') - t.start() print('after') - s.send(b'') + os.write(s, '') Sharing a file descriptor ------------------------- @@ -304,56 +257,25 @@ Sharing a file descriptor :: interp = interpreters.create() - r1, s1 = interpreters.create_channel() - r2, s2 = interpreters.create_channel() - def run(): - interp.run(tw.dedent(""" + r1, s1 = os.pipe() + r2, s2 = os.pipe() + interp.run(tw.dedent(""" + import os fd = int.from_bytes( - reader.recv(), 'big') + os.read(reader, 10), 'big') for line in os.fdopen(fd): print(line) - writer.send(b'') + os.write(writer, b'') """), shared=dict( - reader=r, + reader=r1, writer=s2, ), ) - t = threading.Thread(target=run) - t.start() with open('spamspamspam') as infile: fd = infile.fileno().to_bytes(1, 'big') - s.send(fd) - r.recv() - -Passing objects via marshal ---------------------------- - -:: - - interp = interpreters.create() - r, s = interpreters.create_channel() - interp.run(tw.dedent(""" - import marshal - """), - shared=dict( - reader=r, - ), - ) - def run(): - interp.run(tw.dedent(""" - data = reader.recv() - while data: - obj = marshal.loads(data) - do_something(obj) - data = reader.recv() - """)) - t = threading.Thread(target=run) - t.start() - for obj in input: - data = marshal.dumps(obj) - s.send(data) - s.send(None) + os.write(s1, fd) + os.read(r2, 1) Passing objects via pickle -------------------------- @@ -361,28 +283,31 @@ Passing objects via pickle :: interp = interpreters.create() - r, s = interpreters.create_channel() + r, s = os.pipe() interp.run(tw.dedent(""" + import os import pickle """), shared=dict( reader=r, ), - ) - def run(): - interp.run(tw.dedent(""" - data = reader.recv() - while data: + ).wait() + interp.run(tw.dedent(""" + data = b'' + c = os.read(reader, 1) + while c != b'\x00': + while c != b'\x00': + data += c + c = os.read(reader, 1) obj = pickle.loads(data) do_something(obj) - data = reader.recv() + c = os.read(reader, 1) """)) - t = threading.Thread(target=run) - t.start() for obj in input: data = pickle.dumps(obj) - s.send(data) - s.send(None) + os.write(s, data) + os.write(s, b'\x00') + os.write(s, b'\x00') Running a module ---------------- @@ -402,18 +327,6 @@ Running as script (including zip archives & directories) main_script = path_name interp.run(f"import runpy; runpy.run_path({main_script!r})") -Running in a thread pool executor ---------------------------------- - -:: - - interps = [interpreters.create() for i in range(5)] - with concurrent.futures.ThreadPoolExecutor(max_workers=len(interps)) as pool: - print('before') - for interp in interps: - pool.submit(interp.run, 'print("starting"); print("stopping")' - print('after') - Rationale ========= @@ -421,7 +334,7 @@ Rationale Running code in multiple interpreters provides a useful level of isolation within the same process. This can be leveraged in a number of ways. Furthermore, subinterpreters provide a well-defined framework -in which such isolation may extended. +in which such isolation may extended. (See :pep:`684`.) 
Nick Coghlan explained some of the benefits through a comparison with multi-processing [benefits]_:: @@ -444,16 +357,18 @@ multi-processing [benefits]_:: poking holes in the process isolation that operating systems give you by default. -CPython has supported subinterpreters, with increasing levels of -support, since version 1.5. While the feature has the potential -to be a powerful tool, subinterpreters have suffered from neglect -because they are not available directly from Python. Exposing the -existing functionality in the stdlib will help reverse the situation. +CPython has supported multiple interpreters, with increasing levels +of support, since version 1.5. While the feature has the potential +to be a powerful tool, it has suffered from neglect +because the multiple interpreter capabilities are not readily available +directly from Python. Exposing the existing functionality +in the stdlib will help reverse the situation. This proposal is focused on enabling the fundamental capability of -multiple isolated interpreters in the same Python process. This is a +multiple interpreters, isolated from each other, +in the same Python process. This is a new area for Python so there is relative uncertainly about the best -tools to provide as companions to subinterpreters. Thus we minimize +tools to provide as companions to interpreters. Thus we minimize the functionality we add in the proposal as much as possible. Concerns @@ -464,11 +379,13 @@ Concerns Some have argued that subinterpreters do not add sufficient benefit to justify making them an official part of Python. Adding features to the language (or stdlib) has a cost in increasing the size of -the language. So an addition must pay for itself. In this case, -subinterpreters provide a novel concurrency model focused on isolated -threads of execution. Furthermore, they provide an opportunity for -changes in CPython that will allow simultaneous use of multiple CPU -cores (currently prevented by the GIL). +the language. So an addition must pay for itself. + +In this case, multiple interpreter support provide a novel concurrency +model focused on isolated threads of execution. Furthermore, they +provide an opportunity for changes in CPython that will allow +simultaneous use of multiple CPU cores (currently prevented +by the GIL--see :pep:`684`). Alternatives to subinterpreters include threading, async, and multiprocessing. Threading is limited by the GIL and async isn't @@ -485,7 +402,7 @@ concurrency model (e.g. CSP) which has found success elsewhere and will appeal to some Python users. That is the core value that the ``interpreters`` module will provide. -* "stdlib support for subinterpreters adds extra burden +* "stdlib support for multiple interpreters adds extra burden on C extension authors" In the `Interpreter Isolation`_ section below we identify ways in @@ -521,25 +438,25 @@ consideration. It is not something that can be done a simply as this PEP proposes and likely deserves significant time on PyPI to mature. (See `Nathaniel's post `_ on python-dev.) -However, this PEP does not propose any new concurrency API. At most -it exposes minimal tools (e.g. subinterpreters, channels) which may -be used to write code that follows patterns associated with (relatively) -new-to-Python `concurrency models `_. Those tools could -also be used as the basis for APIs for such concurrency models. -Again, this PEP does not propose any such API. +However, this PEP does not propose any new concurrency API. +At most it exposes minimal tools (e.g. 
subinterpreters, simple "sharing") +which may be used to write code that follows patterns associated with +(relatively) new-to-Python `concurrency models `_. +Those tools could also be used as the basis for APIs for such +concurrency models. Again, this PEP does not propose any such API. * "there is no point to exposing subinterpreters if they still share the GIL" * "the effort to make the GIL per-interpreter is disruptive and risky" A common misconception is that this PEP also includes a promise that -subinterpreters will no longer share the GIL. When that is clarified, +interpreters will no longer share the GIL. When that is clarified, the next question is "what is the point?". This is already answered at length in this PEP. Just to be clear, the value lies in:: * increase exposure of the existing feature, which helps improve the code health of the entire CPython runtime - * expose the (mostly) isolated execution of subinterpreters + * expose the (mostly) isolated execution of interpreters * preparation for per-interpreter GIL * encourage experimentation @@ -566,16 +483,18 @@ each with different goals. Most center on correctness and usability. One class of concurrency models focuses on isolated threads of execution that interoperate through some message passing scheme. A notable example is Communicating Sequential Processes [CSP]_ (upon -which Go's concurrency is roughly based). The isolation inherent to -subinterpreters makes them well-suited to this approach. +which Go's concurrency is roughly based). The inteded isolation +inherent to CPython's interpreters makes them well-suited +to this approach. -Shared data +Shared Data ----------- -Subinterpreters are inherently isolated (with caveats explained below), -in contrast to threads. So the same communicate-via-shared-memory -approach doesn't work. Without an alternative, effective use of -concurrency via subinterpreters is significantly limited. +CPython's interpreters are inherently isolated (with caveats +explained below), in contrast to threads. So the same +communicate-via-shared-memory approach doesn't work. Without an +alternative, effective use of concurrency via multiple interpreters +is significantly limited. The key challenge here is that sharing objects between interpreters faces complexity due to various constraints on object ownership, @@ -583,52 +502,31 @@ visibility, and mutability. At a conceptual level it's easier to reason about concurrency when objects only exist in one interpreter at a time. At a technical level, CPython's current memory model limits how Python *objects* may be shared safely between interpreters; -effectively objects are bound to the interpreter in which they were +effectively, objects are bound to the interpreter in which they were created. Furthermore, the complexity of *object* sharing increases as -subinterpreters become more isolated, e.g. after GIL removal. +interpreters become more isolated, e.g. after GIL removal (though this +is mitigated somewhat for some "immortal" objects (see :pep:`683`). Consequently,the mechanism for sharing needs to be carefully considered. There are a number of valid solutions, several of which may be -appropriate to support in Python. This proposal provides a single basic -solution: "channels". Ultimately, any other solution will look similar -to the proposed one, which will set the precedent. 
Note that the -implementation of ``Interpreter.run()`` will be done in a way that -allows for multiple solutions to coexist, but doing so is not -technically a part of the proposal here. +appropriate to support in Python. Earlier versions of this proposal +included a basic capability ("channels"), though most of the options +were quite similar. -Regarding the proposed solution, "channels", it is a basic, opt-in data -sharing mechanism that draws inspiration from pipes, queues, and CSP's -channels. [fifo]_ +Note that the implementation of ``Interpreter.run()`` will be done +in a way that allows for may of these solutions to be implemented +independently and to coexist, but doing so is not technically +a part of the proposal here. -As simply described earlier by the API summary, -channels have two operations: send and receive. A key characteristic -of those operations is that channels transmit data derived from Python -objects rather than the objects themselves. When objects are sent, -their data is extracted. When the "object" is received in the other -interpreter, the data is converted back into an object owned by that -interpreter. +The fundamental enabling feature for communication is that most objects +can be converted to some encoding of underlying raw data, which is safe +to be passed between interpreters. For example, an ``int`` object can +be turned into a C ``long`` value, send to another interpreter, and +turned back into an ``int`` object there. -To make this work, the mutable shared state will be managed by the -Python runtime, not by any of the interpreters. Initially we will -support only one type of objects for shared state: the channels provided -by ``create_channel()``. Channels, in turn, will carefully manage -passing objects between interpreters. - -This approach, including keeping the API minimal, helps us avoid further -exposing any underlying complexity to Python users. Along those same -lines, we will initially restrict the types that may be passed through -channels to the following: - -* None -* bytes -* str -* int -* channels - -Limiting the initial shareable types is a practical matter, reducing -the potential complexity of the initial implementation. There are a -number of strategies we may pursue in the future to expand supported -objects and object sharing strategies. +Regardless, the effort to determine the best way forward here is outside +the scope of this PEP. In the meantime, this proposal provides a basic +interim solution, described in `API For Sharing Data`_ below. Interpreter Isolation --------------------- @@ -670,30 +568,14 @@ area: Existing Usage -------------- -Subinterpreters are not a widely used feature. In fact, the only -documented cases of widespread usage are +Multiple interpreter support is not a widely used feature. In fact, +the only documented cases of widespread usage are `mod_wsgi `_, `OpenStack Ceph `_, and `JEP `_. On the one hand, these cases -provide confidence that existing subinterpreter support is relatively -stable. On the other hand, there isn't much of a sample size from which -to judge the utility of the feature. - - -Provisional Status -================== - -The new ``interpreters`` module will be added with "provisional" status -(see :pep:`411`). This allows Python users to experiment with the feature -and provide feedback while still allowing us to adjust to that feedback. 
-The module will be provisional in Python 3.9 and we will make a decision -before the 3.10 release whether to keep it provisional, graduate it, or -remove it. This PEP will be updated accordingly. - -While the module is provisional, any changes to the API (or to behavior) -do not need to be reflected here, nor get approval by the BDFL-delegate. -However, such changes will still need to go through the normal processes -(BPO for smaller changes and python-dev/PEP for substantial ones). +provide confidence that existing multiple interpreter support is +relatively stable. On the other hand, there isn't much of a sample +size from which to judge the utility of the feature. Alternate Python Implementations @@ -701,8 +583,8 @@ Alternate Python Implementations I've solicited feedback from various Python implementors about support for subinterpreters. Each has indicated that they would be able to -support subinterpreters (if they choose to) without a lot of -trouble. Here are the projects I contacted: +support multiple interpreters in the same process (if they choose to) +without a lot of trouble. Here are the projects I contacted: * jython ([jython]_) * ironpython (personal correspondence) @@ -733,17 +615,14 @@ The module provides the following functions:: Return the main interpreter. If the Python implementation has no concept of a main interpreter then return None. - create(*, isolated=True) -> Interpreter + create() -> Interpreter - Initialize a new Python interpreter and return it. The - interpreter will be created in the current thread and will remain - idle until something is run in it. The interpreter may be used - in any thread and will run in whichever thread calls - ``interp.run()``. See "Interpreter Isolated Mode" below for - an explanation of the "isolated" parameter. + Initialize a new Python interpreter and return it. + It will remain idle until something is run in it and always + run in its own thread. -The module also provides the following class:: +The module also provides the following classes:: class Interpreter(id): @@ -751,11 +630,6 @@ The module also provides the following class:: The interpreter's ID. (read-only) - isolated -> bool: - - Whether or not the interpreter is operating in "isolated" mode. - (read-only) - is_running() -> bool: Return whether or not the interpreter is currently executing @@ -769,39 +643,32 @@ The module also provides the following class:: This may not be called on an already running interpreter. Doing so results in a RuntimeError. - run(source_str, /, *, channels=None): + run(source_str, /, *, shared=None) -> Status: - Run the provided Python source code in the interpreter. If - the "channels" keyword argument is provided (and is a mapping - of attribute names to channels) then it is added to the - interpreter's execution namespace (the interpreter's - "__main__" module). If any of the values are not RecvChannel - or SendChannel instances then ValueError gets raised. + Run the provided Python source code in the interpreter and + return a Status object that tracks when it finishes. + + If the "shared" keyword argument is provided (and is a mapping + of attribute name keys) then each key-value pair is added to + the interpreter's execution namespace (the interpreter's + "__main__" module). If any of the values are not a shareable + object (see below) then ValueError gets raised. This may not be called on an already running interpreter. Doing so results in a RuntimeError. - A "run()" call is similar to a function call. 
Once it - completes, the code that called "run()" continues executing - (in the original interpreter). Likewise, if there is any - uncaught exception then it effectively (see below) propagates - into the code where ``run()`` was called. However, unlike - function calls (but like threads), there is no return value. - If any value is needed, pass it out via a channel. + A "run()" call is similar to a Thread.start() call. That code + starts running in a background thread and "run()" returns. At + that point, the code that called "run()" continues executing + (in the original interpreter). If any "return" value is + needed, pass it out via a pipe (os.pipe()). If there is any + uncaught exception then the returned Status object will expose it. - The big difference from functions is that "run()" executes - the code in an entirely different interpreter, with entirely - separate state. The state of the current interpreter in the - current OS thread is swapped out with the state of the target - interpreter (the one that will execute the code). When the - target finishes executing, the original interpreter gets - swapped back in and its execution resumes. - - So calling "run()" will effectively cause the current Python - thread to pause. Sometimes you won't want that pause, in - which case you should make the "run()" call in another thread. - To do so, add a function that calls "run()" and then run that - function in a normal "threading.Thread". + The big difference from functions or threading.Thread is that + "run()" executes the code in an entirely different interpreter, + with entirely separate state. The state of the current + interpreter in the original OS thread does not affect that of + the target interpreter (the one that will execute the code). Note that the interpreter's state is never reset, neither before "run()" executes the code nor after. Thus the @@ -818,38 +685,88 @@ The module also provides the following class:: Supported code: source text. + class Status: + + # This is similar to concurrent.futures.Future. + + wait(timeout=None): + + Block until the requested work has finished. + + done() -> bool: + + Has the requested work completed (or failed)? + + exception() -> Exception | None: + + Return the exception raised by the requested work, if any. + If the work has not completed yet then ``NotFinishedError`` + is raised. + Uncaught Exceptions ------------------- Regarding uncaught exceptions in ``Interpreter.run()``, we noted that -they are "effectively" propagated into the code where ``run()`` was -called. To prevent leaking exceptions (and tracebacks) between -interpreters, we create a surrogate of the exception and its traceback -(see ``traceback.TracebackException``), set it to ``__cause__`` on a -new ``RunFailedError``, and raise that. +they are exposed via the returned ``Status`` object. To prevent leaking +exceptions (and tracebacks) between interpreters, we create a surrogate +of the exception and its traceback +(see ``traceback.TracebackException``). This is returned by +``Status.exception()``. ``Status.wait()`` set it to ``__cause__`` +on a new ``RunFailedError``, and raise that. Raising (a proxy of) the exception directly is problematic since it's -harder to distinguish between an error in the ``run()`` call and an +harder to distinguish between an error in the ``wait()`` call and an uncaught exception from the subinterpreter. -.. _interpreters-is-shareable: -.. _interpreters-create-channel: -.. _interpreters-list-all-channels: -.. _interpreters-RecvChannel: -.. 
_interpreters-SendChannel: - -API for sharing data +API For Sharing Data -------------------- -Subinterpreters are less useful without a mechanism for sharing data +As discussed in `Shared Data`_ above, multiple interpreter support +is less useful without a mechanism for sharing data (communicating) between them. Sharing actual Python objects between interpreters, however, has enough potential problems that we are avoiding support -for that here. Instead, only minimum set of types will be supported. -Initially this will include ``None``, ``bytes``, ``str``, ``int``, -and channels. Further types may be supported later. +for that in this proposal. Nor, as mentioned earlier, are we adding +anything more than the most minimal mechanism for communication. -The ``interpreters`` module provides a function that users may call -to determine whether an object is shareable or not:: +That very basic mechanism, using pipes (see ``os.pipe()``), will allow +users to send data (bytes) from one interpreter to another. We'll +take a closer look in a moment. Fundamentally, it's a simple +application of the underlying sharing capability proposed here. + +The various aspects of the approach, including keeping the API minimal, +helps us avoid further exposing any underlying complexity +to Python users. + +.. _interpreters-is-shareable: + +Shareable Objects +''''''''''''''''' + +A "shareable" object is one that the runtime knows how to safely "share" +between interpreters. For now this actually means that a copy of the +object is provided to the second interpreter. Legitimate sharing is +feasible but beyond the scope of this proposal. + +In fact, this proposal only covers very minimal "sharing" of a handful +of simple, immutable object types. We will initially limit the types +that are shareable to the following: + +* ``None`` +* ``bytes`` +* ``str`` +* ``int`` + +Support for other basic types (e.g. ``bool``, ``float``, ``Ellipsis``) +will be added later, separately. + +Limiting the initial shareable types is a practical matter, reducing +the potential complexity of the initial implementation. There are a +number of solutions we may pursue in the future to expand supported +objects and object sharing strategies. + +However, this PEP does provide one concrete addition related to +shareable objects. The ``interpreters`` module provides a function +that users may call to determine whether an object is shareable or not:: is_shareable(obj) -> bool: @@ -859,141 +776,83 @@ to determine whether an object is shareable or not:: be shared in a cross-interpreter way, whether via a proxy, a copy, or some other means. -This proposal provides two ways to share such objects between +How Sharing Works +''''''''''''''''' + +In this propsal, shareable objects are used with ``Interpreter.run()``. +The steps look something like this: + +1. a "shareable" object is mapped to an identifier in some container +2. that mapping is passed as the "shared" argument in the + ``Interpreter.run()`` call +3. the mapped object is converted to an object that the target + interpreter may safely use +4. that object is bound to the mapped name in the target interpreter's + ``__main__`` module, where the running code has access to it + +The critical part is what happens in step 3. The object must be +converted to some cross-interpreter-safe data (its raw data or even +a pointer). Then that data must be converted back into an object +for the target interpreter to use, likely a new object. 
For example, +an ``int`` object could be converted to the underlying C ``long`` value +and then back into a Python ``int`` object. + +To make this work, the intermediate data (and any associated mutable +shared state) will be managed by the Python runtime, not by any of the interpreters. -First, channels may be passed to ``run()`` via the ``channels`` -keyword argument, where they are effectively injected into the target -interpreter's ``__main__`` module. While passing arbitrary shareable -objects this way is possible, doing so is mainly intended for sharing -meta-objects (e.g. channels) between interpreters. It is less useful -to pass other objects (like ``bytes``) to ``run`` directly. +The underlying runtime capability that ``Interpreter.run()`` uses is +what enables data/object "sharing", and is available for use elsewhere +in the runtime. In fact, it was used in the implementation of the +"channels" that were part of an earlier version of this PEP. +Likewise, this runtime functionality facilitates most of the possible +solutions to which `Shared Data`_ alluded. Thus any separate effort +to introduce effective means for communicating and sharing data will +be well served by the underlying functionality proposed here. -Second, the main mechanism for sharing objects (i.e. their data) between -interpreters is through channels. A channel is a simplex FIFO similar -to a pipe. The main difference is that channels can be associated with -zero or more interpreters on either end. Like queues, which are also -many-to-many, channels are buffered (though they also offer methods -with unbuffered semantics). +.. XXX Add Interpreter.set_on___main__() and drop the "shared" arg? -Python objects are not shared between interpreters. However, in some -cases data those objects wrap is actually shared and not just copied. -One example might be :pep:`3118` buffers. In those cases the object in the -original interpreter is kept alive until the shared data in the other -interpreter is no longer used. Then object destruction can happen like -normal in the original interpreter, along with the previously shared -data. +Communicating Through OS Pipes +'''''''''''''''''''''''''''''' -The ``interpreters`` module provides the following functions related -to channels:: +As noted, this proposal enables a very basic mechanism for +communicating between interpreters, which makes use of +``Interpreter.run()`` and shareable objects: - create_channel() -> (RecvChannel, SendChannel): +1. interpreter A calls ``os.pipe()`` to get a read/write pair + of file descriptors (both shareable ``int`` objects) +2. interpreter A calls ``run()`` on interpreter B, passing + the read FD via the "shared" argument +3. interpreter A writes some bytes to the write FD +4. interpreter B reads those bytes - Create a new channel and return (recv, send), the RecvChannel - and SendChannel corresponding to the ends of the channel. - - Both ends of the channel are supported "shared" objects (i.e. - may be safely shared by different interpreters. Thus they - may be passed as keyword arguments to "Interpreter.run()". - - list_all_channels() -> [(RecvChannel, SendChannel)]: - - Return a list of all open channel-end pairs. - -The module also provides the following channel-related classes:: - - class RecvChannel(id): - - The receiving end of a channel. An interpreter may use this to - receive objects from another interpreter. At first only a few - of the simple, immutable builtin types will be supported. - - id -> int: - - The channel's unique ID. 
This is shared with the "send" end. - - recv(): - - Return the next object from the channel. If none have been - sent then wait until the next send. - - At the least, the object will be equivalent to the sent object. - That will almost always mean the same type with the same data, - though it could also be a compatible proxy. Regardless, it may - use a copy of that data or actually share the data. - - recv_nowait(default=None): - - Return the next object from the channel. If none have been - sent then return the default. Otherwise, this is the same - as the "recv()" method. +Several of the earlier examples demonstrate this, such as +`Synchronize using an OS pipe`_. - class SendChannel(id): +Interpreter Restrictions +======================== - The sending end of a channel. An interpreter may use this to - send objects to another interpreter. At first only a few of - the simple, immutable builtin types will be supported. - - id -> int: - - The channel's unique ID. This is shared with the "recv" end. - - send(obj): - - Send the object (i.e. its data) to the "recv" end of the - channel. Wait until the object is received. If the object - is not shareable then ValueError is raised. - - send_nowait(obj): - - Send the object to the "recv" end of the channel. This - behaves the same as "send()", except for the waiting part. - If no interpreter is currently receiving (waiting on the - other end) then queue the object and return False. Otherwise - return True. - -Channel Lifespan ----------------- - -A channel is automatically closed and destroyed once there are no more -Python objects (e.g. ``RecvChannel`` and ``SendChannel``) referring -to it. So it is effectively triggered via garbage-collection of those -objects.. - - -.. _isolated-mode: - -Interpreter "Isolated" Mode -=========================== - -By default, every new interpreter created by ``interpreters.create()`` -has specific restrictions on any code it runs. This includes the +Every new interpreter created by ``interpreters.create()`` +now has specific restrictions on any code it runs. This includes the following: -* importing an extension module fails if it does not implement the - :pep:`489` API -* new threads of any kind are not allowed +* importing an extension module fails if it does not implement + multi-phase init +* daemon threads may not be created * ``os.fork()`` is not allowed (so no ``multiprocessing``) -* ``os.exec*()``, AKA "fork+exec", is not allowed (so no ``subprocess``) +* ``os.exec*()`` is not allowed + (but "fork+exec", a la ``subprocess`` is okay) -This represents the full "isolated" mode of subinterpreters. It is -applied when ``interpreters.create()`` is called with the "isolated" -keyword-only argument set to ``True`` (the default). If -``interpreters.create(isolated=False)`` is called then none of those -restrictions is applied. - -One advantage of this approach is that it allows extension maintainers -to check subinterpreter compatibility before they implement the :pep:`489` -API. Also note that ``isolated=False`` represents the historical -behavior when using the existing subinterpreters C-API, thus providing -backward compatibility. For the existing C-API itself, the default -remains ``isolated=False``. The same is true for the "main" module, so +Note that interpreters created with the existing C-API do not have these +restrictions. The same is true for the "main" interpreter, so existing use of Python will not change. +.. Mention the similar restrictions in PEP 684? 
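For illustration, here is a rough sketch of how the import restriction
would surface in the calling interpreter, assuming a hypothetical
extension module ``spam_ext`` that has not implemented multi-phase init::

    interp = interpreters.create()
    try:
        # The subinterpreter's import machinery rejects the
        # incompatible module with ImportError; wait() surfaces
        # that as a RunFailedError (see "Uncaught Exceptions").
        interp.run('import spam_ext').wait()
    except interpreters.RunFailedError as exc:
        # exc.__cause__ is a surrogate of the ImportError raised
        # in the subinterpreter.
        assert isinstance(exc.__cause__, ImportError)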
+ We may choose to later loosen some of the above restrictions or provide a way to enable/disable granular restrictions individually. Regardless, -requiring :pep:`489` support from extension modules will always be a +requiring multi-phase init from extension modules will always be a default restriction. @@ -1003,36 +862,31 @@ Documentation The new stdlib docs page for the ``interpreters`` module will include the following: -* (at the top) a clear note that subinterpreter support in extension - modules is not required +* (at the top) a clear note that support for multiple interpreters + is not required from extension modules * some explanation about what subinterpreters are -* brief examples of how to use subinterpreters and channels -* a summary of the limitations of subinterpreters +* brief examples of how to use multiple interpreters + (and communicating between them) +* a summary of the limitations of using multiple interpreters * (for extension maintainers) a link to the resources for ensuring - subinterpreter compatibility + multiple interpreters compatibility * much of the API information in this PEP -A separate page will be added to the docs for resources to help -extension maintainers ensure their modules can be used safely in -subinterpreters, under `Extending Python `_. The page -will include the following information: +Docs about resources for extension maintainers already exist on the +`Isolating Extension Modules `_ howto page. Any +extra help will be added there. For example, it may prove helpful +to discuss strategies for dealing with linked libraries that keep +their own subinterpreter-incompatible global state. -* a summary about subinterpreters (similar to the same in the new - ``interpreters`` module page and in the C-API docs) -* an explanation of how extension modules can be impacted -* how to implement :pep:`489` support -* how to move from global module state to per-interpreter -* how to take advantage of :pep:`384` (heap types), :pep:`3121` - (module state), and :pep:`573` -* strategies for dealing with 3rd party C libraries that keep their - own subinterpreter-incompatible global state +.. _isolation-howto: + https://docs.python.org/3/howto/isolating-extensions.html Note that the documentation will play a large part in mitigating any negative impact that the new ``interpreters`` module might have on extension module maintainers. -Also, the ``ImportError`` for incompatible extgension modules will have -a message that clearly says it is due to missing subinterpreter +Also, the ``ImportError`` for incompatible extension modules will have +a message that clearly says it is due to missing multiple interpreters compatibility and that extensions are not required to provide it. This will help set user expectations properly. @@ -1058,14 +912,6 @@ This suffers from the same problem as sharing objects between interpreters via queues. The minimal solution (running a source string) is sufficient for us to get the feature out where it can be explored. -timeout arg to recv() and send() --------------------------------- - -Typically functions that have a ``block`` argument also have a -``timeout`` argument. It sometimes makes sense to do likewise for -functions that otherwise block, like the channel ``recv()`` and -``send()`` methods. We can add it later if needed. 
- Interpreter.run_in_thread() --------------------------- @@ -1079,11 +925,11 @@ Synchronization Primitives The ``threading`` module provides a number of synchronization primitives for coordinating concurrent operations. This is especially necessary due to the shared-state nature of threading. In contrast, -subinterpreters do not share state. Data sharing is restricted to -channels, which do away with the need for explicit synchronization. If -any sort of opt-in shared state support is added to subinterpreters in -the future, that same effort can introduce synchronization primitives -to meet that need. +interpreters do not share state. Data sharing is restricted to the +runtime's shareable objects capability, which does away with the need +for explicit synchronization. If any sort of opt-in shared state +support is added to CPython's interpreters in the future, that same +effort can introduce synchronization primitives to meet that need. CSP Library ----------- @@ -1095,18 +941,18 @@ minimalist goals of this proposal. Syntactic Support ----------------- -The ``Go`` language provides a concurrency model based on CSP, so -it's similar to the concurrency model that subinterpreters support. -However, ``Go`` also provides syntactic support, as well several builtin -concurrency primitives, to make concurrency a first-class feature. -Conceivably, similar syntactic (and builtin) support could be added to -Python using subinterpreters. However, that is *way* outside the scope -of this PEP! +The ``Go`` language provides a concurrency model based on CSP, +so it's similar to the concurrency model that multiple interpreters +support. However, ``Go`` also provides syntactic support, as well as +several builtin concurrency primitives, to make concurrency a +first-class feature. Conceivably, similar syntactic (and builtin) +support could be added to Python using interpreters. However, +that is *way* outside the scope of this PEP! Multiprocessing --------------- -The ``multiprocessing`` module could support subinterpreters in the same +The ``multiprocessing`` module could support interpreters in the same way it supports threads and processes. In fact, the module's maintainer, Davin Potts, has indicated this is a reasonable feature request. However, it is outside the narrow scope of this PEP. @@ -1114,17 +960,17 @@ request. However, it is outside the narrow scope of this PEP. C-extension opt-in/opt-out -------------------------- -By using the ``PyModuleDef_Slot`` introduced by :pep:`489`, we could easily -add a mechanism by which C-extension modules could opt out of support -for subinterpreters. Then the import machinery, when operating in -a subinterpreter, would need to check the module for support. It would -raise an ImportError if unsupported. +By using the ``PyModuleDef_Slot`` introduced by :pep:`489`, we could +easily add a mechanism by which C-extension modules could opt out of +multiple interpreter support. Then the import machinery, when operating +in a subinterpreter, would need to check the module for support. +It would raise an ImportError if unsupported. -Alternately we could support opting in to subinterpreter support. +Alternately we could support opting in to multiple interpreters support. However, that would probably exclude many more modules (unnecessarily) than the opt-out approach. Also, note that :pep:`489` defined that an -extension's use of the PEP's machinery implies support for -subinterpreters. +extension's use of the PEP's machinery implies multiple interpreters +support. 
The scope of adding the ModuleDef slot and fixing up the import machinery is non-trivial, but could be worth it. It all depends on @@ -1132,18 +978,6 @@ how many extension modules break under subinterpreters. Given that there are relatively few cases we know of through mod_wsgi, we can leave this for later. -Poisoning channels ------------------- - -CSP has the concept of poisoning a channel. Once a channel has been -poisoned, any ``send()`` or ``recv()`` call on it would raise a special -exception, effectively ending execution in the interpreter that tried -to use the poisoned channel. - -This could be accomplished by adding a ``poison()`` method to both ends -of the channel. The ``close()`` method can be used in this way -(mostly), but these semantics are relatively specialized and can wait. - Resetting __main__ ------------------ @@ -1194,22 +1028,21 @@ A possible solution is to add an ``Interpreter.reset()`` method. This would put the interpreter back into the state it was in when newly created. If called on a running interpreter it would fail (hence the main interpreter could never be reset). This would likely be more -efficient than creating a new subinterpreter, though that depends on -what optimizations will be made later to subinterpreter creation. +efficient than creating a new interpreter, though that depends on +what optimizations will be made later to interpreter creation. While this would potentially provide functionality that is not otherwise available from Python code, it isn't a fundamental functionality. So in the spirit of minimalism here, this can wait. Regardless, I doubt it would be controversial to add it post-PEP. -File descriptors and sockets in channels ----------------------------------------- +Shareable file descriptors and sockets +-------------------------------------- Given that file descriptors and sockets are process-global resources, -support for passing them through channels is a reasonable idea. They -would be a good candidate for the first effort at expanding the types -that channels support. They aren't strictly necessary for the initial -API. +making them shareable is a reasonable idea. They would be a good +candidate for the first effort at expanding the supported shareable +types. They aren't strictly necessary for the initial API. Integration with async ---------------------- @@ -1224,45 +1057,25 @@ Per Antoine Pitrou [async]_:: FIFOs to be able to synchronize on something an event loop can wait on (probably a file descriptor?). -A possible solution is to provide async implementations of the blocking -channel methods (``recv()``, and ``send()``). However, -the basic functionality of subinterpreters does not depend on async and -can be added later. +The basic functionality of multiple interpreters support does not depend +on async and can be added later. -Alternately, "readiness callbacks" could be used to simplify use in -async scenarios. This would mean adding an optional ``callback`` -(kw-only) parameter to the ``recv_nowait()`` and ``send_nowait()`` -channel methods. The callback would be called once the object was sent -or received (respectively). +channels +-------- -(Note that making channels buffered makes readiness callbacks less -important.) - -Support for iteration ---------------------- - -Supporting iteration on ``RecvChannel`` (via ``__iter__()`` or -``_next__()``) may be useful. A trivial implementation would use the -``recv()`` method, similar to how files do iteration. 
Since this isn't
-a fundamental capability and has a simple analog, adding iteration
-support can wait until later.
-
-Channel context managers
------------------------
-
-Context manager support on ``RecvChannel`` and ``SendChannel`` may be
-helpful. The implementation would be simple, wrapping a call to
-``close()`` (or maybe ``release()``) like files do. As with iteration,
-this can wait.
+We could introduce some relatively efficient, native data types for
+passing data between interpreters, to use instead of OS pipes. Earlier
+versions of this PEP introduced one such mechanism, called "channels".
+This can be pursued later.
Pipes and Queues ----------------
-With the proposed object passing mechanism of "channels", other similar
-basic types aren't required to achieve the minimal useful functionality
-of subinterpreters. Such types include pipes (like unbuffered channels,
-but one-to-one) and queues (like channels, but more generic). See below
-in `Rejected Ideas`_ for more information.
+With the proposed object passing mechanism of ``os.pipe()``, other similar
+basic types aren't strictly required to achieve the minimal useful
+functionality of multiple interpreters. Such types include pipes
+(like unbuffered channels, but one-to-one) and queues (like channels,
+but more generic). See below in `Rejected Ideas`_ for more information.
Even though these types aren't part of this proposal, they may still be useful in the context of concurrency. Adding them later is entirely @@ -1270,30 +1083,10 @@ reasonable.
They could be trivially implemented as wrappers around channels. Alternatively they could be implemented for efficiency at the same low level as channels.
-Return a lock from send()
------------------------
-
-When sending an object through a channel, you don't have a way of knowing
-when the object gets received on the other end. One way to work around
-this is to return a locked ``threading.Lock`` from ``SendChannel.send()``
-that unlocks once the object is received.
-
-Alternately, the proposed ``SendChannel.send()`` (blocking) and
-``SendChannel.send_nowait()`` provide an explicit distinction that is
-less likely to confuse users.
-
-Note that returning a lock would matter for buffered channels
-(i.e. queues). For unbuffered channels it is a non-issue.
-
-Support prioritization in channels
----------------------------------
-
-A simple example is ``queue.PriorityQueue`` in the stdlib.
-
Support inheriting settings (and more?) ---------------------------------------
-Folks might find it useful, when creating a new subinterpreter, to be
+Folks might find it useful, when creating a new interpreter, to be
able to indicate that they would like some things "inherited" by the new interpreter. The mechanism could be a strict copy or it could be copy-on-write. The motivating example is with the warnings module @@ -1309,7 +1102,7 @@ Make exceptions shareable
-------------------------
Exceptions are propagated out of ``run()`` calls, so it isn't a big
-leap to make them shareable in channels. However, as noted elsewhere,
+leap to make them shareable. However, as noted elsewhere,
it isn't essential (or particularly common), so we can wait on doing that. @@ -1332,13 +1125,14 @@ It may also make sense to have ``RunFailedError.__cause__`` be a
descriptor that does the lazy deserialization (and set ``__cause__``) on the ``RunFailedError`` instance.
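A minimal sketch of that lazy-deserialization idea follows, assuming the
snapshot is a pickled copy of the original exception; the attribute name and
constructor signature are invented for illustration, and a real implementation
would also have to cooperate with the C-level handling of ``__cause__``::

    import pickle

    class _LazilyDeserializedCause:
        # Non-data descriptor: unpickle the snapshot the first time
        # __cause__ is read, then cache the result in the instance
        # dict so later lookups bypass the descriptor entirely.
        def __get__(self, exc, owner=None):
            if exc is None:
                return self
            cause = pickle.loads(exc._cause_snapshot)
            exc.__dict__["__cause__"] = cause
            return cause

    class RunFailedError(RuntimeError):
        __cause__ = _LazilyDeserializedCause()

        def __init__(self, msg, cause_snapshot):
            super().__init__(msg)
            self._cause_snapshot = cause_snapshot

    snapshot = pickle.dumps(ZeroDivisionError("division by zero"))
    err = RunFailedError("uncaught exception in subinterpreter", snapshot)
    print(repr(err.__cause__))  # deserialized on first access
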
-Serialize everything through channels
-------------------------------------
+Make everything shareable through serialization
+-----------------------------------------------
-We could use pickle (or marshal) to serialize everything sent through
-channels. Doing this is potentially inefficient, but it may be a
-matter of convenience in the end. We can add it later, but trying to
-remove it later would be significantly more painful.
+We could use pickle (or marshal) to serialize everything and thus
+make it all shareable. Doing this is potentially inefficient,
+but it may be a matter of convenience in the end.
+We can add it later, but trying to remove it later
+would be significantly more painful.
Return a value from ``run()`` ----------------------------- @@ -1355,23 +1149,13 @@ Add a "tp_share" type slot
This would replace the current global registry for shareable types.
-Expose which interpreters have actually *used* a channel end.
-------------------------------------------------------------
-
-Currently we associate interpreters upon access to a channel. We would
-keep a separate association list for "upon use" and expose that.
-
Add a shareable synchronization primitive -----------------------------------------
This would be ``_threading.Lock`` (or something like it) where
-interpreters would actually share the underlying mutex. This would
-provide much better efficiency than blocking channel ops. The main
-concern is that locks and channels don't mix well (as learned in Go).
-
-Note that the same functionality as a lock can be achieved by passing
-some sort of "token" object through a channel. "send()" would be
-equivalent to releasing the lock and "recv()" to acquiring the lock.
+interpreters would actually share the underlying mutex. The main
+concern is that locks and isolated interpreters may not mix well
+(as learned in Go).
We can add this later, without much trouble, if it proves desirable. @@ -1392,35 +1176,10 @@ make sense to treat them specially when it comes to propagation from
We aren't going to worry about handling them differently. Threads already ignore ``SystemExit``, so for now we will follow that pattern.
-Add an explicit release() and close() to channel end classes
------------------------------------------------------------
-
-It can be convenient to have an explicit way to close a channel against
-further global use. Likewise it could be useful to have an explicit
-way to release one of the channel ends relative to the current
-interpreter. Among other reasons, such a mechanism is useful for
-communicating overall state between interpreters without the extra
-boilerplate that passing objects through a channel directly would
-require.
-
-The challenge is getting automatic release/close right without making
-it hard to understand. This is especially true when dealing with a
-non-empty channel. We should be able to get by without release/close
-for now.
-
-Add SendChannel.send_buffer()
-----------------------------
-
-This method would allow no-copy sending of an object through a channel
-if it supports the :pep:`3118` buffer protocol (e.g. memoryview).
-
-Support for this is not fundamental to channels and can be added on
-later without much disruption.
-
Auto-run in a thread --------------------
-The PEP proposes a hard separation between subinterpreters and threads:
+The PEP proposes a hard separation between interpreters and threads:
if you want to run in a thread you must create the thread yourself and call ``run()`` in it.
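For reference, under the proposed API that boilerplate looks roughly like
this::

    import threading
    import interpreters

    interp = interpreters.create()
    # Run the code in a separate thread; run() itself blocks the
    # thread it is called in until the code finishes.
    t = threading.Thread(target=interp.run, args=("print('spam')",))
    t.start()
    t.join()
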
However, it might be convenient if ``run()`` could do that for you, meaning there would be less boilerplate. @@ -1434,18 +1193,6 @@ to ``run()`` to allow the run-in-the-current-thread operation. Rejected Ideas ============== -Explicit channel association ----------------------------- - -Interpreters are implicitly associated with channels upon ``recv()`` and -``send()`` calls. They are de-associated with ``release()`` calls. The -alternative would be explicit methods. It would be either -``add_channel()`` and ``remove_channel()`` methods on ``Interpreter`` -objects or something similar on channel objects. - -In practice, this level of management shouldn't be necessary for users. -So adding more explicit support would only add clutter to the API. - Use pipes instead of channels ----------------------------- @@ -1509,32 +1256,32 @@ Rejected possible solutions: (unnecessary complexity?) * throw the exception away and expect users to deal with unhandled exceptions explicitly in the script they pass to ``run()`` - (they can pass error info out via channels); with threads you have - to do something similar + (they can pass error info out via ``os.pipe()``); + with threads you have to do something similar Always associate each new interpreter with its own thread --------------------------------------------------------- -As implemented in the C-API, a subinterpreter is not inherently tied to +As implemented in the C-API, an interpreter is not inherently tied to any thread. Furthermore, it will run in any existing thread, whether created by Python or not. You only have to activate one of its thread states (``PyThreadState``) in the thread first. This means that the same thread may run more than one interpreter (though obviously not at the same time). -The proposed module maintains this behavior. Subinterpreters are not +The proposed module maintains this behavior. Interpreters are not tied to threads. Only calls to ``Interpreter.run()`` are. However, one of the key objectives of this PEP is to provide a more human- centric concurrency model. With that in mind, from a conceptual standpoint the module *might* be easier to understand if each -subinterpreter were associated with its own thread. +interpreter were associated with its own thread. That would mean ``interpreters.create()`` would create a new thread and ``Interpreter.run()`` would only execute in that thread (and nothing else would). The benefit is that users would not have to wrap ``Interpreter.run()`` calls in a new ``threading.Thread``. Nor would they be in a position to accidentally pause the current -interpreter (in the current thread) while their subinterpreter +interpreter (in the current thread) while their interpreter executes. The idea is rejected because the benefit is small and the cost is high. @@ -1547,16 +1294,6 @@ require extra runtime modifications. It would also make the module's implementation overly complicated. Finally, it might not even make the module easier to understand. -Only associate interpreters upon use ------------------------------------- - -Associate interpreters with channel ends only once ``recv()``, -``send()``, etc. are called. - -Doing this is potentially confusing and also can lead to unexpected -races where a channel is auto-closed before it can be used in the -original (creating) interpreter. - Add a "reraise" method to RunFailedError ---------------------------------------- @@ -1620,12 +1357,12 @@ you go: However, fully implementing it will be almost trivial. 
* the low-level module is mostly complete. The bulk of the implementation was merged into master in December 2018 as the
- "_xxsubinterpreters" module (for the sake of testing subinterpreter
- functionality). Only 3 parts of the implementation remain:
- "send_wait()", "send_buffer()", and exception propagation. All three
- have been mostly finished, but were blocked by work related to ceval.
- That blocker is basically resolved now and finishing the low-level
- will not require extensive work.
+ "_xxsubinterpreters" module (for the sake of testing multiple
+ interpreter functionality). Only 3 parts of the implementation
+ remain: "send_wait()", "send_buffer()", and exception propagation.
+ All three have been mostly finished, but were blocked by work
+ related to ceval. That blocker is basically resolved now and
+ finishing the low-level will not require extensive work.
* all necessary C-API work has been finished * all anticipated work in the runtime has been finished @@ -1644,14 +1381,6 @@ References
https://en.wikipedia.org/wiki/Communicating_sequential_processes https://github.com/futurecore/python-csp
-.. [fifo]
- https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Pipe
- https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue
- https://docs.python.org/3/library/queue.html#module-queue
- http://stackless.readthedocs.io/en/2.7-slp/library/stackless/channels.html
- https://golang.org/doc/effective_go.html#sharing
- http://www.jtolds.com/writing/2016/03/go-channels-are-bad-and-you-should-feel-bad/
-
.. [caveats] https://docs.python.org/3/c-api/init.html#bugs-and-caveats @@ -1700,9 +1429,6 @@ References
.. _nathaniel-asyncio: https://mail.python.org/archives/list/python-dev@python.org/message/TUEAZNZHVJGGLL4OFD32OW6JJDKM6FAS/
-.. _extension-docs:
- https://docs.python.org/3/extending/index.html
-
* mp-conn https://docs.python.org/3/library/multiprocessing.html#connection-objects