PEP: 554
Title: Multiple Interpreters in the Stdlib
Author: Eric Snow
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 2017-09-05
Python-Version: 3.7
Post-History:


Abstract
========

CPython has supported subinterpreters, with increasing levels of
support, since version 1.5.  The feature has been available via the
C-API. [c-api]_  Subinterpreters operate in
`relative isolation from one another <Interpreter Isolation_>`_,
which provides the basis for an
`alternative concurrency model <Concurrency_>`_.

This proposal introduces the stdlib ``interpreters`` module.  The
module will be `provisional <Provisional Status_>`_.  It exposes the
basic functionality of subinterpreters already provided by the C-API.


Proposal
========

The ``interpreters`` module will be added to the stdlib.  It will
provide a high-level interface to subinterpreters and wrap the
low-level ``_interpreters`` module.  The proposed API is inspired by
the ``threading`` module.  See the `Examples`_ section for concrete
usage and use cases.

The module provides the following functions:

``list()``::

   Return a list of all existing interpreters.

``get_current()``::

   Return the currently running interpreter.

``create()``::

   Initialize a new Python interpreter and return it.  The
   interpreter will be created in the current thread and will remain
   idle until something is run in it.  The interpreter may be used in
   any thread and will run in whichever thread calls ``interp.run()``.

The module also provides the following classes:

``Interpreter(id)``::

   id:

      The interpreter's ID (read-only).

   is_running():

      Return whether or not the interpreter is currently executing
      code.  Calling this on the current interpreter will always
      return True.

   destroy():

      Finalize and destroy the interpreter.

      This may not be called on an already running interpreter.
      Doing so results in a RuntimeError.

   run(source_str):

      Run the provided Python source code in the interpreter.

      This may not be called on an already running interpreter.
      Doing so results in a RuntimeError.

      A "run()" call is quite similar to any other function call.
      Once it completes, the code that called "run()" continues
      executing (in the original interpreter).  Likewise, if there is
      any uncaught exception, it propagates into the code where
      "run()" was called.

      The big difference is that "run()" executes the code in an
      entirely different interpreter, with entirely separate state.
      The state of the current interpreter in the current OS thread
      is swapped out with the state of the target interpreter (the
      one that will execute the code).  When the target finishes
      executing, the original interpreter gets swapped back in and
      its execution resumes.

      So calling "run()" will effectively cause the current Python
      thread to pause.  Sometimes you won't want that pause, in which
      case you should make the "run()" call in another thread.  To do
      so, add a function that calls "run()" and then run that
      function in a normal "threading.Thread".

      Note that the interpreter's state is never reset, neither
      before "run()" executes the code nor after.  Thus the
      interpreter state is preserved between calls to "run()".  This
      includes "sys.modules", the "builtins" module, and the internal
      state of C extension modules.

      Also note that "run()" executes in the namespace of the
      "__main__" module, just like scripts, the REPL, "-m", and "-c".
      Just as the interpreter's state is not ever reset, the
      "__main__" module is never reset.  You can imagine
      concatenating the code from each "run()" call into one long
      script.  This is the same as how the REPL operates.

      Supported code: source text.
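
      For example, state persists across calls (a small sketch; the
      name "spam" is arbitrary and the module is assumed to be
      imported as "interpreters"):

          interp = interpreters.create()
          interp.run('spam = 42')    # binds "spam" in the target's __main__
          interp.run('print(spam)')  # prints 42; nothing was reset
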
   get_fifo(name):

      Return the FIFO object with the given name that is associated
      with this interpreter.  If no such FIFO exists then raise
      KeyError.  The FIFO will be either a "FIFOReader" or a
      "FIFOWriter", depending on which "add_*_fifo()" was called.

   list_fifos():

      Return a list of all fifos associated with the interpreter.

   add_recv_fifo(name=None):

      Create a new FIFO, associate the two ends with the involved
      interpreters, and return the side associated with the
      interpreter in which "add_recv_fifo()" was called.  A
      FIFOReader gets tied to this interpreter.  A FIFOWriter gets
      tied to the interpreter that called "add_recv_fifo()".

      The FIFO's name is set to the provided value.  If no name is
      provided then a dynamically generated one is used.

      If a FIFO with the given name is already associated with this
      interpreter (or with the one in which "add_recv_fifo()" was
      called) then raise KeyError.

   add_send_fifo(name=None):

      Create a new FIFO, associate the two ends with the involved
      interpreters, and return the side associated with the
      interpreter in which "add_send_fifo()" was called.  A
      FIFOWriter gets tied to this interpreter.  A FIFOReader gets
      tied to the interpreter that called "add_send_fifo()".

      The FIFO's name is set to the provided value.  If no name is
      provided then a dynamically generated one is used.

      If a FIFO with the given name is already associated with this
      interpreter (or with the one in which "add_send_fifo()" was
      called) then raise KeyError.

   remove_fifo(name):

      Drop the association between the named FIFO and this
      interpreter.  If the named FIFO is not found then raise
      KeyError.

``FIFOReader(name)``::

   The receiving end of a FIFO.  An interpreter may use this to
   receive objects from another interpreter.  At first only bytes and
   None will be supported.

   name:

      The FIFO's name.

   __next__():

      Return the next bytes object from the pipe.  If none have been
      pushed on then block.

   pop(*, block=True):

      Return the next bytes object from the pipe.  If none have been
      pushed on and "block" is True (the default) then block.
      Otherwise return None.

``FIFOWriter(name)``::

   The sending end of a FIFO.  An interpreter may use this to send
   objects to another interpreter.  At first only bytes and None will
   be supported.

   name:

      The FIFO's name.

   push(object, *, block=True):

      Add the object to the FIFO.  If "block" is true then block
      until the object is popped off.  If the FIFO does not support
      the object's type then TypeError is raised.

About FIFOs
-----------

Subinterpreters are inherently isolated (with caveats explained
below), in contrast to threads.  This enables
`a different concurrency model <Concurrency_>`_ than currently exists
in Python.  `Communicating Sequential Processes`_ (CSP) is the prime
example.

A key component of this approach to concurrency is message passing.
So providing a message/object passing mechanism alongside
``Interpreter`` is a fundamental requirement.  This proposal includes
a basic mechanism upon which more complex machinery may be built.
That basic mechanism draws inspiration from pipes, queues, and CSP's
channels.

The key challenge here is that sharing objects between interpreters
faces complexity due in part to CPython's current memory model.
Furthermore, in this class of concurrency, the ideal is that objects
only exist in one interpreter at a time.  However, this is not
practical for Python, so we initially constrain supported objects to
``bytes`` and ``None``.  There are a number of strategies we may
pursue in the future to expand supported objects and object sharing
strategies.
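
To illustrate how these pieces are meant to fit together, here is a
minimal sketch using the API proposed above.  It is not a definitive
recipe: the FIFO name ``'data'``, the helper thread, and the use of
UTF-8 encoding are incidental choices made for the example::

    import threading
    import interpreters

    interp = interpreters.create()
    # add_recv_fifo() ties the receiving end (a FIFOReader) to the
    # subinterpreter and returns the sending end (a FIFOWriter) here.
    writer = interp.add_recv_fifo('data')

    def consume():
        interp.run("""if True:
            import interpreters
            interp = interpreters.get_current()
            reader = interp.get_fifo('data')
            msg = reader.pop()  # blocks until something is pushed
            while msg is not None:
                print(msg.decode('utf-8'))
                msg = reader.pop()
            """)

    t = threading.Thread(target=consume)
    t.start()
    # Only bytes (and None) are supported at first, so encode first.
    writer.push('spam'.encode('utf-8'))
    writer.push(None)  # lets the loop in the subinterpreter finish
    t.join()

Since ``push()`` blocks until the object is popped, the final
``push(None)`` also acts as a simple synchronization point.
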
Note that the complexity of object sharing increases as
subinterpreters become more isolated, e.g. after GIL removal.  So the
mechanism for message passing needs to be carefully considered.
Keeping the API minimal and initially restricting the supported types
helps us avoid further exposing any underlying complexity to Python
users.


Examples
========

Run isolated code
-----------------

::

    interp = interpreters.create()
    print('before')
    interp.run('print("during")')
    print('after')

Run in a thread
---------------

::

    interp = interpreters.create()
    def run():
        interp.run('print("during")')
    t = threading.Thread(target=run)
    print('before')
    t.start()
    print('after')

Pre-populate an interpreter
---------------------------

::

    interp = interpreters.create()
    interp.run("""if True:
        import some_lib
        import an_expensive_module
        some_lib.set_up()
        """)
    wait_for_request()
    interp.run("""if True:
        some_lib.handle_request()
        """)

Handling an exception
---------------------

::

    interp = interpreters.create()
    try:
        interp.run("""if True:
            raise KeyError
            """)
    except KeyError:
        print("got the error from the subinterpreter")

Synchronize using a FIFO
------------------------

::

    interp = interpreters.create()
    writer = interp.add_recv_fifo('spam')
    def run():
        interp.run("""if True:
            import interpreters
            interp = interpreters.get_current()
            reader = interp.get_fifo('spam')
            reader.pop()
            print("during")
            """)
    t = threading.Thread(target=run)
    print('before')
    t.start()
    print('after')
    writer.push(None)

Sharing a file descriptor
-------------------------

::

    interp = interpreters.create()
    writer = interp.add_recv_fifo('spam')
    reader = interp.add_send_fifo('done')
    def run():
        interp.run("""if True:
            import os
            import interpreters
            interp = interpreters.get_current()
            reader = interp.get_fifo('spam')
            writer = interp.get_fifo('done')
            fd = reader.pop()
            for line in os.fdopen(fd):
                print(line)
            writer.push(None)
            """)
    t = threading.Thread(target=run)
    t.start()
    with open('spamspamspam') as infile:
        writer.push(infile.fileno())
    reader.pop()

Passing objects via pickle
--------------------------

::

    interp = interpreters.create()
    writer = interp.add_recv_fifo('spam')
    interp.run("""if True:
        import pickle
        import interpreters
        interp = interpreters.get_current()
        reader = interp.get_fifo('spam')
        """)
    def run():
        interp.run("""if True:
            data = reader.pop()
            while data is not None:
                obj = pickle.loads(data)
                do_something(obj)
                data = reader.pop()
            """)
    t = threading.Thread(target=run)
    t.start()
    for obj in input:
        data = pickle.dumps(obj)
        writer.push(data)
    writer.push(None)


Rationale
=========

Running code in multiple interpreters provides a useful level of
isolation within the same process.  This can be leveraged in a number
of ways.  Furthermore, subinterpreters provide a well-defined
framework in which such isolation may be extended.

CPython has supported subinterpreters, with increasing levels of
support, since version 1.5.  While the feature has the potential to
be a powerful tool, subinterpreters have suffered from neglect
because they are not available directly from Python.  Exposing the
existing functionality in the stdlib will help reverse the situation.

This proposal is focused on enabling the fundamental capability of
multiple isolated interpreters in the same Python process.  This is a
new area for Python so there is relative uncertainty about the best
tools to provide as companions to subinterpreters.  Thus we minimize
the functionality we add in the proposal as much as possible.

Concerns
--------

* "subinterpreters are not worth the trouble"

Some have argued that subinterpreters do not add sufficient benefit
to justify making them an official part of Python.  Adding features
to the language (or stdlib) has a cost in increasing the size of the
language.  So it must pay for itself.  In this case, subinterpreters
provide a novel concurrency model focused on isolated threads of
execution.  Furthermore, they present an opportunity for changes in
CPython that will allow simultaneous use of multiple CPU cores
(currently prevented by the GIL).

Alternatives to subinterpreters include threading, async, and
multiprocessing.  Threading is limited by the GIL and async isn't the
right solution for every problem (nor for every person).
Multiprocessing is likewise valuable in some but not all situations.
Direct IPC (rather than via the multiprocessing module) provides
similar benefits but with the same caveat.

Notably, subinterpreters are not intended as a replacement for any of
the above.  Certainly they overlap in some areas, but the benefits of
subinterpreters include isolation and (potentially) performance.  In
particular, subinterpreters provide a direct route to an alternate
concurrency model (e.g. CSP) which has found success elsewhere and
will appeal to some Python users.  That is the core value that the
``interpreters`` module will provide.

* "stdlib support for subinterpreters adds extra burden on C
  extension authors"

In the `Interpreter Isolation`_ section below we identify ways in
which isolation in CPython's subinterpreters is incomplete.  Most
notable is extension modules that use C globals to store internal
state.  PEP 3121 and PEP 489 provide a solution for most of the
problem, but one still remains. [petr-c-ext]_  Until that is
resolved, C extension authors will face extra difficulty supporting
subinterpreters.

Consequently, projects that publish extension modules may face an
increased maintenance burden as their users start using
subinterpreters, where their modules may break.  This situation is
limited to modules that use C globals (or use libraries that use C
globals) to store internal state.

Ultimately this comes down to a question of how often it will be a
problem in practice: how many projects would be affected, how often
their users will be affected, what the additional maintenance burden
will be for projects, and what the overall benefit of subinterpreters
is to offset those costs.  The position of this PEP is that the
actual extra maintenance burden will be small and well below the
threshold at which subinterpreters are worth it.


About Subinterpreters
=====================

Interpreter Isolation
---------------------

CPython's interpreters are intended to be strictly isolated from each
other.  Each interpreter has its own copy of all modules, classes,
functions, and variables.  The same applies to state in C, including
in extension modules.  The CPython C-API docs explain more. [caveats]_

However, there are ways in which interpreters share some state.
First of all, some process-global state remains shared:

* file descriptors
* builtin types (e.g. dict, bytes)
* singletons (e.g. None)
* underlying static module data (e.g. functions) for
  builtin/extension/frozen modules

There are no plans to change this.

Second, some isolation is faulty due to bugs or implementations that
did not take subinterpreters into account.  This includes things like
extension modules that rely on C globals.
[cryptography]_  In these cases bugs should be opened (some are
already):

* readline module hook functions (http://bugs.python.org/issue4202)
* memory leaks on re-init (http://bugs.python.org/issue21387)

Finally, some potential isolation is missing due to the current
design of CPython.  Improvements are currently going on to address
gaps in this area:

* interpreters share the GIL
* interpreters share memory management (e.g. allocators, gc)
* GC is not run per-interpreter [global-gc]_
* at-exit handlers are not run per-interpreter [global-atexit]_
* extensions using the ``PyGILState_*`` API are incompatible
  [gilstate]_

Concurrency
-----------

Concurrency is a challenging area of software development.  Decades
of research and practice have led to a wide variety of concurrency
models, each with different goals.  Most center on correctness and
usability.

One class of concurrency models focuses on isolated threads of
execution that interoperate through some message passing scheme.  A
notable example is `Communicating Sequential Processes`_ (CSP), upon
which Go's concurrency is based.  The isolation inherent to
subinterpreters makes them well-suited to this approach.

Existing Usage
--------------

Subinterpreters are not a widely used feature.  In fact, the only
documented case of wide-spread usage is mod_wsgi.  On the one hand,
this case provides confidence that existing subinterpreter support is
relatively stable.  On the other hand, there isn't much of a sample
size from which to judge the utility of the feature.


Provisional Status
==================

The new ``interpreters`` module will be added with "provisional"
status (see PEP 411).  This allows Python users to experiment with
the feature and provide feedback while still allowing us to adjust to
that feedback.  The module will be provisional in Python 3.7 and we
will make a decision before the 3.8 release whether to keep it
provisional, graduate it, or remove it.


Alternate Python Implementations
================================

TBD


Deferred Functionality
======================

In the interest of keeping this proposal minimal, the following
functionality has been left out for future consideration.  Note that
this is not a judgement against any of said capability, but rather a
deferment.  That said, each is arguably valid.

Interpreter.call()
------------------

It would be convenient to run existing functions in subinterpreters
directly.  ``Interpreter.run()`` could be adjusted to support this or
a ``call()`` method could be added::

   Interpreter.call(f, *args, **kwargs)

This suffers from the same problem as sharing objects between
interpreters via queues.  The minimal solution (running a source
string) is sufficient for us to get the feature out where it can be
explored.

timeout arg to pop() and push()
-------------------------------

Typically functions that have a ``block`` argument also have a
``timeout`` argument.  We can add it later if needed.

get_main()
----------

CPython has a concept of a "main" interpreter.  This is the initial
interpreter created during CPython's runtime initialization.  It may
be useful to identify the main interpreter.  For instance, the main
interpreter should not be destroyed.  However, for the basic
functionality of a high-level API a ``get_main()`` function is not
necessary.  Furthermore, there is no requirement that a Python
implementation have a concept of a main interpreter.  So until
there's a clear need we'll leave ``get_main()`` out.

Interpreter.run_in_thread()
---------------------------

This method would make a ``run()`` call for you in a thread.  Doing
this using only ``threading.Thread`` and ``run()`` is relatively
trivial so we've left it out.

Synchronization Primitives
--------------------------

The ``threading`` module provides a number of synchronization
primitives for coordinating concurrent operations.  This is
especially necessary due to the shared-state nature of threading.  In
contrast, subinterpreters do not share state.  Data sharing is
restricted to FIFOs, which do away with the need for explicit
synchronization.  If any sort of opt-in shared state support is added
to subinterpreters in the future, that same effort can introduce
synchronization primitives to meet that need.

CSP Library
-----------

A ``csp`` module would not be a large step away from the
functionality provided by this PEP.  However, adding such a module is
outside the minimalist goals of this proposal.

Syntactic Support
-----------------

The ``Go`` language provides a concurrency model based on CSP, so
it's similar to the concurrency model that subinterpreters support.
``Go`` provides syntactic support, as well as several builtin
concurrency primitives, to make concurrency a first-class feature.
Conceivably, similar syntactic (and builtin) support could be added
to Python using subinterpreters.  However, that is *way* outside the
scope of this PEP!

Multiprocessing
---------------

The ``multiprocessing`` module could support subinterpreters in the
same way it supports threads and processes.  In fact, the module's
maintainer, Davin Potts, has indicated this is a reasonable feature
request.  However, it is outside the narrow scope of this PEP.


References
==========

.. [c-api]
   https://docs.python.org/3/c-api/init.html#sub-interpreter-support

.. _Communicating Sequential Processes:
.. [CSP]
   https://en.wikipedia.org/wiki/Communicating_sequential_processes
   https://github.com/futurecore/python-csp

.. [caveats]
   https://docs.python.org/3/c-api/init.html#bugs-and-caveats

.. [petr-c-ext]
   https://mail.python.org/pipermail/import-sig/2016-June/001062.html
   https://mail.python.org/pipermail/python-ideas/2016-April/039748.html

.. [cryptography]
   https://github.com/pyca/cryptography/issues/2299

.. [global-gc]
   http://bugs.python.org/issue24554

.. [gilstate]
   https://bugs.python.org/issue10915
   http://bugs.python.org/issue15751

.. [global-atexit]
   https://bugs.python.org/issue6531


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End: