reSTify PEP 371 (#325)

parent 0f7b74537f
commit dd454efb3a

pep-0371.txt: 396 changed lines
@@ -6,64 +6,66 @@ Author: Jesse Noller <jnoller@gmail.com>,
        Richard Oudkerk <r.m.oudkerk@googlemail.com>
Status: Final
Type: Standards Track
-Content-Type: text/plain
+Content-Type: text/x-rst
Created: 06-May-2008
Python-Version: 2.6 / 3.0
Post-History:


Abstract
+========

-This PEP proposes the inclusion of the pyProcessing [1] package
+This PEP proposes the inclusion of the ``pyProcessing`` [1]_ package
into the Python standard library, renamed to "multiprocessing".

-The processing package mimics the standard library threading
+The ``processing`` package mimics the standard library ``threading``
module functionality to provide a process-based approach to
threaded programming allowing end-users to dispatch multiple
tasks that effectively side-step the global interpreter lock.

The package also provides server and client functionality
-(processing.Manager) to provide remote sharing and management of
+(``processing.Manager``) to provide remote sharing and management of
objects and tasks so that applications may not only leverage
multiple cores on the local machine, but also distribute objects
and tasks across a cluster of networked machines.

While the distributed capabilities of the package are beneficial,
the primary focus of this PEP is the core threading-like API and
capabilities of the package.

Rationale
+=========

The current CPython interpreter implements the Global Interpreter
Lock (GIL) and barring work in Python 3000 or other versions
-currently planned [2], the GIL will remain as-is within the
+currently planned [2]_, the GIL will remain as-is within the
CPython interpreter for the foreseeable future. While the GIL
itself enables clean and easy to maintain C code for the
interpreter and extensions base, it is frequently an issue for
those Python programmers who are leveraging multi-core machines.

The GIL itself prevents more than a single thread from running
within the interpreter at any given point in time, effectively
removing Python's ability to take advantage of multi-processor
systems.

The pyprocessing package offers a method to side-step the GIL
allowing applications within CPython to take advantage of
multi-core architectures without asking users to completely change
their programming paradigm (i.e.: dropping threaded programming
for another "concurrent" approach - Twisted, Actors, etc).

The Processing package offers CPython a "known API" which mirrors
albeit in a PEP 8 compliant manner, that of the threading API,
with known semantics and easy scalability.

In the future, the package might not be as relevant should the
CPython interpreter enable "true" threading, however for some
applications, forking an OS process may sometimes be more
desirable than using lightweight threads, especially on those
platforms where process creation is fast and optimized.

-For example, a simple threaded application:
+For example, a simple threaded application::

    from threading import Thread as worker

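The rest of the PEP's threaded example falls outside this hunk. As a rough, illustrative sketch of the pattern being described (the worker body and argument values here are assumptions, not the PEP's exact listing)::

    from threading import Thread as worker

    def afunc(count):
        # stand-in workload; the PEP's actual function body is not shown here
        print(count * 2)

    t = worker(target=afunc, args=(10,))
    t.start()
    t.join()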
@@ -74,82 +76,85 @@ Rationale
    t.start()
    t.join()

The pyprocessing package mirrored the API so well, that with a
-simple change of the import to:
+simple change of the import to::

    from processing import process as worker

The code would now execute through the processing.process class.
Obviously, with the renaming of the API to PEP 8 compliance there
would be additional renaming which would need to occur within
user applications, however minor.

This type of compatibility means that, with a minor (in most cases)
change in code, users' applications will be able to leverage all
cores and processors on a given machine for parallel execution.
In many cases the pyprocessing package is even faster than the
normal threading approach for I/O bound programs. This of course,
takes into account that the pyprocessing package is in optimized C
code, while the threading module is not.

The "Distributed" Problem
+=========================

In the discussion on Python-Dev about the inclusion of this
-package [3] there was confusion about the intentions this PEP with
+package [3]_ there was confusion about the intentions this PEP with
an attempt to solve the "Distributed" problem - frequently
comparing the functionality of this package with other solutions
-like MPI-based communication [4], CORBA, or other distributed
-object approaches [5].
+like MPI-based communication [4]_, CORBA, or other distributed
+object approaches [5]_.

The "distributed" problem is large and varied. Each programmer
working within this domain has either very strong opinions about
their favorite module/method or a highly customized problem for
which no existing solution works.

The acceptance of this package does not preclude or recommend that
programmers working on the "distributed" problem not examine other
solutions for their problem domain. The intent of including this
package is to provide entry-level capabilities for local
concurrency and the basic support to spread that concurrency
across a network of machines - although the two are not tightly
coupled, the pyprocessing package could in fact, be used in
conjunction with any of the other solutions including MPI/etc.

If necessary - it is possible to completely decouple the local
concurrency abilities of the package from the
network-capable/shared aspects of the package. Without serious
concerns or cause however, the author of this PEP does not
recommend that approach.

Performance Comparison
+======================

As we all know - there are "lies, damned lies, and benchmarks".
These speed comparisons, while aimed at showcasing the performance
of the pyprocessing package, are by no means comprehensive or
applicable to all possible use cases or environments. Especially
for those platforms with sluggish process forking timing.

All benchmarks were run using the following:
-* 4 Core Intel Xeon CPU @ 3.00GHz
-* 16 GB of RAM
-* Python 2.5.2 compiled on Gentoo Linux (kernel 2.6.18.6)
-* pyProcessing 0.52
-
-All of the code for this can be downloaded from:
-http://jessenoller.com/code/bench-src.tgz
+
+* 4 Core Intel Xeon CPU @ 3.00GHz
+* 16 GB of RAM
+* Python 2.5.2 compiled on Gentoo Linux (kernel 2.6.18.6)
+* pyProcessing 0.52
+
+All of the code for this can be downloaded from
+http://jessenoller.com/code/bench-src.tgz

The basic method of execution for these benchmarks is in the
run_benchmarks.py script, which is simply a wrapper to execute a
target function through a single threaded (linear), multi-threaded
(via threading), and multi-process (via pyprocessing) function for
a static number of iterations with increasing numbers of execution
loops and/or threads.

The run_benchmarks.py script executes each function 100 times,
picking the best run of that 100 iterations via the timeit module.

First, to identify the overhead of the spawning of the workers, we
-execute a function which is simply a pass statement (empty):
+execute a function which is simply a pass statement (empty)::

    cmd: python run_benchmarks.py empty_func.py
    Importing empty_func
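The run_benchmarks.py driver itself is not part of this diff (it lives in the bench-src.tgz tarball linked above). A minimal sketch of the measurement pattern it describes, timing a non-threaded, a threaded, and a process-based runner and keeping the best of repeated runs via timeit, might look like the following; every name below is illustrative rather than taken from the real script::

    import timeit
    from threading import Thread
    from multiprocessing import Process   # stdlib name proposed for "processing"

    def empty_func():
        pass                               # measures spawn/join overhead only

    def non_threaded(iters):
        for _ in range(iters):
            empty_func()

    def threaded(num):
        workers = [Thread(target=empty_func) for _ in range(num)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    def processes(num):
        workers = [Process(target=empty_func) for _ in range(num)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    if __name__ == '__main__':
        for label, func in [('non_threaded (8 iters) ', lambda: non_threaded(8)),
                            ('threaded (8 threads)   ', lambda: threaded(8)),
                            ('processes (8 procs)    ', lambda: processes(8))]:
            # the PEP's script keeps the best of 100 runs; 10 keeps the sketch quick
            best = min(timeit.repeat(func, number=1, repeat=10))
            print('%s%f seconds' % (label, best))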
@@ -170,12 +175,12 @@ Performance Comparison
    threaded (8 threads) 0.007990 seconds
    processes (8 procs) 0.005512 seconds

As you can see, process forking via the pyprocessing package is
faster than the speed of building and then executing the threaded
version of the code.

The second test calculates 50000 Fibonacci numbers inside of each
-thread (isolated and shared nothing):
+thread (isolated and shared nothing)::

    cmd: python run_benchmarks.py fibonacci.py
    Importing fibonacci
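The fibonacci.py target used above ships with the benchmark tarball rather than with this diff; a shared-nothing worker in the same spirit (function name and loop shape are assumptions) could be as simple as::

    def fib(n):
        # iterative Fibonacci; no shared state with other workers
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def run(iterations=50000):
        # each worker computes its own batch and discards the results
        for i in range(iterations):
            fib(i % 100)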
@@ -196,8 +201,8 @@ Performance Comparison
    threaded (8 threads) 1.596824 seconds
    processes (8 procs) 0.417899 seconds

The third test calculates the sum of all primes below 100000,
-again sharing nothing.
+again sharing nothing::

    cmd: run_benchmarks.py crunch_primes.py
    Importing crunch_primes
@@ -218,18 +223,18 @@ Performance Comparison
    threaded (8 threads) 5.109192 seconds
    processes (8 procs) 1.077939 seconds

The reason why tests two and three focused on pure numeric
crunching is to showcase how the current threading implementation
does hinder non-I/O applications. Obviously, these tests could be
improved to use a queue for coordination of results and chunks of
work but that is not required to show the performance of the
package and core processing.process module.

The next test is an I/O bound test. This is normally where we see
a steep improvement in the threading module approach versus a
single-threaded approach. In this case, each worker is opening a
descriptor to lorem.txt, randomly seeking within it and writing
-lines to /dev/null:
+lines to /dev/null::

    cmd: python run_benchmarks.py file_io.py
    Importing file_io
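As with the other targets, file_io.py is not reproduced here; an illustrative version of the workload described above (random seeks in lorem.txt, lines echoed to /dev/null; the helper name is an assumption) is::

    import os
    import random

    def file_io(path='lorem.txt', lines=1000):
        # I/O-bound workload: seek to a random offset and copy the next
        # line out to /dev/null
        size = os.path.getsize(path)
        with open(path, 'rb') as src, open(os.devnull, 'wb') as sink:
            for _ in range(lines):
                src.seek(random.randrange(size))
                sink.write(src.readline())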
@@ -250,14 +255,14 @@ Performance Comparison
    threaded (8 threads) 2.437204 seconds
    processes (8 procs) 0.203438 seconds

As you can see, pyprocessing is still faster on this I/O operation
than using multiple threads. And using multiple threads is slower
than the single threaded execution itself.

Finally, we will run a socket-based test to show network I/O
performance. This function grabs a URL from a server on the LAN
that is a simple error page from tomcat. It gets the page 100
-times. The network is silent, and a 10G connection:
+times. The network is silent, and a 10G connection::

    cmd: python run_benchmarks.py url_get.py
    Importing url_get
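The url_get.py target is likewise external to this diff; a sketch of the described fetch loop (the URL is a placeholder standing in for the LAN Tomcat error page) is::

    try:                                    # Python 3
        from urllib.request import urlopen
    except ImportError:                     # Python 2
        from urllib2 import urlopen

    def url_get(url='http://example.invalid/error-page', count=100):
        # network-bound workload: fetch the same small page repeatedly
        for _ in range(count):
            urlopen(url).read()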
@@ -278,19 +283,19 @@ Performance Comparison
    threaded (8 threads) 0.659298 seconds
    processes (8 procs) 0.298625 seconds

We finally see threaded performance surpass that of
single-threaded execution, but the pyprocessing package is still
faster when increasing the number of workers. If you stay with
one or two threads/workers, then the timing between threads and
pyprocessing is fairly close.

One item of note however, is that there is an implicit overhead
-within the pyprocessing package's Queue implementation due to the
+within the pyprocessing package's ``Queue`` implementation due to the
object serialization.

Alec Thomas provided a short example based on the
run_benchmarks.py script to demonstrate this overhead versus the
-default Queue implementation:
+default ``Queue`` implementation::

    cmd: run_bench_queue.py
    non_threaded (1 iters) 0.010546 seconds
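Alec Thomas' run_bench_queue.py comparison is also not included in the diff; the shape of such a comparison (a best-effort sketch, with sizes and names chosen here for illustration) is roughly::

    import time
    from multiprocessing import Queue as ProcessQueue
    try:                                    # Python 3
        from queue import Queue
    except ImportError:                     # Python 2
        from Queue import Queue

    def time_queue(q, items=10000):
        # round-trip a batch of integers; the process-backed queue pays a
        # pickling and IPC cost per item that the thread-safe queue does not
        start = time.time()
        for i in range(items):
            q.put(i)
        for _ in range(items):
            q.get()
        return time.time() - start

    if __name__ == '__main__':
        print('Queue.Queue            %f seconds' % time_queue(Queue()))
        print('multiprocessing.Queue  %f seconds' % time_queue(ProcessQueue()))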
@@ -309,123 +314,130 @@ Performance Comparison
    threaded (8 threads) 0.184254 seconds
    processes (8 procs) 0.302999 seconds

Additional benchmarks can be found in the pyprocessing package's
source distribution's examples/ directory. The examples will be
included in the package's documentation.

Maintenance
+===========

Richard M. Oudkerk - the author of the pyprocessing package has
agreed to maintain the package within Python SVN. Jesse Noller
has volunteered to also help maintain/document and test the
package.

API Naming
+==========

While the aim of the package's API is designed to closely mimic that of
-the threading and Queue modules as of python 2.x, those modules are not
+the threading and ``Queue`` modules as of python 2.x, those modules are not
PEP 8 compliant. It has been decided that instead of adding the package
"as is" and therefore perpetuating the non-PEP 8 compliant naming, we
will rename all APIs, classes, etc to be fully PEP 8 compliant.

This change does affect the ease-of-drop in replacement for those using
the threading module, but that is an acceptable side-effect in the view
of the authors, especially given that the threading module's own API
will change.

Issue 3042 in the tracker proposes that for Python 2.6 there will be
two APIs for the threading module - the current one, and the PEP 8
compliant one. Warnings about the upcoming removal of the original
java-style API will be issued when -3 is invoked.

In Python 3000, the threading API will become PEP 8 compliant, which
means that the multiprocessing module and the threading module will
again have matching APIs.

Timing/Schedule
+===============

Some concerns have been raised about the timing/lateness of this
PEP for the 2.6 and 3.0 releases this year, however it is felt by
both the authors and others that the functionality this package
offers surpasses the risk of inclusion.

However, taking into account the desire not to destabilize
Python-core, some refactoring of pyprocessing's code "into"
Python-core can be withheld until the next 2.x/3.x releases. This
means that the actual risk to Python-core is minimal, and largely
constrained to the actual package itself.

Open Issues
+===========

* Confirm no "default" remote connection capabilities, if needed
  enable the remote security mechanisms by default for those
  classes which offer remote capabilities.

-* Some of the API (Queue methods qsize(), task_done() and join())
+* Some of the API (``Queue`` methods ``qsize()``, ``task_done()`` and ``join()``)
  either need to be added, or the reason for their exclusion needs
  to be identified and documented clearly.

Closed Issues
+=============

-* The PyGILState bug patch submitted in issue 1683 by roudkerk
+* The ``PyGILState`` bug patch submitted in issue 1683 by roudkerk
  must be applied for the package unit tests to work.

* Existing documentation has to be moved to ReST formatting.

-* Reliance on ctypes: The pyprocessing package's reliance on
+* Reliance on ctypes: The ``pyprocessing`` package's reliance on
  ctypes prevents the package from functioning on platforms where
  ctypes is not supported. This is not a restriction of this
  package, but rather of ctypes.

* DONE: Rename top-level package from "pyprocessing" to
  "multiprocessing".

* DONE: Also note that the default behavior of process spawning
  does not make it compatible with use within IDLE as-is, this
  will be examined as a bug-fix or "setExecutable" enhancement.

* DONE: Add in "multiprocessing.setExecutable()" method to override the
  default behavior of the package to spawn processes using the
  current executable name rather than the Python interpreter. Note
  that Mark Hammond has suggested a factory-style interface for
-  this[7].
+  this [7]_.

References
+==========

-[1] PyProcessing home page
+.. [1] PyProcessing home page
    http://pyprocessing.berlios.de/

-[2] See Adam Olsen's "safe threading" project
+.. [2] See Adam Olsen's "safe threading" project
    http://code.google.com/p/python-safethread/

-[3] See: Addition of "pyprocessing" module to standard lib.
+.. [3] See: Addition of "pyprocessing" module to standard lib.
    https://mail.python.org/pipermail/python-dev/2008-May/079417.html

-[4] http://mpi4py.scipy.org/
+.. [4] http://mpi4py.scipy.org/

-[5] See "Cluster Computing"
+.. [5] See "Cluster Computing"
    http://wiki.python.org/moin/ParallelProcessing

-[6] The original run_benchmark.py code was published in Python
+.. [6] The original run_benchmark.py code was published in Python
    Magazine in December 2007: "Python Threads and the Global
    Interpreter Lock" by Jesse Noller. It has been modified for
    this PEP.

-[7] http://groups.google.com/group/python-dev2/msg/54cf06d15cbcbc34
+.. [7] http://groups.google.com/group/python-dev2/msg/54cf06d15cbcbc34

-[8] Addition Python-Dev discussion
+.. [8] Addition Python-Dev discussion
    https://mail.python.org/pipermail/python-dev/2008-June/080011.html

Copyright
+=========

This document has been placed in the public domain.


-
+..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End: