PEP 446: closing all file descriptors between fork() and exec() is not
reliable in a multithreaded application
Victor Stinner 2013-08-11 22:08:38 +02:00
parent a9dd5b6b82
commit aed11c8530
1 changed file with 13 additions and 3 deletions

@@ -357,14 +357,20 @@ Legend:
so this case is not covered by this PEP.

-Performances of Closing All File Descriptors
---------------------------------------------
+Closing All Open File Descriptors
++---------------------------------

On UNIX, the ``subprocess`` module closes almost all file descriptors in
the child process. This operation requires MAXFD system calls, where
MAXFD is the maximum number of file descriptors, even if there are only
a few open file descriptors. This maximum can be read using:
-``sysconf("SC_OPEN_MAX")``.
+``os.sysconf("SC_OPEN_MAX")``.
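
A minimal sketch of the brute-force loop just described (illustration
only: it belongs in a child process between ``fork()`` and ``exec()``,
since running it in a live interpreter would close descriptors the
interpreter itself needs)::

    import os

    # One close() system call per possible descriptor, open or not;
    # fds 0, 1 and 2 (stdin/stdout/stderr) are spared.
    MAXFD = os.sysconf("SC_OPEN_MAX")
    for fd in range(3, MAXFD):
        try:
            os.close(fd)
        except OSError:      # the common case: fd was not open
            pass
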
+
+There is no portable or reliable function to close all open file
+descriptors between ``fork()`` and ``execv()``. Another thread may
+create an inheritable file descriptor while we are closing existing
+file descriptors. Holding the CPython GIL reduces the risk of the race
+condition.
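
The race can be made concrete with a contrived sketch (hypothetical
demonstration code, not from the PEP or CPython; run it only in a
throwaway process, because the sweep may also close descriptors the
interpreter relies on, and closed slots may be reused)::

    import os
    import threading

    opened = []

    def opener():
        # Another thread keeps creating file descriptors while the
        # main thread sweeps.
        for _ in range(1000):
            opened.append(os.open(os.devnull, os.O_RDONLY))

    def is_open(fd):
        try:
            os.fstat(fd)
            return True
        except OSError:
            return False

    t = threading.Thread(target=opener)
    t.start()
    for fd in range(3, 1024):     # the sweep
        try:
            os.close(fd)
        except OSError:
            pass
    t.join()

    # Descriptors created behind the sweep position survive it.
    print(sum(map(is_open, opened)), "descriptors survived")
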
The operation can be slow if MAXFD is large. For example, on a FreeBSD
buildbot with ``MAXFD=655,000``, the operation took 300 ms: see

@@ -375,6 +381,10 @@ On Linux, Python 3.3 gets the list of all open file descriptors from
``/proc/<PID>/fd/``, and so performance depends on the number of open
file descriptors, not on MAXFD.
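
That strategy fits in a couple of lines, assuming a mounted ``/proc``
(note that ``os.listdir()`` itself briefly opens a descriptor, which
shows up in its own listing)::

    import os

    # List only the descriptors that are actually open instead of
    # sweeping the whole 0..MAXFD range.
    open_fds = sorted(int(name) for name in os.listdir("/proc/self/fd"))
    print(open_fds)    # e.g. [0, 1, 2, 3]; 3 is listdir's own fd
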
+
+FreeBSD, OpenBSD and Solaris provide a ``closefrom()`` function. It
+cannot be used by the ``subprocess`` module when the *pass_fds*
+parameter is a non-empty list of file descriptors.
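
``closefrom(lowfd)`` closes every descriptor from ``lowfd`` upward, so
it cannot spare an arbitrary set such as ``pass_fds=[5, 9]``. A
hypothetical sketch, not the actual ``subprocess`` implementation, of
closing only the gaps around the kept descriptors with
``os.closerange()``::

    import os

    def close_all_but(keep):
        """Close every fd >= 3 except those listed in *keep* (sketch)."""
        prev = 2                            # always spare fds 0, 1 and 2
        for fd in sorted(fd for fd in keep if fd > 2):
            os.closerange(prev + 1, fd)     # closes [prev + 1, fd)
            prev = fd
        os.closerange(prev + 1, os.sysconf("SC_OPEN_MAX"))

    # close_all_but([5, 9]) keeps fds 0, 1, 2, 5 and 9 open
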
See also:
* `Python issue #1663329 <http://bugs.python.org/issue1663329>`_: