From aed11c85306a4ba299a21d7dd124c8f1f6fcadb6 Mon Sep 17 00:00:00 2001
From: Victor Stinner
Date: Sun, 11 Aug 2013 22:08:38 +0200
Subject: [PATCH] PEP 446: closing all file descriptors between fork() and
 exec() is not reliable in a multithreaded application

---
 pep-0446.txt | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/pep-0446.txt b/pep-0446.txt
index 3c5efa332..4f6270fe0 100644
--- a/pep-0446.txt
+++ b/pep-0446.txt
@@ -357,14 +357,20 @@ Legend:
   so this case is not concerned by this PEP.
 
 
-Performances of Closing All File Descriptors
---------------------------------------------
+Closing All Open File Descriptors
+---------------------------------
 
 On UNIX, the ``subprocess`` module closes almost all file descriptors in
 the child process. This operation require MAXFD system calls, where
 MAXFD is the maximum number of file descriptors, even if there are only
 few open file descriptors. This maximum can be read using:
-``sysconf("SC_OPEN_MAX")``.
+``os.sysconf("SC_OPEN_MAX")``.
+
+There is no portable and reliable way to close all open file
+descriptors between ``fork()`` and ``execv()``: another thread may
+create an inheritable file descriptor while the existing descriptors
+are being closed. Holding the CPython GIL reduces the risk of this
+race condition.
 
 The operation can be slow if MAXFD is large. For example, on a FreeBSD
 buildbot with ``MAXFD=655,000``, the operation took 300 ms: see
@@ -375,6 +381,10 @@ On Linux, Python 3.3 gets the list of all open file descriptors from
 ``/proc/<pid>/fd/``, and so performances depends on the number of open
 file descriptors, not on MAXFD.
 
+FreeBSD, OpenBSD and Solaris provide a ``closefrom()`` function. It
+cannot be used by the ``subprocess`` module when the *pass_fds*
+parameter is a non-empty list of file descriptors.
+
 See also:
 
 * `Python issue #1663329 <http://bugs.python.org/issue1663329>`_:
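
The cost described above can be sketched in Python. ``close_all_fds_brute_force()``
below is a hypothetical illustration (not the actual ``subprocess`` child code) of
why the brute-force approach needs one ``close()`` attempt per possible descriptor
up to MAXFD, assuming only stdin, stdout and stderr should survive::

    import os

    def close_all_fds_brute_force(keep=(0, 1, 2)):
        # MAXFD: the maximum number of file descriptors.
        maxfd = os.sysconf("SC_OPEN_MAX")
        # One close() attempt per descriptor, even if only a few are open.
        for fd in range(maxfd):
            if fd in keep:
                continue
            try:
                os.close(fd)
            except OSError:
                pass  # fd was not open; ignore the error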
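The Linux optimisation can be illustrated the same way. ``open_fds_from_proc()``
is a minimal sketch, assuming a Linux ``/proc`` filesystem, of how the set of open
descriptors can be read from ``/proc/self/fd/`` so that only those descriptors
need to be closed::

    import os

    def open_fds_from_proc():
        # Each entry of /proc/self/fd/ is the number of an open descriptor,
        # so the cost depends on how many descriptors are open, not on MAXFD.
        fds = set()
        for name in os.listdir("/proc/self/fd"):
            try:
                fds.add(int(name))
            except ValueError:
                pass
        return fds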
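The remark about ``closefrom()`` and *pass_fds* can also be made concrete.
``close_fds_keeping()`` is a hypothetical helper (not the ``subprocess``
implementation) showing that when some descriptors must be kept, the gaps between
them have to be closed separately, for example with ``os.closerange()``, rather
than with a single ``closefrom(lowfd)`` call::

    import os

    def close_fds_keeping(pass_fds):
        maxfd = os.sysconf("SC_OPEN_MAX")
        start = 3  # keep stdin (0), stdout (1) and stderr (2)
        for fd in sorted(set(pass_fds)):
            if fd < start:
                continue
            os.closerange(start, fd)  # close [start, fd), ignoring errors
            start = fd + 1
        os.closerange(start, maxfd)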