develooper Front page | perl.perl5.porters | Postings from July 2009

Re: [PATCH] Restore original order of HV free

Sam Vilain
July 30, 2009 20:37
Re: [PATCH] Restore original order of HV free
Sam Vilain wrote:
> In change 26550 (commit 7440661e26), a minor refactoring reversed the
> order of freeing entries in HV arrays.  It was not the stated
> intention of the change to alter this behaviour; it seems to be done
> merely to avoid the use of an extra temporary variable. It can cause
> some programs which set up complicated pipelines using a mix of open
> FH, "|-", pipe() and fork() to deadlock where 5.8.x did not.
> Restoring the original order of the free works around the problem.
> Signed-off-by: Sam Vilain
> ---
>  For maint-5.10; this is a 5.8.x to 5.10.0 regression.
>  My theory is that by starting at the end of the HvARRAY and working
>  backwards, the gremlins can see the destructor coming and run away.
>  Starting at the beginning of the HvARRAY takes their feet first and
>  therefore stops them from escaping.
>  The program which it broke is apt-repository-simple, see

Sorry. It turns out that the program did indeed have a deadlock
condition. The destruction order change meant that as all of the open
FH, "|-" calls unwound on program exit, they did so in reverse order,
and this made a real bug in the program happen all the time.

So, lessons learned:

  1. don't use open FH, "|-" if you are setting up complicated
     pipelines of many processes; you can't close the FH in processes
     which you don't want to be writers on that pipeline, without
     potentially deadlocking.

  2. filehandles opened with open FH, "|prog" etc. end up having
     waitpid() called on them during global destruction, and this too
     can cause deadlocks for the above reason.

  3. always use pipe() and fork() for IPC :)
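
For the record, lesson 3 can be sketched in a few lines. This is my
own minimal example (names and messages are mine, not code from the
patch): with plain pipe()/fork(), close() is just close(), so there is
no hidden waitpid() and you choose exactly where the child is reaped.

```perl
use strict;
use warnings;

pipe(my $reader, my $writer) or die "pipe failed: $!";
my $pid = fork();
defined $pid or die "fork failed: $!";

if ($pid == 0) {
    # Child: reads only, so drop the write end immediately;
    # otherwise the child itself would keep the pipe from EOF-ing.
    close $writer;
    my @lines = <$reader>;
    close $reader;
    print "child read ", scalar(@lines), " line(s)\n";
    exit 0;
}

# Parent: writes only, so drop the read end.
close $reader;
print {$writer} "hello\n";
close $writer;         # reader now sees EOF
waitpid($pid, 0);      # reap explicitly, at a point we choose
```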

I hope that the following doc patch will help pass these findings on
to other people trying to do things like this.
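
A quick way to see the implicit wait from lesson 2: close() on a
filehandle opened with a piped open() waits for the child and leaves
its wait status in $?, just like system() does. A minimal sketch (the
child command here is my own illustration, using the list form of
open available since 5.8):

```perl
use strict;
use warnings;

# close() on a piped open() implicitly calls waitpid() for the child
# and leaves the child's wait status in $?.
open(my $fh, "|-", $^X, "-e", "exit 7") or die "pipe open failed: $!";
close $fh;    # blocks here until the child process exits
print "child exit status: ", $? >> 8, "\n";
```

If the child were instead blocked waiting on some other part of your
pipeline, that close() is exactly where the parent would hang.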

Subject: [PATCH] pod/perlipc.pod: add some hints on avoiding pipe deadlocks

Tracking down deadlocks when using pipes for IPC can be hard, so put
even more notes about gotchas in this section of perlipc.
 pod/perlipc.pod |   86 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 86 insertions(+), 0 deletions(-)

diff --git a/pod/perlipc.pod b/pod/perlipc.pod
index 416ded5..77f0b6e 100644
--- a/pod/perlipc.pod
+++ b/pod/perlipc.pod
@@ -630,6 +630,68 @@ And here's a safe pipe open for writing:
+It is very easy to deadlock a process using this form of open(), or
+indeed any use of pipe() with multiple sub-processes.  The above
+example is 'safe' because it is simple and calls exec().  See
+L</"Avoiding Pipe Deadlocks"> for general safety principles, but there
+are extra gotchas with Safe Pipe Opens.
+
+In particular, if you opened the pipe using C<open FH, "|-">, then you
+cannot simply use close() in the parent process to close an unwanted
+writer.  Consider this code:
+
+    $pid = open WRITER, "|-";
+    defined $pid or die "fork failed: $!";
+    if ($pid) {
+        if (my $sub_pid = fork()) {
+            close WRITER;
+            # do something else...
+        }
+        else {
+            # write to WRITER...
+            exit;
+        }
+    }
+    else {
+        # do something with STDIN...
+        exit;
+    }
+
+In the above, the true parent does not want to write to the WRITER
+filehandle, so it closes it.  However, because WRITER was opened using
+C<open FH, "|-">, it has a special behaviour: closing it will call
+waitpid() (see L<perlfunc/waitpid>), which waits for the sub-process
+to exit.  If the child process ends up waiting for something happening
+in the section marked "do something else", then you have a deadlock.
+
+This can also be a problem with intermediate sub-processes in more
+complicated code: they will call waitpid() on all such pipe
+filehandles during global destruction, in no predictable order.
+
+To solve this, use pipe() and fork() manually, combined with the form
+of open() which duplicates one file descriptor from another, as below:
+
+    pipe(READER, WRITER);
+    $pid = fork();
+    defined $pid or die "fork failed: $!";
+    if ($pid) {
+        close READER;
+        if (my $sub_pid = fork()) {
+            close WRITER;
+            # do something else...
+        }
+        else {
+            # write to WRITER...
+            exit;
+        }
+    }
+    else {
+        open STDIN, "<&READER";
+        close WRITER;
+        # do something...
+        exit;
+    }
 Since Perl 5.8.0, you can also use the list form of C<open> for pipes :
 the syntax
@@ -645,6 +707,30 @@ correctly implemented on alien systems.  Additionally, these are not true
 multithreading.  If you'd like to learn more about threading, see the
 F<modules> file mentioned below in the SEE ALSO section.
+=head2 Avoiding Pipe Deadlocks
+
+In general, if you have more than one sub-process, you need to be very
+careful that any process which does not need the writer half of any
+pipe you create for inter-process communication does not have it open.
+The reason for this is that any child process which is reading from
+the pipe and expecting an EOF will never receive it, and therefore
+never exit.  One process closing its copy of the pipe is not enough;
+every process holding the write end open must close it before the
+reader will see EOF.
+
+There are some features built into Unix to help prevent this most of
+the time.  For instance, filehandles have a "close on exec" flag (set
+I<en masse> in Perl via the C<$^F> variable; see L<perlvar>), so that
+any filehandles which you didn't explicitly route to the STDIN, STDOUT
+or STDERR of a child I<program> will automatically be closed for you.
+
+So, always explicitly and immediately call close() on the writable end
+of any pipe, unless that process is actually writing to it.  Even if
+you don't explicitly call close(), be warned that Perl will still
+close() all the filehandles during global destruction.  As warned
+above, if those filehandles were opened with a Safe Pipe Open, they
+will also call waitpid() and you might again deadlock.
+
 =head2 Bidirectional Communication with Another Process
 While this works reasonably well for unidirectional communication, what

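To round the patch off with something runnable, here is a
self-contained version of the pipe()/fork()/dup pattern it recommends.
The lexical filehandles and the exec()'d command are my own choices
for illustration, not part of the patch:

```perl
use strict;
use warnings;

pipe(my $reader, my $writer) or die "pipe failed: $!";
my $pid = fork();
defined $pid or die "fork failed: $!";

if ($pid == 0) {
    # Child: make the pipe's read end its STDIN, close both
    # originals, then exec a program that reads STDIN.
    open(STDIN, "<&", $reader) or die "dup failed: $!";
    close $reader;
    close $writer;    # vital: child must not hold the write end open
    exec $^X, "-pe", "1" or die "exec failed: $!";   # cat(1)-alike
}

close $reader;
print {$writer} "sent via STDIN\n";
close $writer;        # child's STDIN now sees EOF, so it exits
waitpid($pid, 0);
```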