package IPC::Open2;

use strict;

require 5.006;
use Exporter 'import';

our $VERSION = 1.08;
our @EXPORT  = qw(open2);

=head1 NAME

IPC::Open2 - open a process for both reading and writing using open2()

=head1 SYNOPSIS

    use IPC::Open2;

    my $pid = open2(my $chld_out, my $chld_in, 'some', 'cmd', 'and', 'args');
    # or passing the command through the shell
    my $pid = open2(my $chld_out, my $chld_in, 'some cmd and args');

    # read from parent STDIN and write to already open handle
    open my $outfile, '>', 'outfile.txt' or die "open failed: $!";
    my $pid = open2(['&', $outfile], ['&', *STDIN], 'some', 'cmd', 'and', 'args');

    # read from already open handle and write to parent STDOUT
    open my $infile, '<', 'infile.txt' or die "open failed: $!";
    my $pid = open2(['&', *STDOUT], ['&', $infile], 'some', 'cmd', 'and', 'args');

    # reap zombie and retrieve exit status
    waitpid( $pid, 0 );
    my $child_exit_status = $? >> 8;

=head1 DESCRIPTION

The C<open2> function runs the given command and connects C<$chld_out>
for reading and C<$chld_in> for writing.  It's what you think should
work when you try

    my $pid = open(my $fh, "|cmd args|");    # ERROR

but you have to write it as:

    my $pid = open2($chld_out, $chld_in, @command_and_args);

The C<$chld_in> filehandle will have autoflush turned on.

By default, the filehandles you pass in are used as output parameters.
C<open2> internally creates two pipes.  The write end of the first pipe
and the read end of the second pipe are connected to the command's
standard output and input, respectively.  The corresponding read and
write ends are placed in the first and second argument to C<open2>.

The filehandle arguments can take the following forms:

=over

=item *

An uninitialized variable (technically, either C<undef> or the empty
string will work): C<open2> generates a fresh filehandle and assigns it
to the argument, which must be a modifiable variable for this to work
(otherwise an exception will be raised).

=item *

An existing handle in the form of a typeglob like C<*STDIN> or C<*FOO>
or a reference to such: C<open2> places the filehandle in the C<IO>
slot of the typeglob, which means the corresponding bareword filehandle
(like C<STDIN> or C<FOO>) can be used for I/O from/to the child
process.  (If the handle is already open, it is automatically closed
first.)

=item *

A string containing the name of a bareword handle (like C<'STDIN'> or
C<'FOO'>): Such strings are resolved to typeglobs at runtime and then
act like the case described above.

=back

However, it is possible to make C<open2> use an existing handle
directly (as an input argument) and skip the creation of a pipe.  To do
this, the filehandle argument must have one of the following two forms:

=over

=item *

An array reference like C<['&', $fh]>, i.e. the first element is the
string C<'&'> and the second element is the existing handle to use in
the child process.

=item *

A string of the form C<< '<&FOO' >> or C<< '>&FOO' >>, i.e. a string
starting with the two characters C<< <& >> (for input) or C<< >& >>
(for output), followed by the name of a bareword filehandle.  (The
string form cannot be used with handles stored in variables.)

=back

If you use this form for C<$chld_in>, the filehandle will be closed in
the parent process.

C<open2> returns the process ID of the child process.  It doesn't
return on failure: it just raises an exception matching C<m/^open2:/>.
However, C<exec> failures in the child are not detected.  You'll have
to trap SIGPIPE yourself.

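
For example (a minimal illustrative sketch; the C<bc> command and the
error messages here are arbitrary), a caller can catch a failed spawn
with C<eval> and install a C<SIGPIPE> handler for a child that exits
early:

    use IPC::Open2;

    my ($chld_out, $chld_in);
    local $SIG{PIPE} = sub { die "child closed the pipe\n" };
    my $pid = eval { open2($chld_out, $chld_in, 'bc') };
    die "open2 failed: $@" unless defined $pid;
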

C<open2> does not wait for and reap the child process after it exits.
Except for short programs where it's acceptable to let the operating
system take care of this, you need to do this yourself.  This is
normally as simple as calling C<waitpid $pid, 0> when you're done with
the process.  Failing to do this can result in an accumulation of
defunct or "zombie" processes.  See L<perlipc> for more information.

This whole affair is quite dangerous, as you may block forever.  It
assumes it's going to talk to something like L<bc(1)>, both writing to
it and reading from it.  This is presumably safe because you "know"
that commands like L<bc(1)> will read a line at a time and output a
line at a time.  Programs like L<sort(1)> that read their entire input
stream first, however, are quite apt to cause deadlock.

The big problem with this approach is that if you don't have control
over source code being run in the child process, you can't control
what it does with pipe buffering.  Thus you can't just open a pipe to
C<cat -v> and continually read and write a line from it.  The
L<IO::Pty> and L<Expect> modules from CPAN can help with this, as they
provide a real tty (well, a pseudo-tty, actually), which gets you back
to line buffering in the invoked command again.

=head1 WARNING

The order of arguments differs from that of C<open3> from
L<IPC::Open3>.

=head1 SEE ALSO

See L<IPC::Open3> for an alternative that handles C<STDERR> as well.
This function is really just a wrapper around C<IPC::Open3::open3>.

=cut

# &open2: tom christiansen, <tchrist@convex.com>
#
# usage: $pid = open2('rdr', 'wtr', 'some cmd and args');
#    or  $pid = open2('rdr', 'wtr', 'some', 'cmd', 'and', 'args');
#
# spawn the given $cmd and connect $rdr for
# reading and $wtr for writing.  return pid
# of child, or 0 on failure.
#
# WARNING: this is dangerous, as you may block forever
# unless you are very careful.
#
# $wtr is left unbuffered.
#
# abort program if
#   rdr or wtr are null
#   a system call fails

require IPC::Open3;

sub open2 {
    local $Carp::CarpLevel = $Carp::CarpLevel + 1;
    return IPC::Open3::_open3('open2', $_[1], $_[0], '>&STDERR',
                              @_[2 .. $#_]);
}

1;
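
# Illustrative sketch only (not part of this module): following the advice
# in the DESCRIPTION above, a caller talking to a line-oriented command such
# as bc(1) can write one line, read one line, then close the write handle
# and reap the child.  The command and the expression below are arbitrary.
#
#   use IPC::Open2;
#
#   my $pid = open2(my $chld_out, my $chld_in, 'bc');
#   print {$chld_in} "3 * 7\n";    # send one line ...
#   my $answer = <$chld_out>;      # ... and read one line back ("21\n")
#   close $chld_in;                # send EOF so bc can exit
#   waitpid($pid, 0);              # reap the child; exit status is in $?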