NAME

IPC::Open3 - open a process for reading, writing, and error handling using open3()

SYNOPSIS

    use Symbol 'gensym'; # vivify a separate handle for STDERR
    my $pid = open3(my $chld_in, my $chld_out, my $chld_err = gensym,
		    'some', 'cmd', 'and', 'args');
    # or pass the command through the shell
    my $pid = open3(my $chld_in, my $chld_out, my $chld_err = gensym,
		    'some cmd and args');

    # read from parent STDIN
    # send STDOUT and STDERR to already open handle
    open my $outfile, '>>', 'output.txt' or die "open failed: $!";
    my $pid = open3(['&', *STDIN], ['&', $outfile], undef,
		    'some', 'cmd', 'and', 'args');

    # write to parent STDOUT and STDERR
    my $pid = open3(my $chld_in, ['&', *STDOUT], ['&', *STDERR],
		    'some', 'cmd', 'and', 'args');

    # reap zombie and retrieve exit status
    waitpid( $pid, 0 );
    my $child_exit_status = $? >> 8;

DESCRIPTION

Extremely similar to open2 from IPC::Open2, open3 spawns the given command and provides filehandles for interacting with the command's standard I/O streams.

    my $pid = open3($chld_in, $chld_out, $chld_err, @command_and_args);

It connects $chld_in for writing to the child's standard input, $chld_out for reading from the child's standard output, and $chld_err for reading from the child's standard error stream. If $chld_err is false, or the same file descriptor as $chld_out, then STDOUT and STDERR of the child are on the same filehandle. This means that you cannot pass an uninitialized variable for $chld_err and have open3 auto-generate a filehandle for you, but gensym from Symbol can be used to vivify a new glob reference; see "SYNOPSIS". The $chld_in handle will have autoflush turned on.

By default, the filehandles you pass in are used as output parameters. open3 internally creates three pipes: the write end of the first pipe and the read ends of the other two are placed in the first three arguments to open3, while the corresponding read and write ends are connected to the child's standard input, output, and error streams, respectively.
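
For example, the following minimal sketch passes uninitialized lexicals for the child's standard input and output, uses gensym for its standard error, writes a little input, and reads everything back (the external command, sort here, is just a placeholder for any filter):

    use IPC::Open3;
    use Symbol 'gensym';

    my $pid = open3(my $chld_in, my $chld_out, my $chld_err = gensym,
                    'sort');

    print {$chld_in} "pear\napple\n";   # $chld_in already has autoflush on
    close $chld_in;                     # signal end-of-file to the child

    my @sorted = <$chld_out>;           # read the child's standard output
    my @errors = <$chld_err>;           # and anything it wrote to stderr

    waitpid($pid, 0);

Because the child's standard input is closed before anything is read back, even a command like sort(1) that consumes all of its input before producing output cannot deadlock here.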

The filehandle arguments can also take other forms. In particular, it is possible to make open3 use an existing handle directly (as an input argument) and skip the creation of a pipe: the ['&', $handle] form shown in "SYNOPSIS" connects the child's corresponding stream straight to an already open handle. If you use this form for $chld_in, the filehandle will be closed in the parent process.

The filehandles may also be integers, in which case they are understood as file descriptors.

open3 returns the process ID of the child process. It doesn't return on failure: it just raises an exception matching /^open3:/. However, exec failures in the child (such as "no such file" or "permission denied") are just reported to $chld_err under Windows and OS/2, as it is not possible to trap them there.
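
On systems where startup failures are raised as exceptions in the parent (that is, everywhere except Windows and OS/2, as noted above), they can be trapped with eval. A minimal sketch, using a deliberately bogus command name:

    use IPC::Open3;
    use Symbol 'gensym';

    my ($in, $out, $err);
    my $pid = eval {
        open3($in, $out, $err = gensym(), 'no-such-command');
    };
    unless (defined $pid) {
        die $@ unless $@ =~ /^open3:/;   # rethrow anything unexpected
        warn "could not start child: $@";
    }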

If the child process dies for any reason, the next write to $chld_in is likely to generate a SIGPIPE in the parent, which is fatal by default, so you may wish to handle this signal.
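
For example, you can ignore the signal and check the result of the write instead (a minimal sketch, where $chld_in is a handle obtained from open3 as above):

    local $SIG{PIPE} = 'IGNORE';        # a broken pipe no longer kills us
    print {$chld_in} "more input\n"
        or warn "write to child failed: $!";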

Note: if you specify "-" as the command, then, in an analogous fashion to open(my $fh, "-|"), the child process will just be the forked Perl process rather than an external command. This feature isn't yet supported on Win32 platforms.

open3 does not wait for and reap the child process after it exits. Except for short programs where it's acceptable to let the operating system take care of this, you need to do this yourself. This is normally as simple as calling waitpid $pid, 0 when you're done with the process. Failing to do this can result in an accumulation of defunct or "zombie" processes. See "waitpid" in perlfunc for more information.
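
A minimal sketch of reaping the child and decoding $? (see "$?" in perlvar for the encoding):

    waitpid($pid, 0);
    if ($? & 127) {
        warn "child died from signal ", ($? & 127), "\n";
    }
    else {
        my $child_exit_status = $? >> 8;
    }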

If you try to read from both the child's stdout and its stderr, you'll have problems with blocking. You'll want to multiplex the handles with select or IO::Select, which in turn means you'd best use sysread instead of readline for normal reads; a sketch follows below.

This is very dangerous, as you may block forever. open3 assumes it's going to talk to something like bc(1), both writing to it and reading from it. This is presumably safe because you "know" that commands like bc(1) will read a line at a time and output a line at a time. Programs like sort(1) that read their entire input stream first, however, are quite apt to cause deadlock.

The big problem with this approach is that if you don't have control over source code being run in the child process, you can't control what it does with pipe buffering. Thus you can't just open a pipe to cat -v and continually read and write a line from it.
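
Putting that advice together, here is a minimal sketch that multiplexes the child's stdout and stderr with IO::Select and sysread ($chld_out and $chld_err are handles obtained from open3 as above; the 4096-byte chunk size is arbitrary):

    use IO::Select;

    my $sel = IO::Select->new($chld_out, $chld_err);
    my %buf;
    while ($sel->count) {
        for my $fh ($sel->can_read) {
            my $n = sysread($fh, my $chunk, 4096);
            if (!$n) {                       # end-of-file or read error
                $sel->remove($fh);
                next;
            }
            $buf{ $fh == $chld_out ? 'out' : 'err' } .= $chunk;
        }
    }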

SEE ALSO

IPC::Open2

Like IPC::Open3 but without STDERR capture.

IPC::Run

This is a CPAN module that has better error handling and more facilities than IPC::Open3.

WARNING

The order of arguments differs from that of open2.