Working with Stdin and Stdout
Previously, I erroneously titled my column as "SIGALRM Timers and Stdin Analysis". It turned out that by the time I'd finished writing it, I had spent a lot of time talking about SIGALRM and how to set up timers to avoid scripts that hang forever, but I never actually got to the topic of stdin analysis. Oops.
So this time, let's start with that topic. The behavior to emulate here is something a lot of utilities do without you paying much attention: they behave differently when their input or output is a pipe or file than when it's stdin (the keyboard) or stdout (the screen). Try ls versus ls | cat to see what I mean: ls prints its output in neat columns when writing to a terminal, but one filename per line when writing to a pipe.
The test command has a helpful flag in this regard: -t. From the man page:

True if the file whose file descriptor number is file_descriptor is open and is associated with a terminal.
Worth knowing is that file descriptor #0 is stdin, #1 is stdout and #2 is stderr (pronounced "standard in", "standard out" and "standard error", respectively). That's why the >& notation for redirecting by file descriptor works: 2>&1 causes error messages to go to the same place as regular output messages.
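For example, to capture both regular output and error messages in one file (the filenames here are placeholders of my own choosing):

find / -name "*.conf" > results.txt 2>&1

Order matters: stdout is pointed at results.txt first, and then stderr is pointed at wherever stdout is currently going.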
Back to the topic, though. In practice, the -t test can be used like this:
#!/bin/sh
if [ -t 0 ]; then
  echo script running interactively
else
  echo stdin coming from a pipe or file
fi
It's easy to test:
$ sh inter.sh
script running interactively
$ sh inter.sh < inter.sh
stdin coming from a pipe or file
$ cat inter.sh | sh inter.sh
stdin coming from a pipe or file
Perfect. Now, what about identifying whether the output is an interactive terminal, a file or a pipe? It turns out that you can use the same basic test; just replace file descriptor 0 with 1:
if [ -t 1 ] ; then
  echo output going to the screen
else
  echo output redirected to a file or pipe
fi
The results:
$ sh inter.sh
script running interactively
output going to the screen
$ sh inter.sh | cat
script running interactively
output redirected to a file or pipe
$ sh inter.sh > output.txt
$ cat output.txt
script running interactively
output redirected to a file or pipe
Pretty cool, actually.
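As a practical aside, this is exactly how many modern tools decide whether to colorize their output. Here's a minimal sketch of the idea (the escape codes are standard ANSI, but the script itself is my own illustration):

#!/bin/sh
# emit ANSI color escapes only when stdout is a terminal
if [ -t 1 ]; then
  red=$(printf '\033[31m')
  reset=$(printf '\033[0m')
else
  red=""
  reset=""
fi
echo "${red}warning:${reset} this stands out on a terminal"

Run it directly and "warning:" shows up in red; pipe it through cat, and you get plain text with no stray escape sequences.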
Before leaving this topic, however, let's back up a bit and have another look at file redirection.
I already talked about the common trick of 2>&1 to redirect stderr to stdout, something that's very helpful on the command line. You also can redirect specific lines of output in a shell script to stderr, so your error messages are sent to the screen even if stdout is being sent to a pipe or file:

echo Error: this is an error message >&2
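If your script produces a lot of error messages, a tiny wrapper function keeps things tidy. This is just a common idiom (the function name is my own choice):

error() {
  echo "Error: $*" >&2
}

error "unable to open the input file"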
But, what if you want to have your script force stdout to a specific target regardless of what someone does on the command line? It can be done, of course, although it involves a very different approach: the use of the exec command.
At its most basic, the exec call looks like a regular command invocation (and invoking any system command like ls or fmt really does spawn a new process each time), but with exec it's the existing shell that's replaced with the specified command, effectively ending the current process. If you have a shell script that sets up specific parameters for an external call, for example, you could end it with:

exec $cmd $args

and anything you might have after that point in the original script is jettisoned, because the script is no longer running; it has been replaced by $cmd.
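To make that concrete, here's a minimal sketch of such a wrapper script (the command and arguments are placeholders):

#!/bin/sh
# set up the invocation, then replace this shell with it
cmd="ls"
args="-l /tmp"
exec $cmd $args
echo this line is never reached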
But exec actually is more nuanced than that, and in particular, a quirk of its behavior gives the solution we seek: invoked with only redirections and no command, exec replaces the current assignments for stdin, stdout and stderr with those specified, while leaving the shell itself running.
So here's the solution, redirecting stdout to a file:
exec > output.txt
In practice, you can see how it works with this snippet:
echo This is stdout
exec > output.txt
echo This is still stdout but goes elsewhere
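One related trick worth tucking away (an aside; it's not used in the script below): you can save a copy of the original stdout in a spare file descriptor before the exec and restore it later:

exec 3>&1           # save a copy of stdout as fd #3
exec > output.txt   # redirect stdout to the file
echo this line lands in output.txt
exec 1>&3 3>&-      # restore stdout and close fd #3
echo and this one is back on the screen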
Let's put a few of these pieces together in one script, so you can see how it all interacts:
echo this goes to stdout
echo and this goes to stderr >&2
exec > output.txt
echo This is still stdout but goes elsewhere
echo but where does this go\? >&2
exec date
echo this script is kaput
Here's what happens when you run the program:
$ sh test.sh
this goes to stdout
and this goes to stderr
but where does this go?
But, what's actually in output.txt?
$ cat output.txt
This is still stdout but goes elsewhere
Sun Oct 7 10:29:56 MDT 2012
Interesting. Notice that, as expected, "this script is kaput" never shows up, because once the exec invokes an external program (in this case, date), the script itself is done; its process has been replaced by the date program.
Notice that the exec redirected only stdout, so the error message at the very end still goes to the screen. Want to have both stdout and stderr redirected to the file? It's literally a one-character change. Instead of the above exec redirect, use this:
exec &> output.txt
That's easy enough, isn't it?
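One caveat: the &> notation is a bash extension and won't work in a strictly POSIX /bin/sh. The portable way to say the same thing reuses the idiom from earlier:

exec > output.txt 2>&1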
Now, what about the opposite situation, where the user has redirected stdout to a file, but you still want it to go to the screen anyway? That's done with yet another sequence on the exec invocation: 1>&2, which redirects stdout to stderr. Here's the same script as above, with the exec > output.txt line replaced by exec 1>&2:
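echo this goes to stdout
echo and this goes to stderr >&2
exec 1>&2
echo This is still stdout but goes elsewhere
echo but where does this go\? >&2
exec date
echo this script is kaput

Here's what happens when we throw stdout away entirely: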
$ sh test2.sh > /dev/null
and this goes to stderr
This is still stdout but goes elsewhere
but where does this go?
Sun Oct 7 10:47:44 MDT 2012
Pretty cool, eh? Everything shows up on the screen even though stdout was sent to /dev/null, because once the exec runs, file descriptor #1 points at the same place as #2: the terminal.
That's it for this month. As always, if you have any interesting scripting projects, challenges or ideas, drop me a note via http://www.linuxjournal.com/contact, and I'll have a look. Input always is welcome!
Also, if you have an extraordinary memory, you might recall that Mitch Frazier wrote about similar topics in Linux Journal's Upfront section in 2010, but his approach was considerably more complicated than mine. Sorry, Mitch!