Scripting GNU in the 21st Century
Most shell scripts people encounter are attempts at portable utilities that run properly on most, if not all, UNIX workalikes. Instead of making the best use of the shell and related programs, these scripts restrict themselves to the lowest common denominator of portable features. The extreme example of this style of programming is the configure script generated by the autoconf program.
But this is 2004, and the full GNU environment is now commonplace. The advanced shells and utilities now are the default user environment for GNU/Linux systems, and they are available as install options on BSD-based systems. Even proprietary UNIXes often have complete sets of GNU software laid atop them to bring them up to date with the modern world. Because GNU software can be obtained at little or no cost, there is no excuse to continue scripting in a retrograde proprietary environment.
I live in the San Francisco Bay Area, mere walking distance from one of the Bay Area Rapid Transit (BART) stations. I do not drive, so I rely on the system for my trips downtown. The BART Web site offers an on-line trip planner, but I perform the same sort of query often and find the interface less convenient than a command-line script.
In order to save time, I decided to write a shell script that would fetch the train arrival information for my station and display it in a colored ASCII table on stdout. It should accept station codes for any arbitrary trip but use defaults specified in a per-user configuration file. I did not want to write the schedule analysis code, so I decided to perform a screen scrape of the BART trip planner. wget would submit the trip planner form, and the resulting Web page would be formatted with various tools.
The first line of most shell scripts begins with #!/bin/sh, which causes the script to be interpreted by the venerable old Bourne shell. This is used largely because classic Bourne is the only shell guaranteed to be on all UNIX and UNIX-like systems. Because this script is designed to work in a modern GNU system, it begins with #!/bin/bash. Users of BSD systems may wish to change it to #!/usr/local/bin/bash or perhaps #!/usr/bin/env bash.
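The env variant is worth a word: rather than hard-coding a path, it searches the user's PATH for bash, so the same first line works on systems that install bash in different places:

#!/usr/bin/env bash
# env locates bash via $PATH, wherever it happens to be installed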
Using bash instead of the classic Bourne shell provides us with some useful features we can put to good use in a moment. For one thing, bash allows us to break down our script using functions. The bash string manipulation routines also can save us some time by performing operations in-line that otherwise would have to be fed into an external sed or awk process.
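As a quick taste of that in-line string handling (the filename here is arbitrary):

file="schedule.html"
echo "${file%.html}"   # prints "schedule" with no external sed or awk process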
Any good program has configuration files, so we can set up a traditional rc filesystem. The rc at the end of many configuration files stands for run commands, and it typically refers to the fact that the configuration file is loaded in like a script.
test -r /etc/bartrc && source /etc/bartrc
test -r ~/.bartrc && source ~/.bartrc
We also should set the default departure and arrival station codes to Rockridge and Embarcadero, respectively. We use the compact bash syntax for an alternate value if a variable is undefined, so users can set the BARTSTART and BARTDEST variables in their own environments if they like.
BARTSTART=${BARTSTART:-ROCKR}
BARTDEST=${BARTDEST:-EMBAR}
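A commuter with a different route could override these defaults in ~/.bartrc or the environment; the station codes below are placeholders, not verified codes:

# in ~/.bartrc -- replace with real station codes
BARTSTART=DALY
BARTDEST=FRMT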
The first function we write is the basic usage message, which helps guide development of the rest of the program.
function usage {
    echo "Usage:"
    echo "  $(basename $0) [-hl] [ [<source>] <destination> ]"
    echo "  To change defaults, set the BARTSTART and BARTDEST"
    echo "  variables in your ~/.bartrc"
    echo
    echo "Flags:"
    echo "  -l, --list   List station codes with names"
    echo "  -h, --help   This message"
}
We now have a simple usage command available that prints out the argument format for the script. Notice that we used $(basename $0) to determine automatically the filename of the script. We also allow an optional destination station code as an argument, which may be preceded by an optional departure station code.
HTTP has two methods for submitting selections to a form, GET and POST. POST is the more powerful of the two, but GET allows us to specify values in the URL itself. This makes GET the more convenient method for scripting, because we can specify all relevant form fields as part of the argument to a simple tool, such as wget.
First, we set up the base URL to the form, specifying options to minimize the amount of formatting around the data.
baseurl="http://bart.gov/textonly/stations/schedule.asp?ct=1&format=quick&print=yes"
Looking at the form's HTML source code, we determine which fields have which names and begin to construct additions to the above URL. The date we're interested in is the current moment, and we use the date command's own formatting options to construct the date and time portion of the form.
date_now=$(date +"&time_mode=departs&depart_month=%m&depart_date=%d&depart_time=%I:%M+%p")
The $( ... ) syntax is a more readable alternative to the backticks that also can be nested, allowing us to use the output of a command as part of a line of shell code.
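For example, both of the following capture the output of date, but only the first form nests cleanly inside another substitution:

now=$(date +%H:%M)
now=`date +%H:%M`      # equivalent, but backticks do not nest without escaping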
Next, we use the BARTSTART and BARTDEST variables to enter the stations in which we are interested.
stations="&origin=${BARTSTART}&destination=${BARTDEST}"
Then, we use the wget utility to submit the form, redirecting all warning messages to /dev/null so as not to confuse our script. The full function looks like this:
function submitform {
    baseurl="http://bart.gov/textonly/stations/schedule.asp?ct=1&format=quick&print=yes"
    date_now=$(date +"&time_mode=departs&depart_month=%m&depart_date=%d&depart_time=%I:%M+%p")
    stations="&origin=${BARTSTART}&destination=${BARTDEST}"
    wget -O - -o /dev/null "${baseurl}${date_now}${stations}"
}
HTML is a nested data format and often doesn't lend itself to the sort of tabular data processing at which shell tools excel. Each tag or chunk of data comes wrapped in a surrounding context, requiring more programming work to analyze the structure.
Fortunately, a tool already exists that represents nested structures in a format that's easy for shell scripts to manage: the find utility. Given a tree of directories and files, it prints output like the following:
work/
work/tmp
work/NOTES
work/outgoing
work/outgoing/e-mail
work/outgoing/done.txt
work/incoming
work/incoming/TODO
Dan Egnor has written a similar tool for HTML and XML called xml2. Given a stream of HTML tags such as <html><body><a href="http://linuxjournal.com">Linux Journal</a></body></html>, it prints the following output:
/html/body/a/@href=http://linuxjournal.com
/html/body/a=Linux Journal
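Assuming the xml2 package (which installs the html2 command for HTML input) is on the system, this is easy to try from the command line:

echo '<html><body><a href="http://linuxjournal.com">Linux Journal</a></body></html>' | html2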
I temporarily put submitform | html2 at the bottom of the script to take a look at the resulting data. I looked for the names of stations, times and other bits of information I wished to display. As luck would have it, the HTML was nicely uniform, so it was easy to separate out the important data.
The interesting data is all in table data cells within table rows within a table within a div tag within the body of the document. This means that running
submitform | html2 | grep /html/body/div/table/tr/td=
prints something like the following for each train:
/html/body/div/table/tr/td=Rockridge
/html/body/div/table/tr/td=at 4:34 pm
/html/body/div/table/tr/td=San Francisco Int'l Airport train
/html/body/div/table/tr/td=Embarcadero Station
/html/body/div/table/tr/td=at 4:54 pm
/html/body/div/table/tr/td=Bikes Allowed
Separating the HTML context from the actual data was as simple as piping the result through cut -d = -f 2 to split off everything before the first =.
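For example:

echo "/html/body/div/table/tr/td=Rockridge" | cut -d = -f 2
# prints: Rockridge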
The final data extraction function is as follows:
function extractdata {
    submitform | html2 2> /dev/null | \
        grep /html/body/div/table/tr/td= | cut -d = -f 2
}
In an earlier version of this script, I relied on an external awk program to format the data. The awk language is nice for these kinds of situations, because it has a structure in which you specify a regular expression, or other pattern, and the code to execute when a line of input matches that pattern. Thus, I could write a routine that runs whenever a certain time was encountered or when a note about bicycle rules appeared.
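A sketch of that pattern-action style might look like the following; the patterns here are an illustrative reconstruction, not the original program:

extractdata | awk '
    /train$/    { train = $0 }         # runs on lines ending in "train"
    /^at /      { when = $0 }          # runs on lines giving a time
    / Allowed$/ { print when, train }  # bicycle-rule lines trigger output
'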
The Bourne shell--yes, even the old classic one--provides us with an awk-like construct that is useful in this situation: the case statement. Combining a while loop and a case test can provide somewhat awk-like scripting features, especially when combined with bash's more advanced string manipulation.
The basic format is that of a series of shell glob patterns separated by pipes (|) and ending with a right-parenthesis. Then comes a set of shell commands, terminated with a double-semicolon (;;) before the next pattern can be specified.
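In skeleton form, with placeholder patterns and commands, it looks like this:

case $word in
    pattern1|pattern2)
        some-command
        another-command;;
    *)
        default-command;;
esac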
Let's look at the formatting function:
function formatdata {
    echo -n "Current time: $(date +'%l:%M%p')"
    echo " (note that first train listed may be in the past)"
    board="Board:"
    while read i
    do
        case $i in
            *train)
                train=$i
                departure=$arrival
                beginning=$destination;;
            at\ *)
                arrival=${i#at };;
            Timed|Transfer)
                read junk
                board="Xfer:";;
            *\ Allowed)
                echo -n "${board} "
                echo -n "(${departure}) ${beginning} to ${destination} (${arrival}) "
                echo "[${train}] (${i% Allowed})";;
            *)
                destination=$i;;
        esac
    done
}
In addition to the while loop and the case statement, this portion of the script uses an advanced feature of bash that I learned from Jim Dennis during an SVLUG meeting. ${VARIABLE#PATTERN} cuts off the left side of VARIABLE if it matches PATTERN, and ${VARIABLE%PATTERN} cuts off the right side. The trick to remembering which is which, as Jim Dennis told me, is that the # symbol (Shift+3) is to the left of the % symbol (Shift+5) on a US keyboard. This allows us to strip out unneeded text from our printout without shelling out to sed or awk.
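Applied to the strings our loop actually sees:

i="at 4:34 pm"
echo "${i#at }"        # prints "4:34 pm" -- # strips the match from the left
i="Bikes Allowed"
echo "${i% Allowed}"   # prints "Bikes" -- % strips the match from the right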
Putting extractdata | formatdata at the bottom of the script verifies that our base functionality is working as it should.
In our usage function, we mentioned a number of command-line options. For one thing, we promised that we would allow the user to list station codes. Alas, the function to do this is nowhere near as elegant as the rest of the script:
function liststations {
    wget -O - -o /dev/null 'http://bart.gov/index.asp?ct=1' | \
        html2 2> /dev/null | \
        sed '/\/html\/body\/table\/tbody\/tr\/td\/table\/tr\/td\/form\/select\/@name=origin/,/\/html\/body\/table\/tbody\/tr\/td\/table\/tr\/td\/form\/br/p;d;' | \
        cut -d = -f 2 | grep -v ^/ | tail -n +6 | \
        while read i; do read j; echo -e "$i\t$j"; read blank; done
}
The command-line arguments to a bash script are stored in numbered variables. Recall that we used $0 to get the name of the script in the usage function. The rest of the arguments are likewise stored in $1, $2 and so on. The number of arguments is stored in the $# variable.
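For instance, if the script were saved as bart (the filename is arbitrary) and invoked with two station codes:

./bart ROCKR EMBAR
# inside the script:
#   $0 = ./bart    $1 = ROCKR    $2 = EMBAR    $# = 2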
The end of our script now reads:
case $1 in
    -h|--help)
        usage
        exit 0;;
    -l|--list)
        liststations
        exit 0;;
    -*|--*)
        usage
        exit 1;;
esac

if [ $# = 1 ]
then
    BARTDEST=$1
elif [ $# = 2 ]
then
    BARTDEST=$2
    BARTSTART=$1
fi

extractdata | formatdata
The full version I use contains all sorts of extra features, including color escape sequences, return trips with a specified delay and the ability to simply spit out the URL to be pasted into a Web browser. That version will be available for download for the foreseeable future.
Nick Moffitt is a free software enthusiast living in Oakland, California, where he maintains a multiuser community shell server. He is a member of the LNX-BBC Project and maintains GAR, nwall and the popular game robotfindskitten.