Algorithmic Music Composition With Linux - athenaCL
In this conclusion to my survey of algorithmic music composition systems for Linux I present Christopher Ariza's athenaCL.
According to its Web site athenaCL is
... an open-source, object-oriented composition tool written in Python. The system can be scripted and embedded, and includes integrated instrument libraries, post-tonal and microtonal pitch modeling tools, multiple-format graphical outputs, and musical output in Csound, SuperCollider, Pure Data, MIDI, audio file, XML, and text formats ... Over eighty specialized Generator, Rhythm, and Filter ParameterObjects provide tools for stochastic, chaotic, cellular automata based, Markov based, generative grammar and Lindenmayer system (L-system), wave-form, fractional noise (1/f), genetic, Xenakis sieve, linear and exponential break-point segments, masks, and various other algorithmic models. ParameterObjects can be embedded in other ParameterObjects to provide powerful dynamic and masked value generation.
My experience with athenaCL has been a little different from my explorations of CM/Grace and CsoundAC. The system is organized in a unique manner that I found confusing at first. However, my initial confusion quickly gave way to amazement, and I must confess that I've become a devoted user.
Testing
For this review I tested athenaCL 1.4.9, the most recent stable version of the program, released in August 2009. AthenaCL 2 is available now as alpha-stage software, but the project documentation isn't ready for it yet. I ran the program on 64-bit Debian with Python 2.5 and on 32-bit Ubuntu with Python 2.6. Except for the speed of the 64-bit box, the systems behaved identically.
The use of Python in CsoundAC, Steven Yi's blue, and Oeyvind Brandtsegg's ImproSculpt suggests that Python has a special affinity for the task of composing music by algorithm. Indeed, the language is extensible, relatively easy to learn, and performs decently on modern hardware. However, rendering audio in realtime is not one of Python's strengths, so in athenaCL that task is handed off to a dedicated audio rendering subsystem such as Csound.
Installation & Configuration
Source code packages can be downloaded for UNIX/Linux and MacOSX machines, and an installer is available for Windows. Installation is simple on Linux machines. Download the stable tarball from the athenaCL site and unpack it in your directory of choice ($HOME is good), then read the README text file. At this point most systems should be ready to go. However, athenaCL's dependencies may vary according to your project requirements. You can use the system without any playback or graphics output support, but a complete environment includes Csound5, TkInter, the Python Imaging Library (PIL), and a working ALSA sound system. Audio and MIDI file players are user-definable. athenaCL employs Csound to render files produced by the system, thanks to a library of Csound-based instruments and effects. The native user interface is the Python interpreter's command-line, but the user can request GUI file dialogs and JPG/PNG graphic displays for certain commands.
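If you're not sure whether the optional pieces are in place, a quick test from Python will tell you. The little script below is my own convenience, not part of athenaCL; it assumes the Python 2 module names for the Tk bindings and the imaging library:

# check_deps.py - an informal test for athenaCL's optional graphics dependencies
# (my own convenience script, not part of the athenaCL distribution)
optional = {}

try:
    import Tkinter          # Tk bindings, used for the tk graphic displays
    optional['TkInter'] = True
except ImportError:
    optional['TkInter'] = False

try:
    import Image            # classic PIL module name, used for JPG/PNG output
    optional['PIL'] = True
except ImportError:
    optional['PIL'] = False

for name in sorted(optional):
    print('%s available: %s' % (name, optional[name]))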
I maintain the latest & greatest Csound on all my machines, but athenaCL's default MIDI player wasn't installed on any of them. I decided to set the default to TiMidity, Ubuntu's choice for its system MIDI file player. My aging 64 Studio box didn't have TiMidity, so I downloaded and built the sources. Installation was fast and flawless. On both machines I edited /usr/share/timidity/timidity.cfg to use the Fluid General MIDI soundfont, a better-sounding alternative to the default sound set. The AthenaPreferences command for external applications guided me through setting my preferred player, and I was in business with MIDI and athenaCL.
Running The System
When your preferred support packages have been installed, you can run the setup and start-up scripts found in the top-level directory of the source package:
sudo python setup.py
python athenacl.py
After starting athenaCL for the first time you should see something like this output at the prompt:
dlphilp@The3800:~/src/athenaCL$ python athenacl.py
athenaCL 1.4.9 (on linux2 via terminal threading off)
Enter "cmd" to see all commands. For help enter "?".
Enter "c" for copyright, "w" for warranty, "r" for credits.

:: AUup
check online for updates to athenaCL? (y or n): y
athenaCL 1.4.9 (2009.08.15) is up to date.

[PI()TI()] ::
That weird prompt is significant. The PI() indicates that there's no Path Instance defined yet, and the TI() tells us the same about something called a Texture Instance. We may not know yet what they signify, but evidently they are important to the system. We'll see how important in a few moments. At this point we're ready to start running some commands and scripts. I'll start with a fairly high-level action - the creation of a pitch resource, a.k.a. a path - and proceed to another such action - making a texture - before heading on to audio/MIDI file output. The example illustrates the economy of code required for musically effective results. Don't worry if you don't fully understand what's happening; I'll clarify things as we go along.
First I'll create a pitch resource, what athenaCL calls a PathInstance, the PI in the prompt. I'll start with something easy:
[PI()TI()] :: pin
name this PathInstance: p1
enter a pitch set, sieve, spectrum, or set-class: c3,d3,e3,f3,g3,a3,b3,c4
SC 7-35 as (C3,D3,E3,F3,G3,A3,B3,C4)? (y, n, or cancel): y
add another set? (y, n, or cancel): n
PI p1 added to PathInstances.

[PI(p1)TI()] ::
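Under the hood that path is simply an ordered pitch collection. If you're curious what those note names amount to as MIDI key numbers, here's a small plain-Python translation of the set entered above. It assumes the common convention that C4 is middle C (MIDI note 60) and is only an illustration of the data, not athenaCL's internal representation:

# Translate the note names entered for p1 into MIDI note numbers,
# assuming C4 = middle C = MIDI 60 (a common convention, not necessarily athenaCL's).
NOTE_OFFSETS = {'c': 0, 'd': 2, 'e': 4, 'f': 5, 'g': 7, 'a': 9, 'b': 11}

def name_to_midi(name):
    letter, octave = name[0].lower(), int(name[1:])
    return 12 * (octave + 1) + NOTE_OFFSETS[letter]

path = ['c3', 'd3', 'e3', 'f3', 'g3', 'a3', 'b3', 'c4']
print([name_to_midi(n) for n in path])
# [48, 50, 52, 53, 55, 57, 59, 60]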
Case is ignored at the prompt; capitalization in the documentation marks the short form of each athenaCL command. For example, the randomUniform number generator can be abbreviated to ru, one of the many UI amenities in athenaCL.
The change in the prompt indicates that p1 is the active PathInstance. We have a pitch collection, but now we need an instrument, tempo control, a rhythm generator, panning, and other musical factors. We need to make a TextureInstance, but first we should set the default event mode with the EMo command. We can view the available event modes with the EMls command:
[PI(p1)TI(t1)] :: emls
EventMode modes available: {name}
   csoundExternal
 + csoundNative
   csoundSilence
   midi
   midiPercussion
We want to create a MIDI file, so we'll change the mode:
[PI(p1)TI(t1)] :: emo m
EventMode mode set to: midi.
Check it:
[PI(p1)TI(t1)] :: emls
EventMode modes available: {name}
   csoundExternal
   csoundNative
   csoundSilence
 + midi
   midiPercussion
Now we can make our new TextureInstance with the TIn command. As I said, don't worry if it's not all coming together yet. We start with the minimum input required from the user - a name for the texture and a General MIDI instrument number:
[PI(p1)TI()] :: tin
name this texture: t1
enter instrument number:
(0 ... 127) or "?" for instrument help: 5
TI t1 created.

[PI(p1)TI(t1)] ::
Again the prompt reflects the change. Now we have an active texture, but we're not done defining it. We'll use TIv - the TextureInstance view command - to see what else we can manipulate within t1:
[PI(p1)TI(t1)] :: tiv
TI: t1, TM: LineGroove, TC: 0, TT: TwelveEqual
pitchMode: pitchSpace, polyMode: set, silenceMode: off, postMapMode: on
midiProgram: ePiano
status: +, duration: 000.0--20.09
(i)nstrument        5 (generalMidi: ePiano)
(t)ime range        00.0--20.0
(b)pm               constant, 120
(r)hythm            loop, ((4,1,+),(4,1,+),(4,5,+)), orderedCyclic
(p)ath              p1 (C3,D3,E3,F3,G3,A3,B3,C4) 20.00(s)
local (f)ield       constant, 0
local (o)ctave      constant, 0
(a)mplitude         constant, 0.9
pan(n)ing           constant, 0.5
au(x)iliary         none
texture (s)tatic
   s0               parallelMotionList, (), 0.0
   s1               pitchSelectorControl, randomPermutate
   s2               levelFieldMonophonic, event
   s3               levelOctaveMonophonic, event
texture (d)ynamic   none
As a matter of fact we could proceed to file realization by accepting the rest of the texture's default values, but where's the fun in that? Let's edit the time range, the tempo (bpm), the rhythm structure, the amplitude values, and the panning. This time TIe - the TextureInstance edit command - is our friend:
; Set the output duration in seconds.
[PI(p1)TI(t1)] :: tie t 0.0,60

; Set the tempo to 200 bpm.
[PI(p1)TI(t1)] :: tie b c,200

; Define the rhythm structure.
[PI(p1)TI(t1)] :: tie r l,((2,2,4),(4,2,4),(6,4,2,2)),rc

; Determine amplitude scaling from a uniform random
; distribution between .3 and .7.
[PI(p1)TI(t1)] :: tie a ru,.3,.7

; Set pan position from a linear random distribution
; between .1 and .9.
[PI(p1)TI(t1)] :: tie n rl,.1,.9
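Conceptually, those last two edits tell athenaCL to draw a fresh amplitude and a fresh pan position for every event from a bounded random range. The sketch below is a plain-Python analogue of that per-event behavior. It uses a flat distribution for both values - I haven't verified exactly how randomLinear shapes its output - so treat it as a picture of the idea rather than athenaCL's implementation:

# A conceptual analogue of per-event value generation: each event gets its own
# amplitude from a flat distribution in [0.3, 0.7] and its own pan position in
# [0.1, 0.9]. Illustration only, not athenaCL's randomUniform/randomLinear code.
import random

def make_events(count):
    events = []
    for i in range(count):
        amp = random.uniform(0.3, 0.7)   # like the 'tie a ru,.3,.7' edit
        pan = random.uniform(0.1, 0.9)   # stand-in for the randomLinear pan edit
        events.append({'index': i, 'amp': round(amp, 3), 'pan': round(pan, 3)})
    return events

for event in make_events(5):
    print(event)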
Let's look into our texture now:
[PI(p1)TI(t1)] :: tiv
TI: t1, TM: LineGroove, TC: 0, TT: TwelveEqual
pitchMode: pitchSpace, polyMode: set, silenceMode: off, postMapMode: on
midiProgram: ePiano
status: +, duration: 000.0--60.14
(i)nstrument        5 (generalMidi: ePiano)
(t)ime range        00.0--60.0
(b)pm               constant, 200
(r)hythm            loop, ((2,2,+),(4,2,+),(6,4,+)), randomChoice
(p)ath              p1 (C3,D3,E3,F3,G3,A3,B3,C4) 60.00(s)
local (f)ield       constant, 0
local (o)ctave      constant, 0
(a)mplitude         randomUniform, (constant, 0.3), (constant, 0.7)
pan(n)ing           randomLinear, (constant, 0.1), (constant, 0.9)
au(x)iliary         none
texture (s)tatic
   s0               parallelMotionList, (), 0.0
   s1               pitchSelectorControl, randomPermutate
   s2               levelFieldMonophonic, event
   s3               levelOctaveMonophonic, event
texture (d)ynamic   none
The TImap command will create a nice graphic display of the texture and its components. Figure 1 shows off athenaCL's support for TkInter, a Python interface to the Tk graphics widgets. The display commands are especially useful for visualizing the effects of random and other distributions. In Figure 1 you can see the maps of the amplitude and panning curves, each created with a different random distribution.
Okay, we've defined a PathInstance and a TextureInstance. All that's left is the output stage. We'll use the ELn and ELh commands to create a new event list (ELn) and then summon the default player to hear the results of the processed list (ELh). Note that multiple commands can be stacked on one line, separated by semicolons:
[PI(p1)TI(t1)] :: eln ~/el-new.xml; elh
EventList el-new complete:
/home/dlphilp/el-new.mid
/home/dlphilp/el-new.xml
Playing /home/dlphilp/el-new.mid.
MIDI file: /home/dlphilp/el-new.mid.
Format: 1
Tracks: 1
Divisions: 960
Sequence: el-new
Text: created with athenaCL
Track name: t1
Playing time ~64 seconds
Notes cut: 0
Notes totally lost: 0
EventList hear initiated: /home/dlphilp/el-new.mid
You can hear the results at el-new.mp3.
This tutorial illustrates the central nature of the Texture Instance. Textures are at the core of the system, and users will spend most of their time developing textures for their own purposes. According to its documentation the notion of a Texture starts with
... a conception of broad structural archetypes of musical shapes, and is used in this context to mean any sort of musical gesture, phrase, form, or structure.
As we've seen, the texture defines a variety of musical factors, some of which take complex expressions as arguments, such as the rhythm notation from the example:
l,((2,2,4),(4,2,4),(6,4,2,2)),rc
The l identifies a looping structure. The numbers in parentheses represent sets of rhythmic values. Each time the loop runs through the sets it makes a random choice - the rc notation - to create a different rhythmic structure on each pass. Incidentally, the randomChoice selection option could be replaced by randomPermutate, randomWalk, orderedCyclic, or orderedOscillate. Simply changing the selection mechanism can make a dramatic difference in the output.
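To make the selector idea concrete, here's a small plain-Python sketch that steps through the same three pulse groups with an orderedCyclic-style selector and a randomChoice-style selector. It's a conceptual illustration of the behavior described above, not athenaCL's own selector code:

# Two of the selection behaviours applied to the rhythm groups from the example.
# orderedCyclic walks the list in order and wraps around; randomChoice picks
# independently at random on every pass. (Conceptual sketch, not athenaCL internals.)
import random

groups = [(2, 2, 4), (4, 2, 4), (6, 4, 2, 2)]

def ordered_cyclic(items, count):
    # walk the list in order, wrapping around when it runs out
    return [items[i % len(items)] for i in range(count)]

def random_choice_seq(items, count):
    # pick any group, any time
    return [random.choice(items) for i in range(count)]

print(ordered_cyclic(groups, 6))      # deterministic, repeating order
print(random_choice_seq(groups, 6))   # different on every run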
At this point I can copy my texture and perform other operations on the copies. The Texture is indeed the center of the composer's attention when working with athenaCL. Alas, I've given only the barest hint of what can be done with the TextureInstance. Fortunately, the system is easily learned and invites extensive experimentation. The Python interpreter's command shell includes command completion and recall, multiple command stacking, and access to athenaCL's on-line help. The author recommends that you consider athenaCL as a generative system whose output is intended to be carried into other programs for further processing and composition. You can employ athenaCL to create a whole composition without the use of external software, but its design favors a more sectional approach.
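Because athenaCL is an ordinary Python package, the same commands typed at the prompt can also be driven from a script. The sketch below follows the embedding approach shown in the athenaCL documentation - an Interpreter object whose cmd() method accepts the familiar command strings - but the exact module path may differ between releases, so consider it an outline rather than copy-and-paste code:

# Batch version of the tutorial session, following the embedding interface
# described in the athenaCL documentation (athenaObj.Interpreter and its cmd()
# method). The module path below may vary between athenaCL releases.
from athenaCL.libATH import athenaObj

ath = athenaObj.Interpreter()

ath.cmd('emo m')                                     # MIDI event mode
ath.cmd('pin p1 c3,d3,e3,f3,g3,a3,b3,c4')            # the path from the example
ath.cmd('tin t1 5')                                  # texture t1, GM instrument 5
ath.cmd('tie t 0.0,60')                              # one minute of output
ath.cmd('tie b c,200')                               # 200 bpm
ath.cmd('tie r l,((2,2,4),(4,2,4),(6,4,2,2)),rc')    # the looping rhythm
ath.cmd('tie a ru,.3,.7')                            # per-event amplitudes
ath.cmd('eln /tmp/el-batch.xml')                     # write the event list and MIDI file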
A few words more about the PathInstance. Your paths provide your textures with their collections of pitch/frequency/noise resources. A path can be built in a variety of ways - including a neat "import from Audacity spectrum analysis file" function - and a single instance can contain multiple resource sets, i.e. paths. Each path has its own name and is subject to manipulation with or without its fellow paths. Microtonality is supported, and a variety of predefined temperaments are available for application to your paths. There's more, but you get the idea. The PathInstance is a powerful and flexible method of addressing pitch matters when making music with algorithms.
GUI Tools
Like Common Music and CsoundAC, athenaCL has been designed for use from the command-line, more specifically from the Python interpreter's prompt. On UNIX/Linux this design works beautifully, thanks to user-friendly amenities such as command completion and the others already mentioned. However, the system also supports graphics for TImap and other map commands. The APgfx command lets the user choose the default graphics format from text, tk (which depends on TkInter), eps (PostScript), JPG, or PNG. When a command calls the athenaCL graphics function, the display appears in text and in the user-selected format. The graphics can't be edited, but they are quite helpful when designing textures.
A search for "athenaCL GUI" led me to ComposGUI, a browser-based interface that depends on PHP and the mod_python module. A working example page is available, but unfortunately the project appears to have become abandonware. A more developed Web-based GUI exists at envl.net (Figure 2), an interactive site maintained by Christopher Ariza himself. Though he doesn't call it a GUI for athenaCL, it is "Powered by athenaCL". The site's main menu offers these interactive generators:
- CloudBeta: Beta-distribution event density generator.
- HarmonyHitch: Algorithmic harmony generator.
- HarmonyQuake: Algorithmic harmonic recombination.
- PolyPulse: Cyclical pulse-based polyrhythm generator.
- PolyPulsePlus: Algorithmic pulse-based polyrhythm generator.
- RhythmRemap: Algorithmic rhythm generator.
- RhythmWeight: Algorithmic weighted-random rhythm generator.
- SieveSequence: Sieve rhythm generator.
- TuneTile: Algorithmic pitch and rhythm canon generator.
- TuneTwine: Algorithmic pitch and rhythm generator.
- TuneWeight: Algorithmic weighted-random melody generator.
The user enters the requisite data, and the site processes it and returns an XML file formatted for use with athenaCL or a MIDI file usable just about anywhere. Sweet.
Mr. Ariza has also set up the athenaCL netTools, another set of Web-based utilities for his system. The currently available tools include set class and map class analyzers and a pitch class converter. Since MIDI and Csound have different methods for representing pitch, the conversion utility is a handy device when moving between targets during a single session.
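The arithmetic behind such a converter is easy to sketch in a few lines of Python. Csound's octave-point-pitch-class (pch) notation puts middle C at 8.00, MIDI puts it at note number 60, and frequency follows from the usual equal-tempered formula. The helper names below are my own:

# Convert a MIDI note number to Csound pch notation (8.00 = middle C = MIDI 60)
# and to frequency in Hz (A4 = MIDI 69 = 440 Hz). Helper names are my own.
def midi_to_pch(note):
    octave, pitch_class = divmod(note - 60, 12)
    return '%d.%02d' % (8 + octave, pitch_class)

def midi_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

for note in (48, 60, 69):
    print('%3d -> pch %s, %.2f Hz' % (note, midi_to_pch(note), midi_to_hz(note)))
# 48 -> pch 7.00, 130.81 Hz
# 60 -> pch 8.00, 261.63 Hz
# 69 -> pch 8.09, 440.00 Hz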
I understand why many users prefer GUI-based tools. Most of my Linux audio work is done with such tools, but in the domain of algorithmic music composition I like to stay close to text-based interfaces. AthenaCL takes advantage of the Python environment to promote a very fast and productive workflow. However, if you want to test athenaCL without much typing at all, check out the menu at envl.net.
Documentation
The documentation for athenaCL is extensive and thorough. It includes the author's book-length dissertation An Open Design for Computer-Aided Algorithmic Music Composition: athenaCL, a tutorial manual, a variety of studies and pieces generated with the assistance of athenaCL, and some experimental videos with soundtracks by athenaCL.
System help is available with the help command, or you can add a specific command for targeted help, e.g. help TIv. For a more leisurely presentation, the AUdoc command opens the athenaCL documentation in your default browser. However you reach it, the help system provides extensive information about the commands and their usage. I suggest that new users browse the help files for various commands; I guarantee you'll be impressed with the breadth of possibilities.
Assessment
In many ways athenaCL is the most comprehensive system that I've used for algorithmic composition. Its feature set is rich in familiar and unusual resources, and its reliance on Python eases the way into the system through a powerful general-purpose programming language. Like Grace and CsoundAC, athenaCL does not require proficiency in its core language - you'll learn plenty about Python as you work your way through the tutorial examples - but of course your experience will be enhanced if you've already worked with Python or a similar scripting language.
I want to emphasize the user-friendly aspect of the Python interpreter command-line interface. Its flexibility contributed greatly to a very fast work-cycle and helped make a complex system such as athenaCL a pleasure to learn and use.
Christopher Ariza is deeply engaged with research into algorithmic music composition. Thanks to his efforts we have the athenaCL system itself, its excellent documentation, and many representative compositions. Happily, those compositions include the athenaCL scripts used to generate them, giving the student even more material to guide his or her explorations into the system.
Outro
This article concludes our mini-tour of algorithmic composition environments for Linux. I hope you've enjoyed the sights and sounds along the way, and I urge readers and fellow composers to check out these systems for possibilities relevant to their own work. The element of surprise is an indispensable component of music, and the programs reviewed in this series are capable of some very surprising results.
I have a variety of articles in the works, including a look into PulseAudio, an appreciation of the AVLinux distribution, and a report on running legacy DOS and Atari MIDI software. I'm at no loss for things to do, but feel free to suggest any Linux sound-related topics you'd like to see covered here.