Automated Composition
My Unpublished Book

During the early 1980's, I authored a textbook on composing programs. This textbook sets forth a curriculum that starts from rote operations in Chapter 3 and builds to AI search strategies in Chapter 12 and Chapter 14.

The heart of composing is the act of selection. Chapter 4, Chapter 5, and Chapter 7 address the topic of weighted selection in increasingly determinate ways. Chapter 6 and Chapter 7 address the implications of making one choice conditional upon the outcome of one or more previous choices; these chapters are thus concerned with issues of contextual sensitivity and of constraints. The notion that context might render certain choices unacceptable introduces black-and-white (right or wrong) reasoning into automated compositional decision making. This contrasts with heuristic methods that seek to rank-order choices by preference. Chapter 8 brings in elements of musical design, conditioning selection upon parameters which evolve gradually over time.
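By way of illustrating the most basic of these operations, weighted selection, here is a minimal sketch in Python rather than the FORTRAN of the original typescript; the interval options and weights are invented for the occasion:

    import random

    def weighted_choice(options, weights):
        # Select one option at random, with probability proportional to weight.
        total = sum(weights)
        threshold = random.uniform(0, total)
        cumulative = 0
        for option, weight in zip(options, weights):
            cumulative += weight
            if threshold < cumulative:
                return option
        return options[-1]  # guards against floating-point round-off

    # Invented example: favor stepwise motion over a leap.
    intervals = [-2, -1, 1, 2, 5]
    weights = [3, 4, 4, 3, 1]
    print(weighted_choice(intervals, weights))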

Chapters 3 through 8, excluding Chapter 7, bring things up to the state of the art in force around 1980. Chapter 7 addressed a particular challenge faced by composing programs up to that time: a statistically generated choice could be thrown out if the choice failed one or more constraints, thus compromising the distribution. The remedy for this conflict was to introduce heuristic selection by favoring the best among the acceptable choices. The technique described in Chapter 7 was called “cumulative feedback”; it would undergo refinements and subsequently be renamed “statistical feedback”. Statistical feedback introduces shades-of-gray (worse or better) reasoning into automated compositional decision making. The two factors — constraints and heuristics — establish the foundation for merit-based compositional decision making.
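The following Python sketch suggests how statistical feedback can reconcile constraints with a target distribution. The bookkeeping is my reconstruction of the general idea, not a transcription of the Chapter 7 program: each option accumulates a usage count, and among the options that survive the constraint test, the one whose usage most lags its target share wins.

    def statistical_feedback(options, targets, usage, is_acceptable):
        # Among the acceptable options, favor the one whose actual usage
        # most lags its target share of the distribution.
        total_target = sum(targets)
        total_usage = sum(usage) + 1  # include the choice about to be made
        best, best_deficit = None, None
        for i, option in enumerate(options):
            if not is_acceptable(option):
                continue  # constraint failure: black-and-white rejection
            deficit = targets[i] / total_target * total_usage - usage[i]
            if best is None or deficit > best_deficit:
                best, best_deficit = i, deficit
        if best is None:
            raise ValueError("no acceptable option")  # an impasse
        usage[best] += 1
        return options[best]

    # Hypothetical call: choose among twelve chromatic degrees with equal
    # targets, where the predicate ok() embodies the constraints.
    # degree = statistical_feedback(list(range(12)), [1] * 12, usage, ok)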

Chapter 9 expands upon the premise of heuristic selection by introducing the notion of a packed key. This is a single quantity incorporating several different criteria, where the criteria are ranked by significance. At a decision point, faced with some number of options, a composing program can concentrate evaluations into packed keys, then use standard sorting algorithms to arrange the options into preferential order. If the criteria enumerate desirable features, then the sort order will be descending; conversely, if the criteria enumerate undesirable features, then the sort order will be ascending. Often the least-significant component of a packed key will be a random value serving as a tie breaker.
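In a language with tuple comparison, a packed key can be expressed directly as a tuple ordered from most to least significant criterion. A hypothetical sketch, whose two criteria are invented examples of undesirable features:

    import random

    def count_leaps(melody):
        # Invented criterion: melodic leaps wider than a major second.
        return sum(1 for a, b in zip(melody, melody[1:]) if abs(b - a) > 2)

    def count_repeats(melody):
        # Invented criterion: immediate pitch repetitions.
        return sum(1 for a, b in zip(melody, melody[1:]) if a == b)

    def packed_key(melody):
        # Criteria packed most significant first; a random value breaks ties.
        return (count_leaps(melody), count_repeats(melody), random.random())

    # The criteria count undesirable features, so an ascending sort
    # arranges the options into preferential order.
    options = [[60, 62, 64], [60, 60, 67], [60, 65, 59]]
    ranked = sorted(options, key=packed_key)

In FORTRAN '77, which lacks tuples, the same effect can be obtained by scaling each criterion into its own digit range of a single integer.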

Chapter 10 introduces recursion, which is the ability of a process to nest specific entities within general ones, unfolding in a treelike structure. Chapter 11 applies recursion to musical design under the rubric of Noam Chomsky's generative grammars.
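A generative grammar reduces in code to a handful of rewrite rules applied recursively. This toy Python sketch, whose rules are invented and not drawn from Chapter 11, expands a start symbol into a flat sequence of terminals while the recursion traces out the underlying tree:

    import random

    # Invented rewrite rules: uppercase symbols are general (nonterminal),
    # lowercase symbols are specific (terminal).
    RULES = {
        "PHRASE": [["MOTIVE", "MOTIVE", "CADENCE"]],
        "MOTIVE": [["note", "note"], ["note", "MOTIVE"]],
        "CADENCE": [["note", "rest"]],
    }

    def expand(symbol):
        # Recursion nests specific entities within general ones.
        if symbol not in RULES:
            return [symbol]
        result = []
        for child in random.choice(RULES[symbol]):
            result.extend(expand(child))
        return result

    print(expand("PHRASE"))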

Chapter 12 introduces the first of two AI search strategies discussed in this textbook. The technique described here as “comparative search” is the same as what I would later come to call “exhaustive search”. Wikipedia designates the same search strategy as alpha-beta pruning. Regardless of what you call it, the approach involves recursively enumerating every possible solution to a problem and applying a minimax algorithm to discover the least-bad result. Partial solutions are evaluated by counting undesirable features, though at this level of complexity the counts may be too large to pack into single quantities. Exhaustive search becomes less and less practical as the search space expands.
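A bare sketch of the approach follows, with the minimax bookkeeping of Chapter 12 simplified here to a plain minimization over an invented penalty function:

    def exhaustive_search(partial, depth, options, penalty):
        # Recursively enumerate every complete solution, returning the pair
        # (count of undesirable features, solution) with the smallest count.
        if depth == 0:
            return penalty(partial), list(partial)
        best = None
        for option in options:
            partial.append(option)
            candidate = exhaustive_search(partial, depth - 1, options, penalty)
            partial.pop()
            if best is None or candidate[0] < best[0]:
                best = candidate
        return best

    # Invented example: three-note fragments drawn from four pitches,
    # penalizing repeated notes and leaps wider than a major third.
    def penalty(seq):
        return sum(1 for a, b in zip(seq, seq[1:]) if a == b or abs(b - a) > 4)

    print(exhaustive_search([], 3, [60, 62, 64, 67], penalty))

The combinatorial explosion is plain to see: with k options per decision and n decisions, the search must visit k^n complete solutions.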

In many applications it is simply not practical to maintain units of information in physically contiguous order. Sometimes the information might be constantly in flux, with new units being inserted and older units being deleted. Other times a programmer might desire to organize information according to several perspectives, only one of which may be reflected physically. Chapter 13 discusses how relationships between non-contiguous data elements can be implemented using links or pointers.
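In a modern language the links come for free as object references. The sketch below, which is invented rather than taken from Chapter 13, threads a pool of notes through a second, pitch-ordered perspective without physically moving anything:

    class Note:
        # One unit of information, linked into two independent orderings.
        def __init__(self, pitch, start):
            self.pitch = pitch
            self.start = start
            self.next_in_time = None   # chronological perspective
            self.next_by_pitch = None  # registral perspective

    def insert_by_pitch(head, note):
        # Splice a note into the pitch-ordered chain; no data is moved.
        if head is None or note.pitch < head.pitch:
            note.next_by_pitch = head
            return note
        node = head
        while node.next_by_pitch and node.next_by_pitch.pitch <= note.pitch:
            node = node.next_by_pitch
        note.next_by_pitch = node.next_by_pitch
        node.next_by_pitch = note
        return head

    head = None
    for pitch in (64, 60, 67):
        head = insert_by_pitch(head, Note(pitch, start=0))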

Chapter 14 closes the textbook with the second AI search strategy. The chapter designates this strategy “constrained search”; elsewhere I call it “heuristic search”; a fuller description would be “heuristically guided search with dependency-directed backtracking”.
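The skeleton of such a search might look as follows. This sketch backtracks chronologically, since dependency-directed backtracking (and everything else that makes Chapter 14 interesting) exceeds what a few lines can show:

    def constrained_search(solution, depth, options, acceptable, rank):
        # Depth-first search in which heuristics order the options and
        # constraints prune them; returns a completed solution or None.
        if depth == 0:
            return solution
        candidates = [opt for opt in options if acceptable(solution, opt)]
        candidates.sort(key=lambda opt: rank(solution, opt))  # best first
        for option in candidates:
            result = constrained_search(solution + [option], depth - 1,
                                        options, acceptable, rank)
            if result is not None:
                return result
        return None  # impasse: backtrack so the caller can change history

    # Hypothetical call, where acceptable() embodies the constraints and
    # rank() returns a heuristic penalty (lower is better):
    # piece = constrained_search([], 16, pitch_options, acceptable, rank)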

You're Looking in the Wrong Place for …

This textbook is not about “real-time composition”. This textbook covers AI topics, but not neural nets. This textbook does not delve into analysis of existing music with any view toward stylistic emulation.

I regard the term “real-time composition” as an oxymoron. Call it what you will, it has been a potent aspiration in computer music, one which has stood politically in opposition to the traditions I myself have followed. That does not mean that what this book has to say is irrelevant to real-time circumstances. Indeed the methods covered in my early chapters form the core of real-time methods, and all of the techniques presented up through Chapter 9 are amenable to real-time processing. However, when the practical demonstration for Chapter 9 gets into generating a piece in multiple passes, my curriculum forges on into realms where real-time cannot follow. The AI search techniques explored in Chapter 12 and Chapter 14 gain much of their power from the ability to go back and change history should one sequence of decisions lead to an impasse. Well, you can't change a note you've already played, and neither can a composing program working under real-time constraints. So by my lights you can have real-time or you can have intelligence, but the two are mutually antagonistic.

This book does not cover neural nets. The Computer Music Journal once devoted an entire issue to neural nets, with the implication (as I perceived things) that neural nets were cool AI technology while the recursive searches I was using were uncool. Get past it. Neural nets are a sensory technology. In the estimation of one neural-net researcher, neural nets boil down to “nonparametric statistical analyzers” (sorry I can no longer provide the reference). So if you want to use neural nets in a composing program, the generative phase comes down to weighted selection. And my curriculum has a lot to offer on weighted selection.

On the topic of musical analysis, I certainly consider that a worthwhile pursuit, so long as the output is meaningful in human terms. If your purpose is to compile pitch-succession statistics in order to populate an Nth-order transition matrix, well, Chapter 7 on Markov chains might provide some insight (or not). But pitch-succession statistics can never provide insight into cadences, because cadences happen on strong beats — so a meaningful analysis must also take rhythm into account, and the analysis must be sophisticated enough to identify strong beats. That's a whole other book, which someone other than me will have to write.
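For what it's worth, compiling a first-order transition matrix and sampling from it takes only a few lines of Python; the corpus below is invented:

    import random
    from collections import defaultdict

    def build_transitions(pitches):
        # Tally first-order pitch successions into a transition matrix.
        matrix = defaultdict(lambda: defaultdict(int))
        for a, b in zip(pitches, pitches[1:]):
            matrix[a][b] += 1
        return matrix

    def next_pitch(matrix, current):
        # Weighted selection from the row belonging to the current pitch;
        # assumes the current pitch occurred (non-finally) in the corpus.
        successors = matrix[current]
        choices, weights = zip(*successors.items())
        return random.choices(choices, weights=weights)[0]

    corpus = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]  # invented
    matrix = build_transitions(corpus)
    print(next_pitch(matrix, 60))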

Music-Theoretic Basis

This curriculum limits its music-theoretic principles to rudiments deemed minimally necessary to form a musical piece. The most fundamental of these principles is Jim Tenney's recursive model of musical structure based on Gestalt Psychology. In Tenney's model, members such as notes combine into groups such as figures and chords; these groups become members in their turn and combine into larger and larger structures. The forces holding these structures together are the Gestalt factors of proximity and similarity.

With regard to rhythm, the only principle beyond Gestalt time-grouping recognized here is that of the agogic accent: long notes are held to be more accented than shorter notes.

With regard to pitch, the most basic principle is the categorization of intervals by consonance (perfect or imperfect) and dissonance (mild or sharp). Of the intervals, the perfect octave forms an identity between pitches which allows us to label all pitches sharing octave or multi-octave relationships using the same combination of a letter name and an accidental. Within the system of equal-tempered tuning, the octave divides into twelve chromatic steps, represented by the white keys (natural letter names) and black keys (sharp or flat letter names) on a piano. These twelve steps are the degrees of the chromatic scale — in the present text, chromatic degrees for short. (The term pitch-class is also used, but not by me.) Only in recent times have composers systematically exploited all twelve chromatic degrees in close proximity. Most musical passages limit pitch usage to subsets of the chromatic scale, and these subsets are known as diatonic scales.
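In code, octave equivalence and chromatic degrees reduce to arithmetic modulo 12, as this small fragment suggests (the sharp-only spellings sidestep the choice among enharmonic equivalents):

    # Chromatic degrees 0 through 11, labeled with sharp-only spellings.
    NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def chromatic_degree(midi_pitch):
        # Pitches an octave (twelve steps) apart share the same degree.
        return midi_pitch % 12

    assert chromatic_degree(60) == chromatic_degree(72)  # both are C
    print(NAMES[chromatic_degree(66)])  # F#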

The practical examples in this typescript often provide phrase differentiation by changing from one diatonic scale to another. Relationships between scales are driven by the common-tone principles articulated by Arnold Schoenberg in two pedagogical works, Preliminary Exercises in Counterpoint and Structural Functions of Harmony. Simply put, scales sharing many tones in common are perceived as close or similar, while scales with many cross relations are perceived as distant or contrasting.
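The common-tone principle is easy to quantify: represent each scale as a set of chromatic degrees and count the intersection. A minimal sketch, assuming ordinary major scales:

    MAJOR_STEPS = (0, 2, 4, 5, 7, 9, 11)

    def major_scale(tonic):
        # The set of chromatic degrees in the major scale on the given tonic.
        return {(tonic + step) % 12 for step in MAJOR_STEPS}

    def common_tones(tonic_a, tonic_b):
        return len(major_scale(tonic_a) & major_scale(tonic_b))

    print(common_tones(0, 7))  # C major vs. G major: 6 shared tones (close)
    print(common_tones(0, 6))  # C major vs. F-sharp major: 2 shared tones (distant)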

Guiding the active selection of material is the principle of balance, of giving each resource its fair share of usage. This principle comes originally from Schoenberg's twelve-tone technique by way of serialism and, most directly, from Gottfried Michael Koenig's article about Project Two where he writes:

The trouble taken by the composer with series and their permutations has been in vain, in the end it is the statistical distribution that determines the composition.

I have stated in print my strong belief that composing programs should be the platform by which music-theoretic models are expressed and tested. Such models go way beyond the rudiments just enumerated. The music-theoretic power of composing programs was demonstrated by Kemal Ebcioğlu, whose pre-1986 system for harmonizing chorales in the style of J.S. Bach reproduced Bach's own solution on one occasion. My own delvings into automated style emulation began with the 1985 Mix or Match project with Lejaren Hiller, which among other things addressed the problem of non-literal imitation; that is, adapting musical motifs to changes in harmony. This effort continued with the 1987 Cybernetic Composer, which emulated four distinct genres of popular music using the same compositional framework. Of these genres, the ragtime style included the part-writing engine which reappears on this site, reworked into an animated graphic demonstration called the Intelligent Part Writer. The oom-pah exchange between bass and chords in the Cybernetic Composer's ragtime genre provided the kernel for a theory of rhythm which at once explains (1) how meter becomes manifest through the convergence of parts on stronger beats and the divergence of parts on weaker beats, and (2) how melodies interact with bass lines and counter-melodies by sharing out note-attacks on the weaker beats. This model of rhythm has been formalized into a compositional framework which I currently make use of in composing programs. It has also been integrated into an animated graphic demonstration on this site called the Complementary Rhythm Generator.

All of the music-theoretic models described in the preceding paragraph make use of the AI technique of constrained search described in Chapter 14 of my typescript. So if you want to get where I have gotten, you need to complete my curriculum. Now, as far as I am aware, nobody among the community of professionals who teach theory in music departments or conservatories has made any effort to implement computer-based style emulations. I may be wrong, because I don't read music-theory journals. If I'm right, then by my lights any music-theoretical assertion emanating from this community since the mid 1980's is probably unsubstantiated crap. So here's my challenge to the upcoming generation of music theorists: If you've got an idea about how notes are strung together, if you think you can explain how one style differs from another, why two composers working in the same milieu sound so different from one another, or why one jazz soloist's lines have more edge than another's, then test your idea. Implement it as a computer-based model and see what kind of music it produces. It can be done. We've had the technology for years.

Achieving Publication

During the early 1980's, I sent my typescript around to publishers. One university press expressed interest, and the typescript was sent out for review. The review was negative, so publication was declined. My disappointment was made all the more acute by my impression that the reviewer had barely skimmed the book. However, I had amassed a large body of material. Rather than let it go to waste, I began parceling it out into articles, which I sent piecemeal to journals.

To start with, there were the many historical examples which I had assembled. In these instances I had made direct contact with many of the responsible composers. They in response had generously provided not just explanations but also scores, input data, some graphical material, and other background. I compiled this material into a historical survey and pondered what to do with it. John Myhill had some connection with the arts-and-technology journal Leonardo, and John strongly urged me to send it there. I did so, and they accepted it enthusiastically. The product, greatly improved by the editorial collaboration of Leonardo's Lisa Bornstein, appeared in 1987 with the title “Automated Composition in Retrospect: 1956-1986”. Leonardo subsequently bestowed upon me their annual Coler-Maxwell award for excellence (shared). But I never did receive the promised holographic medal.

All told, this typescript provided original material for eight articles, published mostly in Leonardo or its spin-off Leonardo Music Journal, but also in Interface: Journal of New Music Research and Perspectives of New Music. The chapter summaries which follow will detail which chapter contributed to which article. The articles in all instances refine and improve upon the original typescript, and they all adopt a FORTRAN-independent presentation. Thus you can gain the benefits of my curriculum by reading through the chapter summaries, linking out to the journals, and skipping the original typescript. That is in fact what I recommend. The typescript is simply a historical artifact, posted here as evidence that such a textbook existed.

One major component of the textbook only partially received the light of publication. This was the practical component, the Eleven Demonstrations for solo clarinet. Each piece was produced using one of eleven composing programs devised specifically to illustrate the methods presented in a chapter. The glimpses of the Eleven Demonstrations that did reach publication appeared in language-independent presentations, and this was just not right! In this special circumstance the whole point is to show the mechanics in play. What kind of information is required for the method to operate? How must this information be initialized? How should decisions be prepared? How do the facts come together at the decision point? Is follow-up necessary?

Such considerations motivated me to post the Eleven Demonstrations to charlesames.net after creating the web site in 2013. However, by this time the typescript had long since disappeared into my ‘archives’. Only during an intensive de-hoarding effort did the typescript actually turn up, and then only in photocopy. This was immediately scanned to PDF, and I proceeded to excerpt the Eleven Demonstrations text and transcribe it into hypertext. But all I just wrote about showing the mechanics in play — that wasn't happening in my original explanations. So in the newly posted Eleven Demonstrations pages, my original text has been refashioned to explain things that should have received attention in the first place.

Choice of Programming Language

I selected FORTRAN '77 for my early composing programs because it offered block-structured syntax, no interpretive or pseudocode overhead, and no under-the-hood memory bloat. The university's Control Data 6600 mainframe was limited to 16-bit memory addressing, and none of the compilers available on the mainframe supported heap-allocated memory. FORTRAN '77 remained the only viable language option after 1979, when I spent too much money purchasing a private minicomputer so I could run composing programs without time limits.

Thus FORTRAN, the language which computer scientists have collectively spat upon at least since the introduction of ALGOL-60. But this is not the FORTRAN of numeric statement labels and wild GOTOs. This is the readable edition. And it is the language that supported me through two pieces described in the Computer Music Journal, through five articles about pieces in Interface: Journal of New Music Research, and through the effort for Expo '85 which landed me support from Ray Kurzweil. Only once I started working for Ray in 1987 did I finally abandon FORTRAN for C.

As I have stated previously, the historical and theoretical components of my curriculum have mostly been refashioned into language-independent journal articles. You can therefore get through the curriculum without reference to my typescript and its FORTRAN sources. However, the FORTRAN-free option does not extend to the curriculum's practical component, the Eleven Demonstrations, which provide concrete source code illustrating how particular techniques can be integrated into composing programs. For these historical artifacts, conversion to a more contemporary language would have invited massive goal inflation. Why stop at translating keywords and reworking array references? Certainly the array-based data structures should be upgraded to heap-allocated objects. Why represent durations and pitches as scalar integers and write things out to text files, when I can leverage my current score framework to generate MusicXML score files? All this would have taken time and energy I no longer have; it would also have greatly complicated the programs to no functional advantage. What I have done instead is eliminate direct references from the revised Eleven Demonstrations pages back to the FORTRAN code inside the typescript. You can get there if you really want to, but if there's another FORTRAN-free place to look, I direct you there first.

Gradus ad Parnassum

For the most part, each chapter in this book builds upon the chapters which precede it. As I stated above, Chapter 8 brings things up to the state of the art as of around 1980. Regrettably, the published books about composing programs which I have seen since 1984 do not bring things much further. Traveling upwards toward the lofty peaks, these books won't get you out of the valley. My curriculum at least will lead you decisively into the foothills. But don't assume for a minute that if you've worked your way through most of this curriculum you can pretend to expertise. Until you master heuristically guided constrained searches with dependency-directed backtracking, the technique described in Chapter 14, you're short of the state of the art. And you cannot begin to understand Chapter 14 without understanding both heuristic scheduling (Chapter 9) and constrained selection with statistical feedback (Chapter 7). Plus the many other bits along the way that help you decide what to do once you're capable of implementing a functional search.

© Charles Ames Page created: 2017-03-12 Last updated: 2017-03-12