Conditional Selection I: Markov Chains

The typescript for chapter 6 is provided in AutomatedComposition06.pdf.

Chapter 6 describes Markov chains, which are successions of random events,
where the outcome of each event is influenced by one or more of the event's immediate predecessors.
The chapter begins by explaining the chaining mechanism along with the implications of chain *order*: if each event is influenced
solely by its most immediate predecessor, then the chain is said to be *1st order*; if the influence extends back through the two
most recent predecessors, the chain is *2nd order*, and so forth.

The chapter next focuses upon properties of 1st-order chains: *waiting counts* and *stationary probabilities*.
Then follows a survey of Markov processes used in composing programs. These include Hiller & Isaacson's *Illiac Suite*,
Experiment 4, Iannis Xenakis's Markovian stochastic music, Hiller & Baker's *Computer Cantata* and David Zicarelli's Jam Factory.
Special mention is given to the INTERVAL feature of Gottfried Michael Koenig's Project Two.
Curtis Roads's PROCESS/ING program also receives acknowledgement here, although since Roads came to his concept by way of connection networks,
he would perhaps not be happy with my classification.

Chapter 6 was converted into a language-independent presentation and sent along to *Leonardo*, which published it in 1989 as
“The Markov Process as a Compositional Model: A Survey and Tutorial”.
Given that, the only content unique to the typescript is FORTRAN implementation detail.

P_{k-2} | P_{k-1} | E4 | F4 | G4 | A4 | B4 | C5 | D5 |
---|---|---|---|---|---|---|---|---|
E4 | F4 | | | 3/7 | 1/7 | 1/7 | 1/7 | 1/7 |
E4 | G4 | | 1/1 | | | | | |
E4 | A4 | | | 1/1 | | | | |
E4 | B4 | | | | 1/1 | | | |
E4 | C5 | | | | | 1/1 | | |
E4 | D5 | | | | | | 1/1 | |
F4 | E4 | | | 3/7 | 1/7 | 1/7 | 1/7 | 1/7 |
F4 | G4 | 3/9 | | | 3/9 | 1/9 | 1/9 | 1/9 |
F4 | A4 | | | 1/1 | | | | |
F4 | B4 | | | | 1/1 | | | |
F4 | C5 | | | | | 1/1 | | |
F4 | D5 | | | | | | 1/1 | |
G4 | E4 | | 1/1 | | | | | |
G4 | F4 | 3/9 | | | 3/9 | 1/9 | 1/9 | 1/9 |
G4 | A4 | 1/9 | 3/9 | | | 3/9 | 1/9 | 1/9 |
G4 | B4 | | | | 1/1 | | | |
G4 | C5 | | | | | 1/1 | | |
G4 | D5 | | | | | | 1/1 | |
A4 | E4 | | 1/1 | | | | | |
A4 | F4 | | | 1/1 | | | | |
A4 | G4 | 1/9 | 3/9 | | | 3/9 | 1/9 | 1/9 |
A4 | B4 | 1/9 | 1/9 | 3/9 | | | 3/9 | 1/9 |
A4 | C5 | | | | | 1/1 | | |
A4 | D5 | | | | | | 1/1 | |
B4 | E4 | | 1/1 | | | | | |
B4 | F4 | | | 1/1 | | | | |
B4 | G4 | | | | 1/1 | | | |
B4 | A4 | 1/9 | 1/9 | 3/9 | | | 3/9 | 1/9 |
B4 | C5 | 1/9 | 1/9 | 1/9 | 3/9 | | | 3/9 |
B4 | D5 | | | | | | 1/1 | |
C5 | E4 | | 1/1 | | | | | |
C5 | F4 | | | 1/1 | | | | |
C5 | G4 | | | | 1/1 | | | |
C5 | A4 | | | | | 1/1 | | |
C5 | B4 | 1/9 | 1/9 | 1/9 | 3/9 | | | 3/9 |
C5 | D5 | 1/7 | 1/7 | 1/7 | 1/7 | 3/7 | | |
D5 | E4 | | 1/1 | | | | | |
D5 | F4 | | | 1/1 | | | | |
D5 | G4 | | | | 1/1 | | | |
D5 | A4 | | | | | 1/1 | | |
D5 | B4 | | | | | | 1/1 | |
D5 | C5 | 1/7 | 1/7 | 1/7 | 1/7 | 3/7 | | |

Table 1: 2nd-order Markov transition matrix for pitches. The column pitches are the candidates for P_{k}; blank cells are zero-probability transitions.

The selection model adopted in this chapter resembles Project Two to the extent
that decisions pertaining to a specific musical attribute all draw from the same *supply* of options.
Project Two contains no *selection feature* implementing Markov
chains, though the INTERVAL chord-selecting principle does betray Markovian influences.

Suppose you have a sequence of notes and you want to assign a pitch to each note. Suppose further that you want to draw these pitches from the set E4, F4, G4, A4, B4, C5, and D5 and that you want consecutive pitches to obey the following constraints:

- No pitch should be selected until two other pitches have been used.
- Leaps of a minor third or larger should be resolved by stepwise motion in the opposite direction.

One more thing: you want two motions to be preferred, either (1) continuing stepwise in the same direction or (2) skipping by a third in the opposite direction (an escaping tone).

Notice in Table 1 that whenever a source pair of pitches proceeds by stepwise motion, five target pitches are offered; otherwise, the target pitch can only be the stepwise resolution. In the former circumstance, each of the two preferred motions receives three times the weight of any remaining option.
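These rules are mechanical enough that the whole transition table can be generated in code. The sketch below is mine, in Python rather than the typescript's FORTRAN, and the names `SUPPLY` and `build_matrix` are my own; it derives the urn weights underlying Table 1 directly from the constraints and preferences just stated.

```python
from itertools import permutations

# Pitch supply in scale order; list indices serve as scale degrees.
SUPPLY = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def build_matrix():
    """Derive Table 1: map each (P[k-2], P[k-1]) pair to urn weights for P[k]."""
    matrix = {}
    n = len(SUPPLY)
    for a, b in permutations(range(n), 2):
        if abs(b - a) == 1:
            # Stepwise motion: offer every pitch except the two just used,
            # weighting stepwise continuation and the escaping tone 3-to-1.
            d = b - a
            preferred = {b + d, b - 2 * d}
            row = {SUPPLY[c]: (3 if c in preferred else 1)
                   for c in range(n) if c not in (a, b)}
        else:
            # Leap of a third or larger: force stepwise resolution
            # in the opposite direction.
            row = {SUPPLY[b - 1 if b > a else b + 1]: 1}
        matrix[(SUPPLY[a], SUPPLY[b])] = row
    return matrix
```

The (G4, A4) row, for example, comes out as weights 1, 3, 3, 1, 1 over E4, F4, B4, C5, D5, matching the 1/9 and 3/9 entries of Table 1.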

Markov matrices impose *constraints* against particular sequences of supply elements by the simple expedient of setting
the corresponding transition probability to zero.
Constraint satisfaction was historically a core
subject in artificial intelligence. Within a Markov matrix, however, a ‘source’ that admits no targets
means that when a chain reaches that source, the chain ends. Under no other circumstances will the constraints inherent
in the matrix bring about an impasse.

The constraints and preferences just listed have been itemized in Table 1, which is technically
distinguished as a *2nd-order Markov transition matrix*. Suppose this process had previously selected
*P*_{k-2}=A4 for note *k*-2
and *P*_{k-1}=G4 for note *k*-1.
Then for note *k*:

- the probability of selecting E4 will be 1/9,
- the probability of selecting F4 will be 3/9,
- the probability of selecting B4 will be 3/9,
- the probability of selecting C5 will be 1/9, and
- the probability of selecting D5 will be 1/9.

Notice that the denominator in any given row is always chosen so that the probabilities sum to unity.
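In code, this convention amounts to normalizing each row's urn weights. Exact rational arithmetic keeps the fractions readable; the helper name `to_probabilities` below is mine, applied here to the G4-A4 row.

```python
from fractions import Fraction

def to_probabilities(weights):
    """Normalize integer urn weights into exact probabilities summing to 1."""
    total = sum(weights.values())
    return {pitch: Fraction(count, total) for pitch, count in weights.items()}

# The G4-A4 row of Table 1: urn counts 1, 3, 3, 1, 1 give denominator 9.
row = to_probabilities({"E4": 1, "F4": 3, "B4": 3, "C5": 1, "D5": 1})
```

Note that `Fraction` reduces 3/9 to 1/3 automatically; the denominators shown in Table 1 are the unreduced urn totals.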

Now it happens that whenever you have a matrix like Table 1, where each state is able to communicate (directly
or indirectly) with every other state, then it is possible to calculate *steady-state* probabilities. For example, the steady-state
probabilities for Table 1 are listed in Table 2.

Pitch | E4 | F4 | G4 | A4 | B4 | C5 | D5 |
---|---|---|---|---|---|---|---|
Probability | 0.085 | 0.155 | 0.174 | 0.173 | 0.174 | 0.155 | 0.085 |

Table 2: Steady-state probabilities for the transition matrix listed in Table 1.

This is the same distribution used to guide the weighted random selection scenarios described for Chapter 4 and Chapter 5, and indeed the present scenario provided the genesis for these values. The taperings off toward E4 and D5 reflect the fact that these pitches can only be approached from one side.
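One way to compute such steady-state probabilities is to treat each (P_{k-2}, P_{k-1}) pair as a single state of an ordinary 1st-order chain, run power iteration, and then sum the pair distribution over the earlier pitch. The Python sketch below is mine; it assumes the rule-derived matrix reproduces Table 1 exactly, and all helper names are my own.

```python
from itertools import permutations

# Pitch supply in scale order; list indices serve as scale degrees.
SUPPLY = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def build_matrix():
    """Derive Table 1 from the stated rules (urn weights per source pair)."""
    matrix = {}
    n = len(SUPPLY)
    for a, b in permutations(range(n), 2):
        if abs(b - a) == 1:  # stepwise: five targets, preferred ones weighted 3
            d = b - a
            preferred = {b + d, b - 2 * d}
            row = {SUPPLY[c]: (3 if c in preferred else 1)
                   for c in range(n) if c not in (a, b)}
        else:                # leap: forced stepwise resolution
            row = {SUPPLY[b - 1 if b > a else b + 1]: 1}
        matrix[(SUPPLY[a], SUPPLY[b])] = row
    return matrix

def steady_state(matrix, iterations=1000):
    """Power iteration over pair-states, then marginalize to single pitches."""
    dist = {pair: 1.0 / len(matrix) for pair in matrix}
    for _ in range(iterations):
        nxt = dict.fromkeys(matrix, 0.0)
        for (a, b), mass in dist.items():
            row = matrix[(a, b)]
            total = sum(row.values())
            for c, weight in row.items():
                nxt[(b, c)] += mass * weight / total
        dist = nxt
    marginal = dict.fromkeys(SUPPLY, 0.0)
    for (_, b), mass in dist.items():
        marginal[b] += mass  # probability that the current pitch is b
    return marginal
```

Run this way, the marginal distribution should come out close to the values in Table 2, with the symmetric tapering toward E4 and D5 clearly visible.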

To generate a chain of pitches using Table 1, one would do the following:

- Initialize by choosing one of the pitch-pairs listed in the leftmost two columns of the table. For purposes of demonstration, let's choose two central pitches in the supply: G4 for note #1 and A4 for note #2.
- To select a pitch for note #3, populate an urn according to the probabilities listed in the G4-A4 row: one ball marked E4, three balls marked F4, three balls marked B4, one ball marked C5, and one ball marked D5. Mix the balls blindly, draw one out, and assign the pitch name on the ball to note #3. For purposes of demonstration, assume that the B4 ball was drawn.
- To select a pitch for note #4, populate the urn according to the probabilities listed in the A4-B4 row of Table 1. Mix the balls blindly, draw one out, and assign the pitch name on the ball to note #4. For purposes of demonstration, assume that the G4 ball was drawn.
- To select a pitch for note #5, consult the probabilities listed in the B4-G4 row of Table 1. The only probability populated in this row is the A4 choice. You could go through the motions of emptying out the urn, populating it with one ball marked A4, and drawing that ball back out again, but why bother? The choice for note #5 is determinate.
- For each remaining note, locate the row in Table 1 corresponding to the two previous pitches, then use the probabilities listed in the row to select a new pitch.
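The urn procedure above maps directly onto weighted random choice. The sketch below is my reconstruction, not the author's program: it rebuilds the matrix from the stated rules (all names are mine) and draws each new pitch with `random.choices`, conditioned on the two most recent pitches.

```python
import random
from itertools import permutations

# Pitch supply in scale order; list indices serve as scale degrees.
SUPPLY = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def build_matrix():
    """Derive Table 1 from the stated rules (urn weights per source pair)."""
    matrix = {}
    n = len(SUPPLY)
    for a, b in permutations(range(n), 2):
        if abs(b - a) == 1:  # stepwise: five targets, preferred ones weighted 3
            d = b - a
            preferred = {b + d, b - 2 * d}
            row = {SUPPLY[c]: (3 if c in preferred else 1)
                   for c in range(n) if c not in (a, b)}
        else:                # leap: forced stepwise resolution
            row = {SUPPLY[b - 1 if b > a else b + 1]: 1}
        matrix[(SUPPLY[a], SUPPLY[b])] = row
    return matrix

def generate(length, seed=None, first="G4", second="A4"):
    """Draw a pitch chain: each new pitch is a weighted 'urn' draw
    conditioned on the two most recent pitches."""
    rng = random.Random(seed)
    matrix = build_matrix()
    chain = [first, second]
    while len(chain) < length:
        row = matrix[(chain[-2], chain[-1])]
        pitches, weights = zip(*row.items())
        chain.append(rng.choices(pitches, weights=weights)[0])
    return chain
```

Printing `" ".join(generate(48, seed=0))` yields a sequence comparable to those in Table 3, though the exact pitches depend on the seed and on Python's random generator, so it will not reproduce the author's sequences.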

I have coded a process which selects pitches according to the transition probabilities enumerated in Table 1 and which forces the initial choices of G4 for note #1 and A4 for note #2. Table 3 presents four sequences generated by this process using different random seeds. Each sequence has an accompanying statistical analysis. This analysis provides no insight other than establishing that the steady-state probabilities listed in Table 2 need many more than 48 samples to come into force.

Position | Length | Content | Summary | E4 | F4 | G4 | A4 | B4 | C5 | D5 |
---|---|---|---|---|---|---|---|---|---|---|
0 | 48 | G4 A4 B4 G4 A4 F4 G4 E4 F4 D5 C5 B4 A4 G4 F4 A4 G4 B4 A4 F4 G4 E4 F4 C5 B4 A4 C5 B4 A4 G4 B4 A4 C5 B4 E4 F4 G4 A4 D5 C5 G4 A4 F4 G4 D5 C5 B4 A4 | Counts | 3 | 7 | 10 | 11 | 8 | 6 | 3 |
| | | | Proportions | 0.0625 | 0.146 | 0.208 | 0.229 | 0.167 | 0.125 | 0.0625 |
0 | 48 | G4 A4 B4 D5 C5 B4 D5 C5 F4 G4 B4 A4 E4 F4 C5 B4 A4 G4 D5 C5 A4 B4 C5 D5 F4 G4 D5 C5 B4 A4 G4 C5 B4 D5 C5 E4 F4 D5 C5 A4 B4 F4 G4 E4 F4 A4 G4 F4 | Counts | 3 | 7 | 7 | 7 | 8 | 9 | 7 |
| | | | Proportions | 0.0625 | 0.146 | 0.146 | 0.146 | 0.167 | 0.188 | 0.146 |
0 | 48 | G4 A4 B4 E4 F4 C5 B4 A4 C5 B4 D5 C5 F4 G4 D5 C5 B4 F4 G4 E4 F4 G4 E4 F4 A4 G4 D5 C5 B4 G4 A4 D5 C5 F4 G4 E4 F4 A4 G4 B4 A4 G4 B4 A4 G4 F4 C5 B4 | Counts | 4 | 8 | 10 | 7 | 8 | 7 | 4 |
| | | | Proportions | 0.0833 | 0.167 | 0.208 | 0.146 | 0.167 | 0.146 | 0.0833 |
0 | 48 | G4 A4 B4 D5 C5 B4 E4 F4 B4 A4 F4 G4 B4 A4 C5 B4 D5 C5 G4 A4 F4 G4 C5 B4 D5 C5 A4 B4 G4 A4 D5 C5 A4 B4 D5 C5 G4 A4 B4 D5 C5 G4 A4 F4 G4 A4 B4 E4 | Counts | 2 | 4 | 8 | 10 | 10 | 8 | 6 |
| | | | Proportions | 0.0417 | 0.0833 | 0.167 | 0.208 | 0.208 | 0.167 | 0.125 |

The first two choices are forced to G4 and A4 in each sequence.

The practical programming content of Chapter 6 was provided by Demonstration 4: Markov Chains, which used Markov matrices to select average durations, articulations, registers, and chromatic degrees.

© Charles Ames | Page created: 2017-03-12 | Last updated: 2017-03-12 |