Boethius's Consolation of Philosophy, written in 524, suggests two ways of comprehending the world: Mankind lives from moment to moment, while God is timeless. Out of this theological germ have evolved dual perspectives on reality, which physicists — particularly Joseph Fourier — have shown to be translatable, either perspective into the other.
| Domain | (Xenakis) | (Wuorinen) | Sound | Statistics |
| --- | --- | --- | --- | --- |
| Time | Inside-Time | Order | Waveform | Population |
| Frequency | Outside-Time | Content | Spectrum | Distribution |
The dichotomy between time and frequency has even made its way into music theory. For example, Iannis Xenakis in Formalized Music distinguishes aspects of a musical form which exist "outside time" from those which unfold "inside time". Xenakis's notion of outside-time structure corresponds to the notion of precompositional decision-making associated with composers such as Milton Babbitt and George Perle. Charles Wuorinen's instruction manual on serialism, Simple Composition, draws a parallel distinction between content and order. However, Wuorinen considers these two ideas to be opposed rather than complementary; he associates “content” with tonality and “order” with serialism.
The first thing to understand about statistical distributions is that while many distributions are motivated by probability theory, probability does not figure in the fundamental nature of the thing. Rather, a distribution describes characteristics of a population. No assumptions are made about the genesis of that population, which may be random, chaotic, or entirely deterministic. A population is simply a collection of samples, each of which has a characteristic value.
There are two kinds of distribution: discrete distributions, which assign a weight to each value in an enumerated set, and continuous distributions, which describe a density contour over a continuous range of values.
A statistical transform is a way of transforming a source population that is nominally uniform in the range from zero to unity into a target population that adheres to a different, prescribed distribution. Figure 3 illustrates the process with a transform that concentrates values near the upper end of the range and disperses values near the lower end of the range.
Every distribution has an associated mapping curve that will implement the appropriate transform. This mapping curve is formally the inverse of the cumulative distribution function, which is derived by integrating the density curve; its values run from zero (impossibility) to unity (full certainty). Simple, right? No. In fact, mathematicians have derived closed-form cumulative distribution functions for only a few distributions, and the familiar Bell Curve is not one of them.
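When a cumulative distribution function does have a closed form, its inverse supplies the mapping curve directly. A minimal sketch in Python, using the negative-exponential distribution (density e⁻ˣ, cumulative distribution 1 − e⁻ˣ) as the example; the function name is hypothetical:

```python
import math

def exponential_transform(u, rate=1.0):
    # Invert the cumulative distribution F(x) = 1 - exp(-rate*x):
    # solving u = F(x) for x maps a uniform driver value in [0, 1)
    # to the range from zero to infinity, with densities tapering
    # toward zero as values increase.
    return -math.log(1.0 - u) / rate
```

Feeding uniformly distributed driver values through this function yields a population that tends, over the long term, toward the negative-exponential distribution.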
The good news is that mapping curves can be estimated using numerical methods. In older times this meant a very tedious process of looking values up in tables at the back of a statistics book, but with the advent of computers all of these calculations can be handed off to software. There are two cases to deal with, the discrete and the continuous. Both cases require an input in the driver range — that is, in the range from zero to unity. Both cases also assume that the incoming values will be uniformly distributed. However, this assumption is not enforced, since transforms can serve a valuable coupling role even when inputs are nonuniform. A later section of this page describes a process which can be used to “level” non-uniform driver sequences.
Figure 4 illustrates the mapping curve for a discrete statistical transform. Discrete transforms are easy enough to implement. The method first calculates a chooser value by scaling the driver to the sum of the weights, then iterates through the distribution values, comparing the chooser against each value's weight. When the chooser falls short of the weight, selection is complete. Otherwise the chooser is reduced by the current weight and the next iteration begins.
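The discrete procedure just described can be sketched as follows (the function name is hypothetical, not the actual toolset code):

```python
def discrete_transform(driver_value, values, weights):
    # Scale the driver (0..1) to the sum of weights to obtain a chooser.
    chooser = driver_value * sum(weights)
    for value, weight in zip(values, weights):
        if chooser < weight:
            return value        # chooser falls short: selection complete
        chooser -= weight       # otherwise reduce chooser and continue
    return values[-1]           # guard against rounding at the top edge
```

With values ["a", "b"] weighted 2-to-1, driver values below two-thirds select "a" and the remainder select "b".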
Figure 5 illustrates the mapping curve for a continuous statistical transform. Continuous transforms can be implemented by approximating the density contour as a sequence of trapezoidal regions. The method selects a region using a chooser, much as the discrete transform chooses a value. The cumulative distribution function for a trapezoidal distribution is derivable using freshman-level calculus, and its inverse can then be applied to the chooser residue.
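A minimal sketch of such a trapezoidal transform (hypothetical names; densities are supplied at the region breakpoints, and the quadratic inversion within each region is the freshman-calculus step mentioned above):

```python
import math

def trapezoid_transform(u, xs, ds):
    # xs: breakpoints; ds[i]: density at xs[i] (piecewise-linear contour).
    # Area of each trapezoidal region under the density contour.
    areas = [(xs[i + 1] - xs[i]) * (ds[i] + ds[i + 1]) / 2.0
             for i in range(len(xs) - 1)]
    chooser = u * sum(areas)
    for i, area in enumerate(areas):
        if chooser <= area or i == len(areas) - 1:
            t = min(chooser / area, 1.0) if area > 0 else 0.0  # residue 0..1
            d0, d1 = ds[i], ds[i + 1]
            if abs(d1 - d0) < 1e-12:
                s = t  # density is flat: uniform within this region
            else:
                # Invert the region's cumulative distribution:
                # solve d0*s + (d1 - d0)*s*s/2 = t*(d0 + d1)/2 for s.
                disc = d0 * d0 + t * (d1 * d1 - d0 * d0)
                s = (math.sqrt(disc) - d0) / (d1 - d0)
            return xs[i] + s * (xs[i + 1] - xs[i])
        chooser -= area
```

With a flat density the transform reduces to the identity; with a rising triangular density, values crowd toward the upper breakpoint, as in Figure 3.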
Statistical transforms provide the passive component of a coupled pair. The active component of this pair is the driver, which generates sequences of numbers in the continuous range from zero to unity. The most familiar example of a driver is the standard random number generator. Among other things, the random number generator produces populations which, over the long term, tend towards uniformity. By this I mean that if you divide the range into equal-sized regions, the counts of values falling into these regions will, over the long term, come into agreement.
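Long-term uniformity can be checked directly by dividing the range into equal-sized regions and tallying; a sketch (hypothetical helper, shown with Python's standard random generator):

```python
import random

def region_counts(samples, num_regions):
    # Tally how many driver values fall into each equal-sized region.
    counts = [0] * num_regions
    for s in samples:
        counts[min(int(s * num_regions), num_regions - 1)] += 1
    return counts

rng = random.Random(42)
counts = region_counts([rng.random() for _ in range(100000)], 10)
# Over the long term, each of the ten counts tends toward 10000.
```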
I have stated previously that an output population from a statistical transform will conform to the transform's distribution only if the input population is uniform. The contract between drivers and transforms stipulates that drivers must produce values in the range from zero to unity. However, uniformity is not necessarily a desirable trait of a driver. Indeed, uniformity cannot even be expected in small populations of random numbers! It is unlikely, but not impossible, for a coin tosser to obtain ten heads in a row. Many other drivers actively strive for nonuniformity. Brownian motion and 1/f noise each tend to linger in limited regions, while chaotic generators such as the Logistic driver have “attractor” values around which samples congregate. Two drivers which buck the trend toward nonuniformity are Spread, which generates N values equally spaced between zero and unity, and Balance, which is discussed below.
On the flip side, the purpose of a statistical transform is not necessarily to achieve a specified distribution precisely. For example, suppose you use a transform to map the output of a driver component to a range of keys on a piano. You may well not wish the results to emphasize each key equally.
Figure 6 illustrates how levelling flattens out the input from a non-uniform driver. This component analyzes a population of samples by dividing the driver range into equal-sized regions and counting how many sample values fall within each region. A mapping function is then derived, which retains the original sequence of ups and downs while expanding dense regions and contracting sparse regions.
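A leveling map along these lines might be sketched as follows (a hypothetical helper, not the actual Level component; the map is piecewise linear, with each region's output width proportional to its sample count):

```python
def derive_leveling_map(samples, num_regions=16):
    # Count how many sample values fall within each equal-sized region.
    counts = [0] * num_regions
    for s in samples:
        counts[min(int(s * num_regions), num_regions - 1)] += 1
    # Dense regions expand and sparse regions contract: each region's
    # output width is proportional to its share of the population.
    total = float(len(samples))
    bounds = [0.0]
    for c in counts:
        bounds.append(bounds[-1] + c / total)

    def level(x):
        i = min(int(x * num_regions), num_regions - 1)
        frac = x * num_regions - i  # position within the region
        return bounds[i] + frac * (bounds[i + 1] - bounds[i])

    return level
```

Because the map is monotonically non-decreasing, the original sequence of ups and downs is retained while the population flattens toward uniformity.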
The chaotic drivers Logistic and Baker produce quasi-motivic sequences with “attractors” around which values tend to congregate. Each of these two generators has a control value which allows a user to adjust how erratic the generator's behavior will be. A phenomenon called “bifurcation” introduces more and more attractors as the control value increases, but once an attractor appears, it persists at the same location.
Figure 7 illustrates how fracturing derives new chaotic shapes by shuffling attractors around. The process divides the driver range into equal-sized regions and shuffles these regions into random order. Thus, for example, the values appearing in region #8 (counting from the bottom, region #8 being the topmost populated region) of the original sequence have moved to region #3 of the fractured sequence.
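A fracturing map might be sketched like this (hypothetical names, not the actual Fracture component; positions within a region are preserved while the regions themselves are permuted):

```python
import random

def make_fracture(num_regions, rng=random):
    # Shuffle the equal-sized regions of the driver range into random order.
    order = list(range(num_regions))
    rng.shuffle(order)

    def fracture(x):
        i = min(int(x * num_regions), num_regions - 1)
        frac = x * num_regions - i  # position within the region
        return (order[i] + frac) / num_regions

    return fracture
```

Values that congregated around an attractor in one region now congregate around the corresponding location in the relocated region, producing a new but equally recognizable shape.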
I devised the balanced-bit driver for situations in which one actively seeks to produce an indicated distribution. This component, implemented as Balance, uses statistical feedback to ensure that the most significant bit alternates evenly between zeros and ones. A statistic is maintained for the relative usage of zeros versus ones, and the lesser-used value is favored. The same process applies recursively to bit 2, bit 3, and so forth up to a prescribed depth. Balanced-bit populations are depicted in Figure 1, Figure 2, and Figure 3.
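The balanced-bit idea might be sketched as follows. This is a hypothetical reconstruction, not the actual Balance component: I assume here that the lesser-used bit value is chosen deterministically and that ties are broken at random.

```python
import random

class BalanceDriver:
    def __init__(self, depth=8, rng=random):
        self.rng = rng
        # counts[k] tallies [zeros, ones] used so far at bit position k.
        self.counts = [[0, 0] for _ in range(depth)]

    def next(self):
        value, scale = 0.0, 0.5
        for tally in self.counts:
            zeros, ones = tally
            if zeros < ones:
                bit = 0                       # favor the lesser-used value
            elif ones < zeros:
                bit = 1
            else:
                bit = self.rng.randrange(2)   # tie: choose at random
            tally[bit] += 1
            value += bit * scale
            scale /= 2.0
        return value
```

Because each bit position balances its zeros against its ones, the most significant bit splits the population evenly above and below one-half, the next bit balances each half, and so on down to the prescribed depth.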
Before the balanced-bit driver came about, the most reliable way of producing a distribution in the short term was the fill-and-shuffle method: fill an array with values, replicating each value in proportion to its weight; shuffle the array into random order; then consume the shuffled values one by one.
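Fill-and-shuffle can be sketched in a few lines (hypothetical helper, not the author's code):

```python
import random

def fill_and_shuffle(values, weights, size, rng=random):
    # Fill: replicate each value in proportion to its weight.
    total = sum(weights)
    pool = []
    for value, weight in zip(values, weights):
        pool.extend([value] * round(size * weight / total))
    # Shuffle: randomize the order of the pool.
    rng.shuffle(pool)
    return pool
```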
The problem with fill-and-shuffle is that values are not homogeneously mixed through the sequence. Consider for example a discrete distribution with 67% zeros and 33% ones, and suppose you want to generate a population of nine samples. What you want is probably a sequence in which the ones spread evenly among the zeros, such as 0 0 1 0 0 1 0 0 1,
and the balanced-bit driver will give you something like that. The fill-and-shuffle method may give you something similar, but it just as well may give you a sequence in which the ones clump together, such as 0 0 0 0 1 1 1 0 0.
Now consider increasing the number of values and allowing the percentages to change over time. To realize an evolving distribution using the fill-and-shuffle method requires several successive population-frames. Each frame must be large enough to accommodate the least-weighted value, so you have to take the evolution slowly. With the balanced-bit driver, the over-usage or under-usage of a particular value bleeds forward into the selection process. There's no need to divide the sequence into frames, and the evolution can progress much more quickly.
The approach described on this page evolved during computer music courses which I organized at the State University of New York at Buffalo beginning with the academic year 1978-1979. During this first year the topic of digital sound synthesis was handled by me, while the topic of computer composition was handled by John Myhill (with guest lectures by Lejaren Hiller). John Myhill was a great fan of Iannis Xenakis. Several of Myhill's lectures focused on Xenakis's Stochastic Music Program, on existing statistical methods which Xenakis had made use of, and on Myhill's own simplifications and improvements to Xenakis's program. In later years the sound-synthesis component of the course dropped away (we no longer had a functioning DAC) and I took over the computer-composition topic. For the subtopic of statistical generation, my approach had evolved from Myhill's. It was not my goal to teach students how to recreate Xenakis's program but rather to give them a generalized toolset that would allow them to adapt Xenakis's approach to their own needs.
In those days the primary source on random number generation was Chapter 3, “Random Numbers”, in Donald Knuth's Seminumerical Algorithms. Knuth remains excellent for his treatment of Derrick Henry Lehmer's “linear congruence” algorithm, the method universally used today. Linear congruence produces number sequences which satisfy all characteristics of unpredictability, but only for the uniform case. The only two non-uniform cases dealt with by Knuth were the "polar" method of generating bell-curve randomness and a method for generating negative-exponential values. The "polar" method is a dead end because it inextricably intermingles randomness with distribution. The method for generating negative-exponential samples (the same method used by Xenakis) was more promising because it separated an active step from a passive step. The active step used linear congruence to generate random decimal numbers uniformly in the range from zero to unity. The passive step applied a mathematical function which mapped the linear congruence values into the range from zero to infinity, with densities trending to zero as values increased.
When I set about building a toolset, it was clear that students would gain the most flexibility if, as a matter of policy, the (active) sequence-generating components were kept separate from the (passive) distribution-mapping components. The number of options would then multiply out to the number of sequence-generator types times the number of distributions. To ensure compatibility between these two kinds of component I imposed two requirements on the toolset: drivers must produce values in the continuous range from zero to unity, and transforms must accept any value in that range as input.
Thus arose the paradigm of a system coupling an active “driver” with a passive “transform”. My usage of “driver” came from acoustics, where a “driver” indicates an active component (e.g. the vocal cords) that would be coupled with a passive resonating system (e.g. the cavities of the mouth and nose). The range from zero to unity became known as the “driver range”.
Once the paradigm had been established, it quickly begged a number of developments to round it out. The most urgent need on the transform side was for numerical methods which could estimate the mapping curve for any desired distribution. Numerical methods for discrete statistical transforms had already been worked out by Xenakis. Someone must have worked out numerical methods for the continuous distributions — Where did all those statistical tables come from? — but who did this, and how they did it, was not reported by Knuth. That left me to reinvent the method of trapezoidal approximation described earlier, an effort subsequently reported in my Catalog of Statistical Distributions.
The most urgent need on the driver side was to gather together a rich selection of processes. This activity is documented in my Catalog of Sequence Generators.
Since the chaotic drivers Logistic and Baker generate very recognizable motivic shapes, I devised the Fracture component to enrich their possibilities. Fracture divides the driver range into equal-sized regions; it then uses a randomly shuffled permutation to relocate values from one region to another. Fracturing makes sense when values concentrate around strong attractors, but it makes less sense for other driver categories. Fracturing was originally described in my Catalog of Statistical Distributions.
One more innovation was the Level component, which I first described in How to Level a Driver Sequence.
© Charles Ames | Page created: 2013-05-24 | Last updated: 2015-04-24 |