Exporting to External Formats

Exporting Scores to External Formats

The purpose of my score framework is to represent the content of a musical composition in a manner that is readily exportable into the three external output formats supported by the framework: MUSIC-N note lists, MIDI sequences, and MusicXML files. Score detail is structured into a time-ordered list of events because this is how the framework needs to traverse the data as it goes about translating the score into external formats. The essential differences between the export formats are (1) how the data is structured, and (2) how time is represented.

MUSIC-N

MUSIC-N note lists are the least structured of the three output formats in that the files have no structures intermediate between the score as a whole and the individual note. This means that the event list can be traversed in a single pass. Each untied note, and each sequence of tied notes, converts to a single parametric MUSIC-N note statement. If the sound-synthesis engine supports ramp statements, then contour segments from the score framework convert very readily into ramps; otherwise, parameter values are dereferenced using the access path illustrated in Figure 2.

The starting times and durations employed in note statements are absolute times expressed in seconds. The converter keeps track of two variables: the current relative time in whole notes and the current absolute time. As it proceeds from one event to the next, the converter determines how much relative time has elapsed between events. The tempo mapping is then used to convert this time interval from whole notes into seconds, using the calculations described here. The current absolute time advances by this absolute time interval, and event conversion proceeds. The same tempo-mapping calculations convert both note sounding durations and contour-segment timespans.
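
The following is a minimal sketch of this bookkeeping in Java. The class and method names (ScoreEvent, TempoMap.wholeNotesToSeconds, formatNoteStatement, and so forth) are hypothetical stand-ins, not the framework's actual API.

    import java.io.PrintWriter;
    import java.util.List;

    // A sketch of the single-pass MUSIC-N conversion loop described above.
    void writeNoteList(List<ScoreEvent> events, TempoMap tempoMap, PrintWriter out) {
        double currentWholeNotes = 0.0;  // current relative time, in whole notes
        double currentSeconds = 0.0;     // current absolute time, in seconds
        for (ScoreEvent event : events) {
            double elapsedWholeNotes = event.getStart() - currentWholeNotes;
            // The tempo mapping converts the elapsed interval from whole notes to seconds.
            currentSeconds += tempoMap.wholeNotesToSeconds(currentWholeNotes, elapsedWholeNotes);
            currentWholeNotes = event.getStart();
            // Sounding durations are converted with the same tempo-mapping calculation.
            double durationSeconds = tempoMap.wholeNotesToSeconds(currentWholeNotes, event.getDuration());
            out.println(formatNoteStatement(event, currentSeconds, durationSeconds));
        }
    }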

MIDI

MIDI sequences have a single-track option (file type #0), but this is less useful than the multi-track option (file type #1). To create type #1 MIDI sequences, my framework needs to pass through its own event sequence several times. The first pass produces a timing track; during subsequent passes the voice referenced by each framework event is consulted, and if the voice does not identify the current MIDI track name, then the event is ignored.

The MIDI file format wraps MIDI messages (defined in an earlier standard) within a chain of events separated by counts of short durations called ticks. A tick is a fraction of a quarter note — this means that MIDI times, like times in my score framework, are relative. You yourself select the number of MIDI ticks per quarter note when you configure “ticksPerUnit” in your ensemble's MIDI mapping (see Listing 8). My score framework does not create MIDI sequences from whole cloth, but rather leverages the Java Sound API to do so, and when you create a MIDI event using the Java Sound API you assign the event its tick location relative to the beginning of the sequence — not relative to the previous event. Thus the timing of each MIDI event can be determined simply by multiplying the score-framework event's start time (a ratio of whole notes) by the number of ticks per quarter note.
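
For illustration, here is how that timing calculation might look with the Java Sound API. The variable names are mine, and the factor of 4 (quarter notes per whole note) is my assumption about how the framework scales its whole-note times; it would be unnecessary if ticksPerUnit were defined per whole note. Exception handling is omitted.

    import javax.sound.midi.MidiEvent;
    import javax.sound.midi.Sequence;

    // Create a sequence whose resolution is 'ticksPerQuarter' ticks per quarter note.
    Sequence sequence = new Sequence(Sequence.PPQ, ticksPerQuarter);

    // Convert a framework start time (a ratio of whole notes) into an absolute tick
    // position within the sequence. The factor of 4 converts whole notes to quarter
    // notes; this scaling is my assumption.
    long tick = Math.round(startWholeNotes * 4 * ticksPerQuarter);
    MidiEvent event = new MidiEvent(message, tick);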

Each untied note, and each sequence of tied notes, converts to a pair of MIDI-Note-On and MIDI-Note-Off events. Note-On/Note-Off velocities are drawn from the Velocity contour, if one is defined.
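
A sketch of that conversion, again with hypothetical names; the contour lookup and the fallback velocity are my own illustration:

    import javax.sound.midi.MidiEvent;
    import javax.sound.midi.ShortMessage;
    import javax.sound.midi.Track;

    // Convert one untied note (or one sequence of tied notes) into a Note-On/Note-Off pair.
    // 'track', 'channel', 'key', 'startTick', 'endTick', and 'velocityContour' are
    // illustrative names, not the framework's actual API.
    int velocity = (velocityContour != null)
            ? velocityContour.valueAt(startTick)  // drawn from the Velocity contour, if defined
            : 64;                                 // fallback value; my assumption
    track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, channel, key, velocity), startTick));
    track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_OFF, channel, key, 0), endTick));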

Tempo indications are resolved into MIDI-Set-Tempo meta events; new meta events are created whenever the tempo difference encompasses a full integer. Likewise, when custom contours are mapped to MIDI controls, then MIDI-control-change events are created whenever the difference between successive control values encompasses a full integer.
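
For reference, a MIDI Set Tempo meta event (type 0x51) carries the tempo as three bytes giving microseconds per quarter note. A sketch of its construction with the Java Sound API (variable names are mine; the timing track is the one mentioned above):

    import javax.sound.midi.MetaMessage;
    import javax.sound.midi.MidiEvent;

    // Build a Set Tempo meta event for a tempo given in quarter notes per minute.
    int microsecondsPerQuarter = Math.round(60_000_000f / quarterNotesPerMinute);
    byte[] data = {
        (byte) ((microsecondsPerQuarter >> 16) & 0xFF),
        (byte) ((microsecondsPerQuarter >> 8) & 0xFF),
        (byte) (microsecondsPerQuarter & 0xFF)
    };
    timingTrack.add(new MidiEvent(new MetaMessage(0x51, data, 3), tick));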

Velocities    Dynamic
0-10          pppp
11-23         ppp
24-36         pp
37-49         p
50-62         mp
63-75         mf
76-88         f
89-101        ff
102-114       fff
115-127       ffff

Table 3: MIDI velocity ranges mapped to MusicXML dynamics.
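
Expressed as a lookup function (a sketch; the method name is mine):

    // Map a MIDI velocity (0-127) to the MusicXML dynamic marking given in Table 3.
    static String dynamicForVelocity(int velocity) {
        if (velocity <= 10)  return "pppp";
        if (velocity <= 23)  return "ppp";
        if (velocity <= 36)  return "pp";
        if (velocity <= 49)  return "p";
        if (velocity <= 62)  return "mp";
        if (velocity <= 75)  return "mf";
        if (velocity <= 88)  return "f";
        if (velocity <= 101) return "ff";
        if (velocity <= 114) return "fff";
        return "ffff";
    }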

MusicXML

The MusicXML standard defines both “part-wise” and “time-wise” formatting options, but in my experience Finale only exports using the “part-wise” option, so this is the option I support as well. To create MusicXML files, the framework passes through the event sequence once for each framework voice, ignoring events that belong to other framework voices.

I have stated previously that if a score is destined for MusicXML export, then its voices are additionally required to be monorhythmic. However, MusicXML vertically organizes activity by part, a part may contain multiple staves, and a part may also contain multiple voices. Thus two or more ‘voices’ from my score framework may reference the same MusicXML part, so long as each of the voices has a different MusicXML voice ID. Understand that the MusicXML voice ID is a different attribute altogether from the framework voice ID. A part with multiple voices may therefore be coded in MusicXML by laying down MusicXML voice #1, backing up to the start of the measure, laying down MusicXML voice #2, and so forth.

Frequently when one MusicXML part contains two voices, each voice will be presented on a separate staff. This is the case, for example, with the grand staff used for keyboard music, where the right-hand voice is presented on the upper staff (often using treble clef) while the left-hand voice is presented on the lower staff (often using bass clef). This scenario is accommodated by including the default MusicXML staff ID as an additional attribute of framework voices. The same solution accommodates the scenario of presenting two rhythmically independent voices on the same staff line, so long as pitches do not cross. When one creates a note, my score-framework API sets the note's staff ID to the voice's default. However, one can accommodate a third scenario by overriding this default, thus indicating, for example, that the left hand should cross over to the treble staff.

MusicXML “part-wise” files divide musical movements into parts and parts into measures. Each measure encloses a chain of XML elements such as note and rest elements (which have positive duration), direction elements (which have no duration), and backup elements (which have negative duration). Durations are expressed in “divisions”, where the number of divisions per quarter note can vary from measure to measure. Event timing derives entirely from context. The first event listed in a measure happens 0 divisions into the measure (i.e. at the beginning). If that first event is a note or rest lasting 4 divisions, then the second event listed in the measure will happen 4 divisions into the measure. If the second event lasts 2 divisions then the third event will happen 6 divisions into the measure. So things go until the entire measure is filled out. At that point either the measure completes or a new voice is laid down. To lay down a new voice you start with a backup element to push the timekeeping value back to the start of the measure, then proceed as before but with a new voice ID.
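
By way of illustration, here is a skeletal MusicXML fragment for one 2/4 measure containing two voices on two staves. The pitches, the division count of 4 per quarter note, and the omission of elements such as type and clef are choices made for the example, not output of my framework.

    <measure number="1">
      <attributes>
        <divisions>4</divisions>   <!-- 4 divisions per quarter note -->
        <time><beats>2</beats><beat-type>4</beat-type></time>
        <staves>2</staves>
      </attributes>
      <!-- MusicXML voice 1 on staff 1: a half note filling the 2/4 measure -->
      <note>
        <pitch><step>C</step><octave>5</octave></pitch>
        <duration>8</duration>
        <voice>1</voice>
        <staff>1</staff>
      </note>
      <!-- Back up 8 divisions to the start of the measure, then lay down voice 2 -->
      <backup><duration>8</duration></backup>
      <note>
        <pitch><step>E</step><octave>3</octave></pitch>
        <duration>8</duration>
        <voice>2</voice>
        <staff>2</staff>
      </note>
    </measure>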

The division count of a note, rest, or backup element is calculated by multiplying the framework duration (a ratio of whole notes) by the number of divisions per quarter note. Remember that the framework represents rests as notes whose onset pitches have null degrees. Understand also that it is a framework note's period attribute that is used to calculate the number of divisions in MusicXML notes and rests. The framework note's duration attribute is ignored for MusicXML conversion. To obtain detached articulation you must either separate notes with explicit rests or apply a staccato indication.
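
A sketch of that calculation (the method name is mine; the factor of 4, which converts whole notes to quarter notes, is my assumption about the scaling):

    // Convert a framework period (a ratio of whole notes) into a MusicXML division count.
    // 'divisionsPerQuarter' matches the <divisions> value declared for the measure.
    static int divisionCount(double periodInWholeNotes, int divisionsPerQuarter) {
        return (int) Math.round(periodInWholeNotes * 4 * divisionsPerQuarter);
    }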

MusicXML itself operates under the assumption that its content may be destined either for graphic rendering or for MIDI performance. This gives rise to the following design philosophy, quoted from www.musicxml.com:

MusicXML music data contains two main types of elements. One set of elements is used primarily to represent how a piece of music should sound. These are the elements that are used when creating a MIDI file from MusicXML. The other set is used primarily to represent how a piece of music should look. These elements are used when creating a Finale file from MusicXML.

This design philosophy proved most frustrating to me when I tried to encode things like accelerandi and crescendi into the MusicXML format. The philosophy required my conversion routine to present text directions setting the origin tempo, the time-extent of the accelerando (or crescendo), and the goal tempo. It was then required to present separate MIDI directions indicating discrete tempi at various moments through the measure; I never bothered with that. Likewise for dynamics. For continuous MIDI controls like pitch bend, forget it! There is nothing equivalent to a contour in MusicXML Version 1.

Of the two formats, MIDI operates at (or below) the level of performance gestures while MusicXML operates note-by-note. Accelerations and ritards, ramped dynamics, pitch bend, and other continuous controls are musical features that MIDI handles well. MusicXML handles these same features clumsily or not at all. Such limitations make it difficult to consider MusicXML a viable intermediary for MIDI, at least for the foreseeable future.

Next topic: Contour Calculations

© Charles Ames Page created: 2013-10-16 Last updated: 2015-08-25