A recent exchange with fellow composer John Mackey on Facebook about saxophone samples (he uses the Vienna Symphonic Library series, and I’m currently using the built-in General MIDI samples, which possess the subtlety of a chainsaw) reminded me that I’ve long wanted to do a post comparing MIDI and live versions of a few of my pieces.
I would venture to say that most composers working today (at least those under the age of, say, 40-45) use a computer and MIDI playback at some stage in the creative process. Some, such as John, do most of their composing directly in a notation program such as Finale or Sibelius, while others, such as Eric Whitacre and Jonathan Newman, tend to write with old-fashioned pencil and paper and then enter the music into the notation program, or go back and forth between the two. (Aside: around 10 years ago, Eric had a sketch of the opening two minutes or so of Equus sequenced, but was considering throwing it out. I happened to be visiting him at the time, and threatened to steal it if he insisted on trashing it.) My own approach shifts from piece to piece: I sometimes use pencil, paper, and a piano (Alchemy in Silent Spaces, A Million Suns at Midnight), but nearly all of my composing now happens in MOTU’s Digital Performer (Ecstatic Waters, Axis Mundi, Radiant Joy, Bloom, Dusk, The Marbled Midnight Mile) or directly in Finale (Chester Leaps In, ImPercynations, Suite Dreams, Stampede, First Light, and Mvt. I of my in-progress Concerto for Wind Ensemble*). Before beginning work, however, I do quite a bit of brainstorming, and often write a good deal of prose describing what will happen in the piece before I fire up the music software.
The great advantage of using MIDI playback during the composition process is that it gives the composer an immediate sense of the timing of a piece. In my early compositions (all written with pencil and paper, hunched over an upright piano in a practice room only slightly larger than said piano), I would discover architectural imbalances only after it was too late (i.e. after the premiere performance). For me, this is MIDI’s greatest strength – I can sit safely ensconced in my studio and tweak the timing so that it’s much, much closer to my ideal before the first reading. I still revise new works during rehearsals and after the first performance, but the changes rarely have to do with architecture, instead being of the errata and orchestrational variety. So, MIDI playback allows me to get closer to my goal, sooner.
Now for the disadvantages: In addition to being incredibly deceptive in terms of balance and orchestration, the computer never has to breathe, has (all-too-) perfect rhythm, and is lifeless. I’ve found I tend to miscalculate tempos because of this ultra-precise, effortless quality to the sound, and have to guard against cranking the tempo up too high in an attempt to compensate for the lack of human energy. Perhaps even more importantly, the samples can, if they sound awful (Saxophones!), discourage you from writing for a particular instrument, or, if they sound fantastic (Hans Zimmer film score Horns!), give you unrealistic expectations.
Given all this, I thought it might be instructive to give some side-by-side comparisons of MIDI and live performances from some of my own works. You’ll notice I don’t go to a lot of trouble to make my MIDI mockups sound good – it can become very time-consuming to enter all the volume and expression changes (this was especially true 5-10 years ago, but I still find it tedious and disruptive to the creative process). More importantly, I feel that leaving the MIDI sounding “rough” forces me to use my imagination more, and keeps me from getting suckered in by the good sounds, though that still happens sometimes anyway (I love an epic Horn patch).
1) Stampede (2003)
Finale 2003, Roland JV-1080 synth module, Mac G3 Blue&White 450MHz
SCORE (PDF, mm. 1-35, pp. 1-5)
Mm. 1-35, MIDI
Mm. 1-35, University of North Texas Wind Symphony, Eugene Corporon, conductor, Poetics
Note how long the MIDI note durations are compared to the actual band.
Bonus: Here’s what the score looked like on the first day I started work on the piece. It appears I wrote the “intro” that day, but hadn’t yet come up with the Trumpet solo melody:
Stampede score, September 2, 2003 (first day of composing)
Studio Vision sequencer, Proteus Orchestral synth module (I think), Mac Quadra 630
This sounds pretty ridiculous in retrospect, but it was my very first MIDI mockup, as far as I can recall.
3) Ecstatic Waters (2008)
Digital Performer sequencer, MachFive software sampler (instrument samples from many different sources), MacBook Pro 2.16GHz Core 2 Duo
Ecstatic Waters presented a special challenge. When I do “electronic” music (such as Hummingbrrd, Veo Hex, etc.), I tweak the sounds and the mix endlessly. This is a very different manner of working from when I make mockups of acoustic music. Ecstatic Waters required that I mix these two working methods, putting acoustic samples of questionable expressiveness next to the electronics tracks, which were exactly what the audience would hear. The good thing was that I could use subwoofers.
SCORE (PDF, mm. 166-394, pp. 21-51) – in particular, note the difference in saxophone sounds between MIDI and live versions, beginning in m. 253.
Mm. 166-394, MIDI
Mm. 166-394, University of Texas Austin Wind Ensemble, Jerry Junkin, conductor, Live at CBDNA, March 2009
Hope these examples are enlightening for some of the young composers out there. My advice: don’t get too obsessed with making the MIDI sound good, and don’t necessarily believe your ears when it does sound good. You’re not truly finished with the piece until you’ve heard real human beings bring your music to life!
*For more on the Concerto for Wind Ensemble, and how I’m using Digital Performer to compose Mvts. II – V, check out my video series.