The Synful Orchestra: Better Music Through Database Splicing

RA: Is it safe to say you've been frustrated for years with the level of expression available on electronic music products?

EL: Yes. Before I was an engineer I worked professionally in L.A. as a musician. I became involved with synthesizers in the mid to late '60s with the first big modular Moog synthesizers, but I was never much interested in electronic music because it always sounded kind of, um, cheap to me. [Laughs.] It wasn't rich emotionally; it didn't move me.

Then I became interested in computer music in the mid-'70s, but more for the algorithmic aspects of music, not the electronic nature of the sounds. Processing live sound and doing interesting things with algorithmic composition using computers has always interested me. Still, in those days, I wasn't interested in electronic sound because it didn't move me emotionally.

RA: I love your analogy of "musical note transitions being the connective tissue of musical expression." Is that what was, or perhaps still is, missing for you in electronic music performance?

EL: I was attached to violins, saxophones, and such, so for me a fusion Minimoog solo didn't have nearly the power of a John Coltrane solo, you know? But over the past 15 years I have become much more interested in whether I could make convincing imitations of those natural instrument sounds. I hope this will prove to be a springboard for me into new sounds that have that same emotional power.

Aspen Music Festival Lecture

Lindemann explains Synful's database-driven, real-time synthesis in this Aspen Music Festival lecture.

RA: Tell me about your research at the legendary IRCAM.

EL: My main work at IRCAM was designing a general-purpose computer music machine, but there were a couple of abortive attempts, there and later, to tackle the problem of musical expression. Those projects weren't so much about solving the note-transition problem as about getting more expressive sounds that you could control. So my Synful research project is largely about note transitions. Over the years I kept restarting from scratch, and I finally got something I thought was workable, and good, and that applied across a number of instrument families.

Synful Interface

The main interface screen of Synful Orchestra. Planned versions include Synful Jazz, Synful Rock, and Synful Fiction—the latter to be used in creating entirely new expressive instrument voicings.

RA: So how does Synful better translate a musician's keyboard performance?

EL: First, a musical synthesizer is something that translates gestures on a controller, or gestures you've drawn in a sequencer or MIDI editor, into sound. Built into the output synthesis section of Synful Orchestra is an additive synthesis engine expressing sound as sums of sine waves with time-varying amplitudes—in other words, harmonics.

Additive synthesis is a traditional method of synthesis. But, to me, there is no such thing as an additive synthesizer—to me, a synth is something that translates control into sound. All additive synthesis says is how you're representing the output of that sound: "I am representing this sound as a sum of sine waves." But it doesn't say anything about getting from control to sound, which lies at the heart of the synthesizer expression problem and is where all the subtle issues of expression come into play. That's why I say that an additive synthesizer is not a synthesizer: it's a sound representation, like MP3 is a sound representation.
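As a minimal sketch of the representation Lindemann describes (not Synful's actual engine, and written here as a hypothetical Python/NumPy example), each harmonic is a sine wave at an integer multiple of the fundamental, scaled by its own time-varying amplitude envelope, and the output is simply their sum:

```python
import numpy as np

def additive_tone(harmonic_envelopes, f0, sample_rate=44100):
    """Render a tone as a sum of sine waves (harmonics) whose
    amplitudes vary over time -- the additive-synthesis view of sound.

    harmonic_envelopes: list of 1-D arrays, one amplitude envelope per
    harmonic, all the same length (one value per output sample).
    f0: fundamental frequency in Hz.
    """
    n_samples = len(harmonic_envelopes[0])
    t = np.arange(n_samples) / sample_rate
    out = np.zeros(n_samples)
    for k, env in enumerate(harmonic_envelopes, start=1):
        # k-th harmonic: a sine at k * f0, shaped by its own envelope
        out += env * np.sin(2 * np.pi * k * f0 * t)
    return out

# Example: a one-second 440 Hz tone whose upper harmonics decay faster
# than the fundamental, giving a simple plucked character.
sr = 44100
n = sr
decay = lambda rate: np.exp(-rate * np.arange(n) / sr)
envelopes = [decay(2.0), 0.5 * decay(4.0), 0.25 * decay(8.0)]
tone = additive_tone(envelopes, f0=440.0, sample_rate=sr)
```

This only illustrates the output representation; the problem Lindemann is pointing to is how control gestures get mapped onto those time-varying envelopes in the first place, which the sketch above does not address.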

