
Eric Lindemann strongly doubts there were computers in the Garden of Eden. "Computer music is a post bite-of-the-apple development," he writes in his mission statement. "It represents the human compulsion to deconstruct nature, understand it, and dominate it. It is technology par excellence. Does technology bring happiness? No. As we all know, it brings aggravation. Yet we are compelled to take the next step. I believe this is the destiny of our species."

Through his company, Synful, Lindemann is working hard to shape that destiny. The name derives from "synthesis" rather than sin, but hints at his novel approach. Lindemann's goal is to help musicians play more expressively, and this inventor, composer, and former session keyboardist has developed some groundbreaking technology to do it.

Lindemann has a 30-year resume in the electronic music field. He designed signal processors for sampling pioneers Linn Electronics and Waveframe, DSP for Cirrus Logic, computer music systems for IRCAM, and speech-enhancement algorithms for digital hearing aids. He's earned 12 patents—including three for Synful—with names like "Encoding and Synthesis of Tonal Audio Signals Using Dominant Sinusoids and a Vector-Quantized Residual Tonal Signal."

[Photo: In addition to designing music hardware and software, Synful CEO Eric Lindemann has played keyboards on numerous film and television scores.]

His latest achievement, Synful Orchestra, is not a sampler or a sample library; it's not exactly an additive synthesis module and it's not a physical modeling plugin. It's a new concept in virtual instruments that got audiences buzzing when Lindemann debuted it at this year's enormous NAMM show after seven years of development.

Randy Alberts (RA): You helped create the world's first truly programmable DSP hearing aid. How did your experience in digital signal processing influence the development of Synful?

Eric Lindemann (EL): I've always worked with signal processing, but until the hearing aid project it usually had to do with designing hardware. I took that job specifically because I could do full-time development of new signal processing algorithms—original ones, as opposed to building an MP3 encoder, for instance, where you're simply implementing someone else's ideas. But it was also a chance for me to really understand hearing better. A lot of engineers in the audio industry need a better understanding of psychoacoustics, the science of which parts of the sound field we actually perceive and how our brains put it all together.

The psychoacoustics aspect of that work was important for this synthesizer project. I also use some of that knowledge in optimizing the additive synthesis engine that I use. Just having had a full-time signal-processing research job for a number of years helped me a lot with my work and designs in synthesis at Synful.


Related Reading: Digital Audio Essentials, by Bruce Fries and Marty Fries. A comprehensive guide to creating, recording, editing, and sharing music and other audio.

RA: Is it safe to say you've been frustrated for years with the level of expression available on electronic music products?

EL: Yes. Before I was an engineer I worked professionally in L.A. as a musician. I became involved with synthesizers in the mid to late '60s with the first big modular Moog synthesizers, but I was never much interested in electronic music because it always sounded kind of, um, cheap to me. [Laughs.] It wasn't rich emotionally; it didn't move me.

Then I became interested in computer music in the mid-'70s, but more for the algorithmic aspects of music, not the electronic nature of the sounds. Processing live sound and doing interesting things with algorithmic composition using computers has always interested me. Still, in those days, I wasn't interested in electronic sound because it didn't move me emotionally.

RA: I love your analogy of "musical note transitions being the connective tissue of musical expression." Is that what was, or perhaps still is, missing for you in electronic music performance?

EL: I was attached to violins, saxophones, and such, so for me a fusion Minimoog solo didn't have nearly the power of a John Coltrane solo, you know? But over the past 15 years I've become much more interested in whether I can make convincing imitations of those natural instrument sounds. I hope this will prove to be a springboard for me into new sounds that have that same emotional power.

[Aspen Music Festival lecture: Lindemann explains Synful's database-driven, real-time synthesis.]

RA: Tell me about your research at the legendary IRCAM.

EL: My main work at IRCAM was designing a general-purpose computer music machine, but there were a couple of abortive attempts, there and later, to tackle the problem of musical expression. Those projects weren't so much about solving the note-transitions problem as about getting more expressive sounds that you could control. So my Synful research project is largely about note transitions. Over the years I kept restarting from scratch, and finally got something I thought was workable and good, and that applied across a number of instrument families.

[Screenshot: The main interface screen of Synful Orchestra. Planned versions include Synful Jazz, Synful Rock, and Synful Fiction—the latter to be used in creating entirely new expressive instrument voicings.]

RA: So how does Synful better translate a musician's keyboard performance?

EL: First, a musical synthesizer is something that translates gestures on a controller, or gestures you've drawn in a sequencer or MIDI editor, into sound. Built into the output synthesis section of Synful Orchestra is an additive synthesis engine expressing sound as sums of sine waves with time-varying amplitudes—in other words, harmonics.

Additive synthesis is a traditional method of synthesis. But, to me, there is no such thing as an additive synthesizer—to me, a synth is something that translates control into sound. All additive synthesis says is how you're representing the output of that sound: "I am representing this sound as a sum of sine waves." But it doesn't say anything about getting from control to sound, which lies at the heart of the synthesizer expression problem and is where all the subtle issues of expression come into play. That's why I say that an additive synthesizer is not a synthesizer: it's a sound representation, like MP3 is a sound representation.
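
To make the representation concrete, here is a minimal additive-synthesis sketch in Python/NumPy: a note built as a sum of harmonically related sine waves, each with its own time-varying amplitude. The envelope shapes and harmonic count are invented for illustration and are not Synful's actual data.

    import numpy as np

    SR = 44100                    # sample rate in Hz
    dur = 1.0                     # note length in seconds
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)

    f0 = 261.63                   # fundamental frequency: middle C
    signal = np.zeros_like(t)
    for k in range(1, 9):         # eight harmonics
        # Each harmonic gets its own time-varying amplitude: a fast attack
        # and an exponential decay that is quicker for higher harmonics,
        # so the tone darkens as it rings.
        env = np.minimum(t / 0.05, 1.0) * np.exp(-t * (1.0 + 0.5 * k))
        signal += (1.0 / k) * env * np.sin(2 * np.pi * k * f0 * t)

    signal /= np.abs(signal).max()   # normalize to avoid clipping

As Lindemann says, though, this only fixes the output representation; everything interesting happens in how the control data decides what those envelopes should do.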

RA: Where does the definition of "sampler" fit in?

EL: In sampling you have a model that involves recordings of individual notes. There is then a simple mapping between keys on the keyboard and recordings: when you play this key you get a certain recording. This is a bit simplified, but that's the basic idea—you have very simple and predictable behavior with little of the interesting interaction from one note to the next that occurs in a real instrument. So that's where I really try with Synful's RPM [Reconstructive Phrase Modeling] engine to move out ahead of where sampling is today.
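
As a point of contrast, the sampler model he describes fits in a few lines. This toy sketch (file names and structure invented) shows the fixed, context-free mapping he means: one key, one recording, no influence from neighboring notes.

    # Toy model of the sampling paradigm: a fixed key-to-recording mapping.
    sample_map = {
        60: "violin_C4.wav",
        62: "violin_D4.wav",
        64: "violin_E4.wav",
    }

    def trigger(key: int) -> str:
        # Context-free lookup: the same key always returns the same
        # recording, regardless of what was played before or after.
        return sample_map[key]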

RA: Hang on—is Synful Orchestra a software synthesizer plugin, or a "reconstructive phrase-modeling synthesizer"?

EL: The implementation is in the form of a soft synth plugin, but the underlying technology, RPM, is the interesting part. The idea of RPM is that you're trying to figure out from the incoming MIDI note events what kind of phrase you're playing. Let's say you have a series of four notes, for instance C-D-E-G [sings the "Tennessee Waltz" intro melody]. Or let's make it a little more interesting, with more character, played as [stretches the same phrase out] "Da-da-daaaa-daaaa," so you have some articulation in there. Now, let's make this phrase go a little faster. We now have a C and a D separated by a very short silence, a D and E separated by the same silence, and finally an E-to-G slur at the end. All the notes in this little group form one quick phrase gesture.

When a real instrument plays this phrase, all these notes influence each other—especially across the short silences between the first three notes and the slur between the E and the G. The way those notes sound and are played on the instrument is affected by their context in the phrase. In speech technology we call this "co-articulation": the pronunciation of each syllable and vowel is affected by the sounds that come before and after it. It's the same for the notes in our phrase here, especially when they're in such close proximity.
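
A hedged sketch of that first analysis step might look like the following: classify each note-to-note transition by the gap, or overlap, between one note's end and the next note's start. The thresholds are assumptions for illustration, not Synful's actual values.

    from dataclasses import dataclass

    @dataclass
    class Note:
        pitch: int      # MIDI note number
        on: float       # note-on time in seconds
        off: float      # note-off time in seconds

    def classify_transitions(notes):
        """Label each consecutive pair as slurred, detached, or separated."""
        labels = []
        for prev, nxt in zip(notes, notes[1:]):
            gap = nxt.on - prev.off          # negative gap means overlap
            if gap < 0:
                labels.append("slurred")     # overlap -> legato transition
            elif gap < 0.08:                 # short silence (threshold assumed)
                labels.append("detached")
            else:
                labels.append("separated")
        return labels

    # The C-D-E-G example from the interview: short silences, then a slur.
    phrase = [Note(60, 0.00, 0.20), Note(62, 0.25, 0.45),
              Note(64, 0.50, 1.05), Note(67, 1.00, 1.60)]
    print(classify_transitions(phrase))      # ['detached', 'detached', 'slurred']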

[Flow chart: Synful's Reconstructive Phrase Modeling maps incoming MIDI data to the phrase database to shape the output of the synthesizer in real time.]

RA: How do you build and play an RPM instrument?

EL: I begin by recording a bunch of musical phrases from an instrument and storing them in a phrase database. When MIDI comes in [from a subsequent performer or sequencer], Synful Orchestra analyzes it in terms of separation or overlap between notes, note duration, velocity, expression control, etc. The idea is to make as clear a picture as possible of the phrase being played from the incoming MIDI.

Then, in real time, the database searches for little pieces of phrases that correspond to the incoming MIDI. This could be just a transition between two notes or a series of several fast notes. For example, if someone plays C-D-E-G on the keyboard with a little separation between C and D and between D and E and a little overlap between E and G, then in real time the database is searched for phrase examples like that. Somewhere in the database we might find a C#-D-E phrase with the right kind of separation and a legato transition F to A.

Now, if we transpose C# to C and transpose F/A to E/G, adjust the timing a bit, and splice the pieces together, we have our desired phrase. That's the way Synful Orchestra and RPM work. There's a lot of searching for little phrase fragments, a lot of pitch shifting and time adjustment—or let's just say "morphing"—and a lot of splicing of phrase fragments. That's something that you cannot do with a sampler with sounds stored as PCM [pulse code modulation] samples.
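
In pseudocode terms, that lookup might be sketched like this. The database entries, distance score, and semitone shift below are simplified stand-ins for RPM's real additive-domain search and morphing.

    fragment_db = [
        # recorded note-to-note transitions, keyed by pitches and articulation
        {"from": 61, "to": 62, "artic": "detached", "id": "frag_017"},
        {"from": 65, "to": 69, "artic": "slurred",  "id": "frag_102"},
    ]

    def find_fragment(from_pitch, to_pitch, artic):
        """Pick the stored transition closest in pitch with matching articulation."""
        candidates = [f for f in fragment_db if f["artic"] == artic]
        best = min(candidates,
                   key=lambda f: abs(f["from"] - from_pitch) + abs(f["to"] - to_pitch))
        # The chosen fragment would then be pitch-shifted by the difference
        # and time-adjusted before being spliced onto the preceding fragment.
        shift = from_pitch - best["from"]
        return best["id"], shift

    # The interview's example: a recorded C#-D transition reused for C-D,
    # shifted down one semitone.
    print(find_fragment(60, 62, "detached"))   # ('frag_017', -1)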

RA: Why? Because the resulting database would be too large?

EL: Not only that, but even ignoring the size problem, it's just not a flexible or malleable way to store sound. In PCM it is very difficult to change the pitch without changing the timbre. It's also difficult to change the length of a note without changing the speed of the vibrato, and it's difficult or impossible to splice phrases together without a noticeable timbral discontinuity. That's why these sample libraries are getting so huge trying to cover even the simplest note transitions.
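
A small NumPy experiment shows the coupling he means. The naive way to raise the pitch of a raw PCM note is to resample it (play it back faster), but that also shortens the note and speeds up its vibrato; with a real instrument tone it would shift the formants and change the timbre too. The signal here is a synthetic stand-in for a recording.

    import numpy as np

    SR = 44100
    t = np.arange(int(SR * 2.0)) / SR                 # a 2-second "recording"
    vibrato = 1 + 0.005 * np.sin(2 * np.pi * 6 * t)   # 6 Hz vibrato
    phase = 2 * np.pi * np.cumsum(440 * vibrato) / SR
    note = np.sin(phase)                              # 440 Hz tone with vibrato

    # "Transpose" up two semitones by naive resampling.
    ratio = 2 ** (2 / 12)
    idx = (np.arange(int(len(note) / ratio)) * ratio).astype(int)
    shifted = note[idx]

    print(len(note) / SR, len(shifted) / SR)   # 2.0 s vs. ~1.78 s
    # The note is now shorter and its vibrato runs at ~6.7 Hz: pitch,
    # duration, and modulation rate are locked together in PCM.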

Here's an analogy from the graphics world: a bitmapped versus a vector representation of an image. In Photoshop you're for the most part manipulating bitmaps, and in Illustrator you're manipulating vector graphics. So, in Illustrator you're manipulating objects that are represented by formulas, such as a circle, a square, a rectangle, and so on. In Photoshop, there are some filters that try to get smart about a bitmapped object's form, but essentially there's no knowledge of what the objects "are" in the picture. It's all just color represented by bits. So, in a sense, the traditional sampled representation of an instrumental sound is like a bitmap. It's a dumb image in which you have no knowledge of the contents. Let's say there was some circular object in your picture, a light bulb, for instance—

RA: Perfect example! There's a dim light bulb over my head beginning to glow brighter.

EL: [Laughs.] Right! So let's say you want to move that light bulb to the right, or make it bigger or smaller. In Photoshop it is very difficult to do that because there's no knowledge that there's a vector object within all those colors that are the light bulb. In Illustrator it'd be easy because it's represented by an intelligent circle object that you can grab, move, enlarge, and even recolor.

In a sense, what I'm doing with Synful is starting with a bitmapped image of sound, a PCM recording of a musical phrase, and trying to convert that bitmap into a smarter vector sound image: the RPM-additive representation. It knows about the objects, the notes, yet still sounds like the original PCM recording, and it gives you the flexibility to move those objects around just as you do with vector objects in Illustrator. But it's still difficult to do with sound. For me, Synful Orchestra is technology [for] getting far more expressive musical synth performances and compositions.

[Photo: Lindemann (right) jams with guitarist and drum machine inventor Roger Linn.]

RA: Does each instrument have its own phrase database?

EL: Yes. I record musicians playing phrases and capture a wide variety of phrases at different pitches, a collection of phrases that represent the various ways each instrument can be played, especially in orchestral settings. It's a subtle, difficult problem getting the right collection of phrases. I'm working on that now by doing new recordings of musicians and building better databases of phrases.

RA: For a solo violin, for instance, how many notes are actually sampled into Synful Orchestra?

EL: For the current violin database there are 750 notes. Now, one of the interesting by-products of the additive synthesis representation is that it is much smaller than the equivalent recorded PCM phrases—like less than one one-hundredth or even one one-thousandth the size. That's why my entire orchestra fits inside of 32 megabytes of RAM.

RA: Thirty-two megs? That's amazing.

EL: Especially when you consider that many sampled orchestral collections these days are on the order of 300 gigabytes, and even into the terabyte range. But it's not just that additive is smaller than PCM sample for sample; it's also that I'm able to reuse a piece of a note over a much wider pitch range. So it's additive synthesis, and the way I use this additive synth engine, that allows me to transpose over a larger range and to reuse and recombine materials in a much more flexible way.
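
Some back-of-envelope arithmetic makes the savings plausible, though the parameters below are assumptions rather than Synful's real figures: compare a three-second stereo 16-bit PCM note at 44.1 kHz with an additive description of 40 harmonics, each stored as an envelope of 100 breakpoints.

    pcm_bytes = 3 * 44100 * 2 * 2    # seconds * rate * bytes/sample * channels
    add_bytes = 40 * 100 * 2 * 4     # harmonics * breakpoints * (amp, freq) * float32

    print(pcm_bytes)                 # 529,200 bytes (~517 KB)
    print(add_bytes)                 # 32,000 bytes (~31 KB)
    print(pcm_bytes / add_bytes)     # ~16x from the representation alone

Sparser envelopes and, as he notes, reusing one stored note across a wide pitch range push the effective ratio toward the hundredfold-to-thousandfold figures he quotes.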

RA: All this technology is impressive, but at some point haven't you just thought about giving up and simply investing your time in taking actual violin lessons?

EL: Yes, exactly—that does come up in this line of research! But, of course, we're talking here about the desire to compose and perform realistically on a computer using a keyboard controller. Composers want and need to hear their music realized as realistically as possible, but it's still very difficult for them to get performances. When I was a young composer at 18, writing my first orchestral pieces, I would have loved to have this technology. I would have even loved to use a traditional sampler way back then, for that matter.

RA: Would you say that emerging computer technology is making a positive difference in the traditional art of compositional music?

EL: Oh, yes—technology is changing the compositional process. It's here to stay. I'm trying to make the technology as expressive as possible.

[Photo: Breakfast of the music technology titans: Tom Oberheim, Eric Lindemann, David Wessel, Max Mathews, and Keith McMillen dine in Berkeley.]

Synful QuickTip #1: Step on the Expression Pedal

If you've never plugged an expression pedal into the back of your keyboard, your performances are that much less expressive. An expression pedal, which pairs a variable resistor with a seesaw-like surface, can be used to control the performance parameters that each synthesizer engine provides. Want a little modulation on the flute solo or breath in the trumpet part? Move the pedal up and down as you play and now you're really expressing yourself. The same goes for using a volume pedal or breath controller.

"In order to use Synful Orchestra effectively you need to use a volume or expression pedal or similar controller," says Lindemann. "Synful Orchestra responds to continuous changes in volume or expression with timbre changes. If you're playing a trumpet sound and step on the volume pedal, the trumpet gets brassier. Without this control, you can't contour long notes and the phrasing will sound stiff."

Synful QuickTip #2: Express Yourself in Sequence

"Synful Orchestra looks at little separations between notes, note overlaps, and velocity values to determine what kind of articulation to use—slurred, tongued, detached, etc," Lindemann says. "So you either need to take care when playing at the keyboard or you need to edit the tracks you've recorded later, in your MIDI sequencer."

Lindemann suggests editing and adjusting the note separations and velocities to achieve the desired phrasing of an orchestral part—a good tip for any MIDI performance. "Synful Orchestra is very responsive," he concludes. "But you still need to control the musical phrasing with MIDI."
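
The same edit can be scripted. This hedged sketch (the thresholds and data layout are invented; a library such as pretty_midi would give you real note objects) nudges note-off times so transitions come out slurred or detached:

    # Shape phrasing by adjusting note-off times relative to the next note-on.
    def shape_phrasing(notes, articulations, overlap=0.02, gap=0.05):
        """notes: list of [pitch, start, end]; articulations: one label per transition."""
        for i, artic in enumerate(articulations):
            next_start = notes[i + 1][1]
            if artic == "slurred":
                notes[i][2] = next_start + overlap   # extend into the next note
            else:
                notes[i][2] = next_start - gap       # leave a short silence
        return notes

    phrase = [[60, 0.0, 0.5], [62, 0.5, 1.0], [64, 1.0, 1.5], [67, 1.5, 2.0]]
    shape_phrasing(phrase, ["detached", "detached", "slurred"])
    # C and D now end 50 ms early; E overlaps G by 20 ms, producing a slur.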

Music Examples: Synful Orchestra

These MP3 examples were made by playing Synful Orchestra from standard MIDI files. The Beethoven string quartet, Copland's "Fanfare for the Common Man," and Hari-Hara were sequenced in Cakewalk Sonar; Stravinsky's Rite of Spring was sequenced in Steinberg Cubase. Production began with the basic notes, followed by volume-pedal and velocity adjustments, plus a bit of pitch-bend in the Beethoven example. Note lengths were adjusted so that notes either overlap (producing legato) or separate (producing detached phrasing). All examples feature Lexicon Pantheon reverb processing.

The final example is the first of four movements of the ballet Hari-Hara, composed by Lindemann's 18-year-old daughter Anna, the program's alpha tester.

Randy Alberts is an author, musician, and photographer who lives on Lummi Island, Washington.



Copyright © 2009 O'Reilly Media, Inc.