A beginner's notes on physical modeling synthesis with AssemblyScript

NOTE: This article should not be taken as a reference or as proper teaching material on physical modeling synthesis. First of all, these are my notes and experiences from my own experiments on the subject. If you are at the beginner level and run into similar issues when experimenting on your own, maybe you'll find some inspiration here. It's also a showcase of creating instruments as WebAssembly modules.

String and wind instruments are often sampled for use in electronic music. Many samples are required to cover variations across octaves, velocities and pressures, and you can never really get enough samples. An alternative to sampling is to simulate what happens when you pluck a string or when air flows through a pipe.

I've just started exploring this myself, and I'm amazed by how little computing power it takes to create an instrument that feels alive, compared to the more "static" sounds of samples and additive/subtractive synthesis.

Realistic instruments are another chapter though. Synthesizing a real piano or guitar takes a lot more than the initial experiments I've explored here, and I'm not even close to mastering that art at this point. The basics and principles are the same however, and there are many professional plugins that offer realistic-sounding instruments using physical modeling synthesis.

In this article I'll focus on what I've learned and discovered so far, and on challenges that had to be solved for basic things such as playing a note in tune. Still, even with these few beginner skills, it's quite amazing to develop sounds that feel so rich. Every note sounds a little different depending on how hard you press the keys or how you combine notes, and that makes the whole sound more interesting.

I'll start with the showcase, and you can go on to the details below if you're interested:

The basic concept

It all starts with applying force: plucking a string, or blowing air into a wind instrument. We can simulate this with noise, which is just random numbers. Using Math.random() is the simplest way of creating a noise signal. In physical modeling synthesis terms we call this the exciter.
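
Such a noise function could be as simple as this (a sketch of the idea; the noise() used later in this article comes from the full source and may differ):

function noise(): f32 {
  // white noise: uniform random numbers in the range -1..1
  return (Math.random() * 2 - 1) as f32;
}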

When force is applied to a string it starts to vibrate, and the length and tension of the string decide the vibration frequency, or pitch. We can simulate this by feeding the noise signal into a buffer whose length corresponds to the pitch we want to achieve. We then take the oldest value from the buffer, add it to the output, mix it with the input noise, and write it back into the buffer with some attenuation. This becomes a feedback loop that after a short while settles into a steady tone resembling a string instrument.
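
Stripped of envelopes and filters, the core of that feedback loop could look like this minimal sketch (hypothetical names, assuming the SAMPLERATE constant from the full source):

const loopBuffer = new StaticArray<f32>((SAMPLERATE / 440.0) as i32); // ~440 Hz
let loopPos: i32 = 0;

function nextSample(exciter: f32): f32 {
  const feedback = loopBuffer[loopPos];  // oldest sample in the loop
  const signal = exciter + feedback;     // mix exciter and feedback
  loopBuffer[loopPos] = signal * 0.99;   // write back, slightly attenuated
  loopPos = (loopPos + 1) % loopBuffer.length;
  return signal;
}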

Apply an envelope to the noise input signal to simulate plucking or drawing a bow. A short attack and decay will resemble plucking, while longer attacks and decays will sound more like a bowed violin.
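
A minimal attack/decay envelope could be sketched like this (the Envelope class in the full source is more complete):

class ExciterEnvelope {
  value: f32 = 0;
  attacking: boolean = false;

  constructor(private attackDelta: f32, private decayFactor: f32) {}

  attack(): void {
    this.attacking = true;
  }

  next(): f32 {
    if (this.attacking) {
      // linear ramp up towards 1
      this.value += this.attackDelta;
      if (this.value >= 1) {
        this.value = 1;
        this.attacking = false;
      }
    } else {
      // exponential decay towards 0
      this.value *= this.decayFactor;
    }
    return this.value;
  }
}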

Applying a low-pass filter to the enveloped noise gives a clearer tone with less noise and reduces the initial "crispiness", and additional low-pass filtering inside the feedback loop makes for a warmer sound.

This approach to physical modeling synthesis is called waveguide synthesis, and my first WaveGuide implementation in AssemblyScript is shown below. The realtime processing happens in the process() method, and as you can see there's not much more to it than what I've described above. The rest is for setting up the WaveGuide.

class WaveGuide {
  envExciter: Envelope;
  filterExciter: BiQuadFilter = new BiQuadFilter();
  // delay line sized for the lowest supported note
  delay: DelayLineFloat = new DelayLineFloat((SAMPLERATE / notefreq(1)) as i32);
  filterFeedback: BiQuadFilter = new BiQuadFilter();
  feedbackLevel: f32;
  feedbackFilterFreq: f32;
  freq: f32;
  exciterenvlevel: f32;

  constructor(exciterAttack: f32, exciterRelease: f32, exciterFilterFreq: f32, feedbackLevel: f32) {
    this.envExciter = new Envelope(exciterAttack,
                                   exciterRelease, 0,
                                   exciterRelease);
    this.filterExciter.update_coeffecients(FilterType.LowPass, SAMPLERATE,
                                           exciterFilterFreq, Q_BUTTERWORTH);
    this.feedbackLevel = feedbackLevel;
  }

  setFilterExciterFreq(freq: f32): void {
    this.filterExciter.update_coeffecients(FilterType.LowPass, SAMPLERATE,
                                           freq, Q_BUTTERWORTH);
  }

  start(freq: f32, feedbackFilterFreq: f32): void {
    if (freq != this.freq) {
      this.freq = freq;

      // clamp the feedback filter cutoff to a sensible range
      const maxFeedbackFilterFreq: f32 = 20000;
      if (feedbackFilterFreq > maxFeedbackFilterFreq) {
        feedbackFilterFreq = maxFeedbackFilterFreq;
      } else if (feedbackFilterFreq < 10) {
        feedbackFilterFreq = 10;
      }
      this.filterFeedback.update_coeffecients(FilterType.LowPass, SAMPLERATE,
                                              feedbackFilterFreq, Q_BUTTERWORTH);

      // clear the feedback filter state to avoid leftovers from previous notes
      this.filterFeedback.y1 = 0;
      this.filterFeedback.y2 = 0;
      this.filterFeedback.x1 = 0;
      this.filterFeedback.x2 = 0;
      this.filterFeedback.s1 = 0;
      this.filterFeedback.s2 = 0;

      // measure the phase delay (in samples) that the feedback filter
      // introduces at the note frequency, so the delay line can be
      // shortened accordingly
      const filterphase: f32 = filterPhase(freq,
                                           SAMPLERATE,
                                           this.filterFeedback.coeffs.b0,
                                           this.filterFeedback.coeffs.b1,
                                           this.filterFeedback.coeffs.b2,
                                           this.filterFeedback.coeffs.a1,
                                           this.filterFeedback.coeffs.a2);

      // clear the exciter filter state as well
      this.filterExciter.y1 = 0;
      this.filterExciter.y2 = 0;
      this.filterExciter.x1 = 0;
      this.filterExciter.x2 = 0;
      this.filterExciter.s1 = 0;
      this.filterExciter.s2 = 0;

      this.feedbackFilterFreq = feedbackFilterFreq;
      // delay line length is one period of the note, minus the filter's phase delay
      this.delay.setNumFramesAndClear((SAMPLERATE / freq) - filterphase);
      this.envExciter.val = 0;
    }
    this.exciterenvlevel = 1;
    this.envExciter.attack();
  }

  process(): f32 {
    // the exciter: enveloped, low-pass filtered noise
    const envexciter = this.envExciter.next() * this.exciterenvlevel;
    let exciterSignal: f32 = noise() * envexciter;
    exciterSignal = this.filterExciter.process(exciterSignal);

    // mix the exciter with the feedback coming out of the delay line
    const feedback = this.delay.read();
    let signal = exciterSignal + feedback;

    // filter and attenuate before writing back into the delay line
    signal = this.filterFeedback.processForm2(signal);
    this.delay.write_and_advance(signal * this.feedbackLevel);
    return signal;
  }
}
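
For reference, using the class could look something like this (hypothetical parameter values):

// a plucked string at 440 Hz, with the feedback filter cutoff well above the pitch
const waveguide = new WaveGuide(0.01, 0.1, 2000, 0.99);
waveguide.start(440.0, 440.0 * 8);

// then, for every audio frame:
const sample: f32 = waveguide.process();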

Go to the Wasm-music link above for the full source.

Challenges

Though it was quite easy to get a sound, I found very early that some things could easily get out of control. We are dealing with feedback here, which is kind of unpredictable, so you need the mute button at hand. Pitch is not straightforward either, since the input signal isn't simply a sine wave at a preset frequency.

Feedback out of control

As long as you feed your delay line with a signal multiplied by a factor less than 1, you reduce the risk of the feedback getting out of control. Filtering the feedback is also essential for controlling the characteristics of the instrument, but if the cutoff frequency approaches the desired pitch, it will change that pitch too. Sometimes you'd like a feedback level of 1 or even higher, and then it's worth introducing soft clipping to keep the system from spinning out of control. A simple soft clipping method is tanh.
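
A minimal sketch of that, using AssemblyScript's Mathf:

function softClip(signal: f32): f32 {
  // tanh is roughly linear for small signals and smoothly limits larger
  // signals to the range -1..1
  return Mathf.tanh(signal);
}

// e.g. in the feedback loop:
// delay.write_and_advance(softClip(signal * feedbackLevel));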

Getting the desired pitch

This was one of the points where I struggled the most. It wasn't as easy as just setting the delay line buffer length according to the desired frequency. First I discovered that the low-pass filter in the feedback loop was causing the pitch to change. I tried compensating by measuring the phase delay of the filter, which worked, but it almost felt like tuning a real instrument. A better way was to make the cutoff frequency relative to the desired pitch, which also made more sense for the instruments I was trying to create. That way I could keep the cutoff frequency far enough from the desired pitch that the filter didn't affect it.
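
In practice that can be as simple as scaling the cutoff with the note frequency, like the instruments further down do (the factor here is just an illustration):

function feedbackCutoffFor(freq: f32): f32 {
  // a few octaves above the note: high enough not to pull the pitch,
  // low enough to shape the tone
  return freq * 8;
}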

But that wasn't all. Another thing about delay lines is that the number of samples they contain is an integer, while the delay length you actually need is rarely a whole number of samples. The higher the frequency, the larger the relative error, and rounding up or down to the closest integer will immediately make the instrument sound out of tune. For example, at a 44100 Hz sample rate a 440 Hz tone needs a delay of 100.23 samples; rounding to 100 gives 441 Hz, about 4 cents sharp, and the error grows much worse for higher notes. To compensate we have to support fractional delay line lengths, and after experimenting with alternating lengths and various interpolation schemes, I ended up using an AllPass filter. An AllPass filter keeps the levels intact for all frequencies, but introduces a phase shift (delay) that we can use to our advantage to compensate for the integer length of the delay line.

export class AllPass {
  coeff: f32;
  previousinput: f32;
  previousoutput: f32;

  // delta is the fractional delay, in samples, that the allpass should add
  setDelta(delta: f32): void {
    this.coeff = (1 - delta) / (1 + delta);
  }

  // first-order allpass: flat magnitude response, frequency-dependent delay
  process(input: f32): f32 {
    const output = this.coeff * (input - this.previousoutput) + this.previousinput;
    this.previousoutput = output;
    this.previousinput = input;
    return output;
  }
}

export class DelayLineFloat {
  buffer: StaticArray<f32>;
  frame: f64 = 0;
  numframes: f64 = 1;
  previous: f32;
  allpass: AllPass = new AllPass();

  constructor(private buffersizeframes: i32) {
    this.buffer = new StaticArray<f32>(buffersizeframes);
  }

  read(): f32 {
    const index = this.frame as i32 % this.buffer.length;
    // the allpass adds the fractional part of the delay
    return this.allpass.process(this.buffer[index]);
  }

  // the integer part of the delay goes into the buffer,
  // the fractional part into the allpass
  setNumFramesAndClear(numframes: f64): void {
    this.numframes = Math.floor(numframes);
    this.allpass.setDelta((numframes - this.numframes) as f32);
  }

  write_and_advance(value: f32): void {
    const index = ((this.frame++) + this.numframes) as i32 % this.buffer.length;
    this.buffer[index] = value;
  }
}
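
For example, tuning the delay line to 440 Hz at a 44100 Hz sample rate (with the buffer sized large enough for the lowest supported note):

const delayLine = new DelayLineFloat(1024);
// 44100 / 440 = 100.23 frames: 100 go into the buffer, 0.23 into the allpass
delayLine.setNumFramesAndClear(44100.0 / 440.0);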

But what if you want the low-pass cutoff frequency close to the note frequency

When working on the piano for my WebAssembly Music entry in the Revision 2021 executable music competition, I found that just keeping the cutoff frequency far away from the note to avoid affecting the pitch was not good enough. For higher notes on a piano, filtering off the top harmonics is essential. So I had to figure out a way to calculate the frequency change caused by the filter, and luckily I stumbled upon this in the comment field of some audio site that I don't remember now:

function filterPhase(freq: f32, sampleRate: f32, a0: f32, a1: f32, a2: f32, b1: f32, b2: f32): f32 {
  // Note the naming: a0..a2 are the numerator (zero) coefficients and b1, b2
  // the denominator (pole) coefficients, so pass a biquad's b0, b1, b2, a1, a2.
  const w: f32 = 2 * Mathf.PI * freq / sampleRate;

  // evaluate the transfer function H(z) at z = e^(jw), using z^-1 = e^(-jw)
  const cos1: f32 = Mathf.cos(-1 * w);
  const cos2: f32 = Mathf.cos(-2 * w);

  const sin1: f32 = Mathf.sin(-1 * w);
  const sin2: f32 = Mathf.sin(-2 * w);

  const realZeros: f32 = a0 + a1 * cos1 + a2 * cos2;
  const imagZeros: f32 = a1 * sin1 + a2 * sin2;

  const realPoles: f32 = 1 + b1 * cos1 + b2 * cos2;
  const imagPoles: f32 = b1 * sin1 + b2 * sin2;

  // complex division: H = zeros / poles
  const divider: f32 = realPoles * realPoles + imagPoles * imagPoles;

  const realHw: f32 = (realZeros * realPoles + imagZeros * imagPoles) / divider;
  const imagHw: f32 = (imagZeros * realPoles - realZeros * imagPoles) / divider;

  const phase: f32 = Mathf.atan2(imagHw, realHw);

  return -(phase / (2 * Mathf.PI)) * (sampleRate / freq); // phase delay in samples
}

Here you can simply pass in the coefficients of your biquad filter, and you'll get back the phase delay (in samples) at the given frequency. That value can then be subtracted from the delay line length, with the fractional part handled by the AllPass filter, to "repair" the filter's impact on the pitch, as the start() method of the WaveGuide above does.

Smooth transitions between notes

This is still not 100% solved for me. I could of course go the easy way and just initialize a new WaveGuide for every note played, but that would be very inefficient. A realtime synthesizer performs better with a "voice pool" that you initialize once and reuse for the notes to be played. The same goes for the WaveGuide: the buffers and filters are allocated once and reused with different configurations.

This reuse has a side effect: the delay and filter buffers will contain "leftovers" from time to time. In an additive synth you could simply change the frequency of an existing voice without any clicks, but changing the size of the feedback buffer or the filter coefficients can cause a sudden unwanted transient.

Having a large voice pool and clearing the filter state before each note reduces the problem so that it only occurs when there are no voices left in the pool. In that case my synth normally takes over the oldest note, but with waveguide synthesis this causes a transient. So for now I try to avoid reaching the limit of voices in the pool. Another option might be to keep two delay buffers for this scenario and crossfade during the transition, as sketched below.
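
A rough sketch of that crossfade idea, which is not implemented in my synth, could look like this:

class VoiceCrossfade {
  fadePos: f32 = 1; // 1 means no fade in progress

  constructor(private fadeStep: f32) {}

  startFade(): void {
    this.fadePos = 0;
  }

  // blend the old voice out and the new voice in over 1 / fadeStep frames
  mix(oldSignal: f32, newSignal: f32): f32 {
    if (this.fadePos < 1) {
      this.fadePos += this.fadeStep;
      return oldSignal * (1 - this.fadePos) + newSignal * this.fadePos;
    }
    return newSignal;
  }
}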

Clicks and DC level

Another source of clicks is a drifting DC level. This should be solved on the master mix though, not per instrument, and it applies not just to this type of synthesis but to any sound mixing. It's most easily solved by filtering out the lowest frequencies on the master mix, but it's easy to forget.
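
A minimal sketch of such a filter, a standard one-pole DC blocker (not necessarily how the full source does it):

class DCBlocker {
  x1: f32 = 0;
  y1: f32 = 0;

  process(input: f32): f32 {
    // y[n] = x[n] - x[n-1] + R * y[n-1], where R close to 1 places the
    // cutoff just above 0 Hz
    const output = input - this.x1 + 0.995 * this.y1;
    this.x1 = input;
    this.y1 = output;
    return output;
  }
}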

My instruments

For the demo song I've made a few instruments. I can't say they sound like the real thing, but they do resemble some well-known instruments. And all of this is packed into a 50kB WebAssembly module with no sampled sounds; representing the same as sampled audio data would take several megabytes.

As you can see at the end of the video, you can play these instruments on their own by switching off the sequencer and selecting an instrument from the dropdown. Use your computer keyboard, or even better, connect your MIDI keyboard.

Not really a piano but

more like an electronic piano sound. In this instrument I have three waveguides with different exciter attacks/decays and filters, so it's kind of like three strings.

class Piano extends MidiVoice {
  env: Envelope = new Envelope(0.01, 1, 1.0, 0.1);
  // three waveguides with different exciter attacks/decays and filters,
  // kind of like three strings
  waveguide1: WaveGuide = new WaveGuide(0.06, 0.25, 50, 0.9999);
  waveguide2: WaveGuide = new WaveGuide(0.01, 0.1, 100, 0.999);
  waveguide3: WaveGuide = new WaveGuide(0.02, 0.05, 150, 0.99);

  noteon(note: u8, velocity: u8): void {
    super.noteon(note, velocity);
    const freq = notefreq(note);
    // feedback filter cutoffs are relative to the note frequency,
    // scaled down for higher notes
    this.waveguide1.start(freq, freq * 6000 / Mathf.pow(note, 1.35));
    this.waveguide2.start(freq, freq * 4000 / Mathf.pow(note, 1.3));
    this.waveguide3.start(freq, freq * 2000 / Mathf.pow(note, 1.3));
    this.env.attack();
  }

  noteoff(): void {
    this.env.release();
  }

  isDone(): boolean {
    const ret = this.env.isDone();
    if (ret) {
      this.waveguide1.stop();
      this.waveguide2.stop();
      this.waveguide3.stop();
    }
    return ret;
  }

  nextframe(): void {
    const env = this.env.next();

    const wg1: f32 = this.waveguide1.process();
    const wg2: f32 = this.waveguide2.process();
    const wg3: f32 = this.waveguide3.process();

    const velocity = env * this.velocity / 8;
    // slightly different left/right mixes give a wider stereo image
    this.channel.signal.add(
      (wg1 + wg2 * 0.8 + wg3) * velocity,
      (wg1 + wg2 + wg3 * 0.8) * velocity
    );
  }
}

String sound

This one is a bit simpler: a single waveguide with a long exciter decay to simulate drawing the bow across the string.

class String extends MidiVoice {
  env: Envelope = new Envelope(0.1, 1, 0.9, 0.3);
  waveguide1: WaveGuide = new WaveGuide(0.2, 20.0, 100, 0.995);

  noteon(note: u8, velocity: u8): void {
    super.noteon(note, velocity);
    const freq = notefreq(note);
    this.waveguide1.start(freq, freq * 3500 / Mathf.pow(note, 1.3));
    this.env.attack();
  }

  noteoff(): void {
    this.env.release();
  }

  isDone(): boolean {
    const ret = this.env.isDone();
    if (ret) {
      this.waveguide1.stop();
    }
    return ret;
  }

  nextframe(): void {
    const env = this.env.next();
    const signal = this.waveguide1.process() * env * this.velocity / 8 as f32;
    this.channel.signal.add(signal, signal);
  }
}

The brass sound

This one sounds a bit weak for single notes. You get the best "punch" by playing octaves, which is what I do in the demo song.

class Brass extends MidiVoice {
  env: Envelope = new Envelope(0.01, 1.0, 1.0, 0.1);
  waveguide1: WaveGuide = new WaveGuide(0.02, 0.15, 1000, 0.99999);
  waveguide2: WaveGuide = new WaveGuide(0.03, 0.2, 5000, 1.0);

  noteon(note: u8, velocity: u8): void {
    super.noteon(note, velocity);
    const freq = notefreq(note);
    this.waveguide1.start(freq, freq * 10);
    this.waveguide2.start(freq, freq * 8);
    this.env.attack();
  }

  noteoff(): void {
    this.env.release();
  }

  isDone(): boolean {
    const ret = this.env.isDone();
    if (ret) {
      this.waveguide1.stop();
      this.waveguide2.stop();
    }
    return ret;
  }

  nextframe(): void {
    const env = this.env.next();
    const signal = (this.waveguide1.process() + this.waveguide2.process())
        * env * this.velocity / 24 as f32;
    this.channel.signal.add(signal, signal);
  }
}

Guitar

The guitar is also a single string: a short exciter attack and decay, but long-lasting feedback.

class Guitar extends MidiVoice {
  env: Envelope = new Envelope(0.02, 1, 1.0, 0.1);
  waveguide1: WaveGuide = new WaveGuide(0.005, 0.01, 3000, 0.9999);

  noteon(note: u8, velocity: u8): void {
    super.noteon(note, velocity);
    const freq = notefreq(note);
    this.waveguide1.start(freq, freq * 3500 / Mathf.pow(note, 1.3));
    this.env.attack();
  }

  noteoff(): void {
    this.env.release();
  }

  isDone(): boolean {
    const ret = this.env.isDone();
    if (ret) {
      this.waveguide1.stop();
    }
    return ret;
  }

  nextframe(): void {
    const env = this.env.next();
    const signal = this.waveguide1.process() * env * this.velocity / 8 as f32;
    this.channel.signal.add(signal, signal);
  }
}

Bass

The same goes for the bass, except that the exciter and feedback low-pass filter cutoff frequencies are much lower than for the guitar.

class Bass extends MidiVoice {
  env: Envelope = new Envelope(0.1, 1, 1.0, 0.1);
  waveguide1: WaveGuide = new WaveGuide(0.04, 0.01, 300, 0.999999);

  noteon(note: u8, velocity: u8): void {
    super.noteon(note, velocity);
    const freq = notefreq(note);
    this.waveguide1.start(freq, freq * 7);
    this.env.attack();
  }

  noteoff(): void {
    this.env.release();
  }

  isDone(): boolean {
    const ret = this.env.isDone();
    if (ret) {
      this.waveguide1.stop();
    }
    return ret;
  }

  nextframe(): void {
    const env = this.env.next();
    const signal = this.waveguide1.process() * env * this.velocity / 2 as f32;
    this.channel.signal.add(signal, signal);
  }
}

UPDATE: Improving the piano

After spending some time on the piano, I found a few tricks to make it sound more realistic. Instead of a simple noise burst exciter, I created a WaveGuide with the sound of a piano hammer stroke, which is then used as the exciter for the string.

Here you can see a video when playing the piano: https://youtu.be/RlydTTr952w

And the full sources can be found and tested here:

https://petersalomonsen.com/webassemblymusic/livecodev2/?gist=d71387112368a2692dc1d84c0ab5b1d2

Wrapping up

AssemblyScript is a general-purpose language, but the fact that it compiles quickly to WebAssembly, even directly in the browser, makes it very efficient for rapidly developing instruments. In this example I was able to create the building blocks for waveguide synthesis, and several instruments, with relatively few lines of code. You could certainly get away with even less code in a specialized language such as Faust, but there's something about the freedom and flexibility of a full-featured general-purpose language.

I also hope you can see the possibilities WebAssembly brings to music production. What I'm showing here is just a tiny fraction of what can be done, and of what is being created, with Wasm these days. Wasm is an excellent plugin format, and I expect more and more instrument and effect plugins to ship as WebAssembly modules.

Useful links: