Chip music generated by WebAssembly

Thinking of chip music, my first association is the music of the Commodore 64 with its 3-channel SID chip. For each channel you could choose a waveform (saw, triangle, square, noise), control the envelope, apply some basic filters, and by writing some clever code to control it in real time you could produce quite impressive pieces of music - though with the characteristic sound we think of as "chip music".

Since then chiptunes have evolved. While still preserving the characteristic sound, chip music is sometimes so advanced that it's hard to believe it's generated fully in code, without samples. One example of this is the 4klang synth, which generates rich-sounding music from less than 4kb of data and code. All generated by the chip (CPU).

So after playing with 4klang for a while, I thought about WebAssembly. Why not try generating chip music in WebAssembly, and also make use of the portability of the WASM module format to create instruments in code? For such a purpose I needed a language allowing rapid development while producing small and performant WebAssembly modules, and I found AssemblyScript to be great for that task.

Putting together the WebAssembly music project

AudioWorklet technology provides low latency audio processing in the browser, and WebAssembly makes it possible to synthesize audio in real time with consistent, predictable performance. This project uses both technologies to create a performant synthesizer and sequencer with MIDI connectivity, and a live coding environment for expressing music in JavaScript. The synthesizer is written in AssemblyScript, producing small and performant WebAssembly binaries.
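
The glue between the two is an AudioWorkletProcessor that instantiates the WebAssembly module and copies the rendered samples to the audio output. Here's a minimal sketch of that idea - the export names fillSampleBuffer, getLeftBufferPtr and getRightBufferPtr are hypothetical, and the actual wiring in the project may differ:

    class WasmSynthProcessor extends AudioWorkletProcessor {
        constructor() {
            super();
            this.synth = null;
            // the main thread posts the compiled wasm binary to the worklet
            this.port.onmessage = async (event) => {
                const result = await WebAssembly.instantiate(event.data);
                this.synth = result.instance.exports;
            };
        }

        process(inputs, outputs) {
            if (this.synth) {
                // let the synth render the next 128-sample frame into wasm memory
                this.synth.fillSampleBuffer();
                const memory = this.synth.memory.buffer;
                // copy left and right channels to the output
                // (assumes the node was created with 2 output channels)
                outputs[0][0].set(new Float32Array(memory, this.synth.getLeftBufferPtr(), 128));
                outputs[0][1].set(new Float32Array(memory, this.synth.getRightBufferPtr(), 128));
            }
            return true; // keep the processor alive
        }
    }

    registerProcessor('wasm-synth-processor', WasmSynthProcessor);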

(Overview diagram)

Synthesizer & sequencer

The synthesizer and sequencer are contained in the same WebAssembly module. The sequencer is pattern based: a pattern contains note numbers, and for each instrument there is a pattern list pointing to the chain of patterns to be played. This is in fact the same approach as in 4klang, a synthesizer / sequencer (in x86 assembly) for 4kb demos/intros, and the JavaScript APIs for writing music can produce pattern data for both 4klang and the WebAssembly synth.

Example of song data represented as a JavaScript object:

{
    patterns: [
        [64, , 65, , , 67], // pattern no 0
        [22, 23, 34, 34, 34] // pattern no 1
    ],
    instrumentPatternLists: [
        [0, 1], // play pattern 0, then 1 for instrument 0
        [1, 1], // play pattern 1, then 1 again for instrument 1
    ]
}

Both 4klang and the WebAssembly synth receive this as a sequence of numbers (bytes) in memory.
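
The JavaScript API takes care of flattening these arrays into a byte sequence. A minimal sketch of the idea, assuming fixed-size patterns where an empty slot becomes a zero (the actual binary layout expected by the synth may differ):

    function flattenPatterns(patterns, patternSize) {
        const bytes = new Uint8Array(patterns.length * patternSize);
        patterns.forEach((pattern, patternNo) => {
            for (let step = 0; step < patternSize; step++) {
                // holes in the array ([64, , 65]) are undefined and become 0 (no new note)
                bytes[patternNo * patternSize + step] = pattern[step] | 0;
            }
        });
        return bytes;
    }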

Coding an instrument in AssemblyScript

The primary downside of coding an instrument is that you have to compile it, which takes a little more time than just adjusting parameters in real time. The upside is that you can do whatever you want - you're not limited to a fixed set of parameters. And the AssemblyScript compiler is fast, so you won't have to wait long.
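
Compiling is a single invocation of the AssemblyScript compiler, something like this (the file names here are hypothetical, and the exact flags depend on the AssemblyScript version):

    asc mysong.ts -b mysong.wasm --optimize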

Coding an instrument is about receiving note signals and implementing the method that produces the next sample of the audio signal.

Here's a simple sine lead instrument:

import { StereoSignal } from '../../synth/stereosignal.class';
import { Envelope } from '../../synth/envelope.class';
import { notefreq } from '../../synth/note';
import { SineOscillator } from '../../synth/sineoscillator.class';

export class SineLead {
    private _note: f32;
    
    // two sine oscillators
    readonly osc: SineOscillator = new SineOscillator();
    readonly osc2: SineOscillator = new SineOscillator();
    
    // envelope
    readonly env1: Envelope = new Envelope(0.02, 0.15, 0.05, 0.3);

    // output signal
    readonly signal: StereoSignal = new StereoSignal();

    set note(note: f32) {
        if (note > 1) {
            // note values > 1 trigger a note to play
            this.osc.frequency = notefreq(note) * 2; // one octave up for this osc
            this.osc2.frequency = notefreq(note);
            this._note = note;
            this.env1.attack();
        } else {
            this.env1.release();
        }
    }

    get note(): f32 {
        return this._note;
    }

    next(): void {
        const env1: f32 = this.env1.next();

        let osc: f32 = this.osc.next();
        let osc2: f32 = this.osc2.next() * 0.2 * env1;
        osc *= env1;

        // pan position derived from the note number (0-127)
        const pan = this._note / 127;

        // output signal
        this.signal.left = osc * pan + osc2 * (1 - pan);
        this.signal.right = osc * (1 - pan) + osc2 * pan;
    }
}
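
The Envelope used above is an ADSR envelope, so the four constructor arguments are presumably attack, decay, sustain and release. A minimal sketch of how such an envelope can be implemented, assuming a fixed 44100 Hz sample rate (the actual synth/envelope.class in the project may differ):

    const SAMPLERATE: f32 = 44100;

    // hypothetical minimal ADSR implementation - the real Envelope class may differ
    export class Envelope {
        private level: f32 = 0;
        private state: i32 = 0; // 0 = idle, 1 = attack, 2 = decay, 3 = sustain, 4 = release

        private attackStep: f32;
        private decayStep: f32;
        private sustainLevel: f32;
        private releaseStep: f32;

        constructor(attackTime: f32, decayTime: f32, sustainLevel: f32, releaseTime: f32) {
            const one: f32 = 1;
            this.attackStep = one / (attackTime * SAMPLERATE);
            this.decayStep = (one - sustainLevel) / (decayTime * SAMPLERATE);
            this.sustainLevel = sustainLevel;
            this.releaseStep = sustainLevel / (releaseTime * SAMPLERATE);
        }

        attack(): void { this.state = 1; }
        release(): void { this.state = 4; }

        next(): f32 {
            if (this.state == 1) {         // ramp up until full level
                this.level += this.attackStep;
                if (this.level >= 1) { this.level = 1; this.state = 2; }
            } else if (this.state == 2) {  // fall towards the sustain level
                this.level -= this.decayStep;
                if (this.level <= this.sustainLevel) { this.level = this.sustainLevel; this.state = 3; }
            } else if (this.state == 4) {  // fade out after release()
                this.level -= this.releaseStep;
                if (this.level <= 0) { this.level = 0; this.state = 0; }
            }
            return this.level;
        }
    }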

See more instruments here.

Creating a mix

When you've created your instruments, you need to mix them so that they can play together.

So for each audio frame you call the next() method on each instrument, and add its output signal to the mixer output lines:

    bass.next();
    mainline.addStereoSignal(bass.signal, 4.5, 0.5);
    reverbline.addStereoSignal(bass.signal, 0.3, 0.5);    

    sinelead.next();
    mainline.addStereoSignal(sinelead.signal, 2.4, 0.5);
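
A mix line is essentially an accumulator of stereo signals. As a minimal sketch, assuming the second and third arguments of addStereoSignal are gain and pan (the real mixer in the project may differ):

    import { StereoSignal } from '../../synth/stereosignal.class';

    export class MixLine {
        readonly signal: StereoSignal = new StereoSignal();

        // accumulate a source signal with the given gain and pan (0 = hard left, 1 = hard right)
        addStereoSignal(source: StereoSignal, gain: f32, pan: f32): void {
            const one: f32 = 1;
            this.signal.left += source.left * gain * (one - pan);
            this.signal.right += source.right * gain * pan;
        }
    }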

Have a look at a few mixes here.

Creating a song using the Javascript API

Expressing music in code is powerful. I think of sheet music as a kind of code too - it has loops, pauses, commands for increasing/decreasing velocity and so on. By using a programming language you get all the power of the language, and editing music in a text editor means that you can easily copy / move / paste code. You can also use versioning tools like Git and easily see the diff from version to version.
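
As a small, hypothetical illustration of that power (the actual pattern API may look different), here's a loop generating a bass pattern where the figure is transposed on the last repetition:

    const bassPattern = [];
    for (let repetition = 0; repetition < 4; repetition++) {
        const transpose = repetition === 3 ? 7 : 0; // 7 semitones = up a fifth the last time
        bassPattern.push(36 + transpose, 0, 43 + transpose, 0);
    }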

It's also easy to share songs through GitHub gists.

I'll come back and write more about the details of the API, but until then, take a look at the song sources above. You can also find my song sources here.