Music can be thought of as sounds organized to be played at given points in time, and it can be represented as a list of sound events. A MIDI sequence is such a list. For example, it can contain a command to start playing a note on a channel with a given velocity, followed later by a command to stop that note. There are also commands for setting the main volume on a channel, pressing the sustain pedal, or altering the pitch bend.
Here's a visualization (in an actual MIDI sequence this would just be hex numbers):
| time (seconds) | command | note | velocity | value |
|---|---|---|---|---|
| 0 | noteon | c5 | 100 | |
| 0.5 | pitchbend | | | 2048 |
| 0.7 | noteoff | c5 | | |
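On the wire, these events are just status and data bytes. Below is a rough sketch of what the raw messages in the table could look like, assuming channel 0 and the convention where middle C (C4) is note number 60, so C5 is 72 (both are assumptions for illustration, not taken from the project):

```javascript
// Raw MIDI bytes for the events in the table above (channel 0 assumed).
// Status byte = command nibble | channel nibble.
const NOTE_ON = 0x90;    // 0x90 + channel
const NOTE_OFF = 0x80;   // 0x80 + channel
const PITCH_BEND = 0xe0; // 0xe0 + channel

const c5 = 72; // assuming the convention where C4 = 60

// A pitch bend value is a 14-bit number split into two 7-bit data bytes.
const bend = 2048;
const bendLSB = bend & 0x7f;
const bendMSB = (bend >> 7) & 0x7f;

const events = [
    { time: 0.0, bytes: [NOTE_ON, c5, 100] },           // note on, velocity 100
    { time: 0.5, bytes: [PITCH_BEND, bendLSB, bendMSB] },
    { time: 0.7, bytes: [NOTE_OFF, c5, 0] }             // note off
];
console.log(events);
```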
Instead of writing separate noteon and noteoff events, it would be easier to write a single note command with a duration parameter. It would also be easier if we didn't have to write the time in seconds, but could instead think in terms of beats based on the current tempo.
```javascript
setBPM(120);
createTrack(2).play([
    [ 0, controlchange(64, 127) ],
    [ 1/2, c4(1) ],
    [ 4/2, d5(1/4) ]
]);
```
Even better would be to write a comma-separated list of notes where each comma represents a step, and where we can decide the number of steps per beat.
```javascript
await createTrack(0).steps(4, [
    c5,,e5(1/2),,g5
]);
```
This is the purpose of the simple JavaScript MIDI API provided in the javascriptmusic project. Running the JavaScript code above produces a list of MIDI events, with note on/off and control change events timed in seconds.
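The timing conversion behind this is straightforward: a beat position times the length of one beat at the current tempo. Here is a minimal sketch (not the project's actual implementation) of expanding beat-timed notes into second-timed note on/off events:

```javascript
// Hypothetical sketch: expand [beat, note, durationInBeats] entries into
// note on/off events timed in seconds, given a tempo in BPM.
function toSeconds(bpm, notes) {
    const secondsPerBeat = 60 / bpm;
    const events = [];
    for (const [beat, note, durationInBeats] of notes) {
        events.push({ time: beat * secondsPerBeat, type: 'noteon', note });
        events.push({ time: (beat + durationInBeats) * secondsPerBeat, type: 'noteoff', note });
    }
    return events.sort((a, b) => a.time - b.time);
}

// At 120 BPM one beat lasts 0.5 seconds.
console.log(toSeconds(120, [[0, 'c4', 1], [2, 'd5', 1 / 4]]));
```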
When writing the JS MIDI code, it's important to wait for each part to finish before starting the next, unless you want the next part to start at the same time. For example, if you've created a function for playing drums like this:
```javascript
const drums = async () => createTrack(0).steps(4, [
    c3,,fs3,,
    d3,,fs3,,
]);
```
and another for the bass:
```javascript
const base = async () => createTrack(1).steps(4, [
    d2,,d3,,
    d2,,d3,,
]);
```
and you want them to play simultaneously in a loop:
```javascript
for (let n = 0; n < 10; n++) {
    base();
    await drums();
}
```
Notice that `base()` is called without waiting, but we put `await` in front of `drums()` since we want the drums to finish before looping.
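An equivalent way to express "start both parts, then wait" is `Promise.all`, which waits for every part, not just the last one. The sketch below uses stand-in async functions (plain timers, purely for illustration) in place of the real `createTrack(...).steps(...)` calls:

```javascript
// Stand-in async parts; in the real API these would be
// createTrack(...).steps(...) calls returning promises.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));
const drums = async () => sleep(20);
const base = async () => sleep(10);

async function loop() {
    for (let n = 0; n < 3; n++) {
        // Start both at the same time, continue when both have finished.
        await Promise.all([base(), drums()]);
    }
}
loop();
```

The difference only matters if the bass part could ever run longer than the drums: `await drums()` alone would then let the bass overlap into the next loop iteration, while `Promise.all` would not.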
So what about recording from your MIDI-keyboard when representing your music in code? We can convert the recorded MIDI events to JS code.
Simply insert the commands `startRecording()` and `stopRecording()` where you want recording to start and stop. Start playing, and when done, click the button for inserting the recording at the current cursor position in the editor.
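The conversion itself is conceptually simple: pair each noteon with its matching noteoff and print a play-list entry. Here is a rough sketch of that idea, not the project's actual exporter:

```javascript
// Hypothetical sketch: turn recorded { time, type, note, velocity } events
// into lines of JS code like "[ 2.50, d5(0.19, 30) ]".
function eventsToCode(recorded) {
    const sounding = {}; // currently sounding notes, keyed by note name
    const lines = [];
    for (const ev of recorded) {
        if (ev.type === 'noteon') {
            sounding[ev.note] = ev;
        } else if (ev.type === 'noteoff' && sounding[ev.note]) {
            const on = sounding[ev.note];
            const duration = (ev.time - on.time).toFixed(2);
            lines.push(`[ ${on.time.toFixed(2)}, ${on.note}(${duration}, ${on.velocity}) ]`);
            delete sounding[ev.note];
        }
    }
    return lines.join(',\n');
}

console.log(eventsToCode([
    { time: 2.5, type: 'noteon', note: 'd5', velocity: 30 },
    { time: 2.69, type: 'noteoff', note: 'd5' }
]));
// → [ 2.50, d5(0.19, 30) ]
```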
In this song most of the parts are recorded from a MIDI-keyboard. For example the first lead solo looks like this in code:
```javascript
const leadsolo1 = async () => createTrack(3).play([
    [ 2.50, d5(0.19, 30) ],
    [ 3.00, f5(0.47, 62) ],
    [ 3.48, g5(0.04, 62) ],
    [ 3.77, gs5(0.11, 78) ],
    [ 3.86, a5(0.91, 60) ],
    [ 5.57, g5(0.17, 54) ],
    [ 5.99, a5(0.45, 74) ],
    [ 6.50, c6(0.07, 65) ],
    [ 6.92, c6(0.13, 72) ],
    [ 7.00, d6(0.45, 58) ],
    [ 7.49, c6(0.15, 49) ],
    [ 7.69, d6(0.98, 74) ],
    [ 9.60, e6(0.04, 77) ],
    [ 9.59, f6(1.09, 95) ],
    [ 10.71, e6(0.10, 63) ],
    [ 10.82, f6(0.14, 90) ],
    [ 10.94, e6(0.04, 58) ],
    [ 10.95, d6(0.58, 84) ],
    [ 11.49, c6(0.15, 69) ],
    [ 11.88, d6(0.16, 82) ],
    [ 12.02, e6(0.99, 83) ],
    [ 13.07, c6(0.84, 83) ],
    [ 13.97, a5(0.97, 87) ],
    [ 14.97, g5(0.56, 87) ]
].quantize(4)); // Quantize to 4 steps per beat
```
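Quantizing snaps each recorded time to the nearest position on a step grid. A minimal sketch of what quantizing to 4 steps per beat could do (the project's actual implementation may differ, e.g. by also adjusting durations):

```javascript
// Hypothetical sketch: snap beat-timed events to multiples of 1/stepsPerBeat.
function quantize(events, stepsPerBeat) {
    return events.map(([time, note]) =>
        [Math.round(time * stepsPerBeat) / stepsPerBeat, note]);
}

// 3.48 snaps up to 3.5, 3.77 snaps down to 3.75.
console.log(quantize([[2.50, 'd5'], [3.48, 'g5'], [3.77, 'gs5']], 4));
```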
See some live examples in this video.
This music project now has several modes. In the first versions, as shown at the WebAssembly Summit 2020, you could export a WebAssembly module that would generate raw audio data. However, that mode is based on a pattern sequencer, which is not compatible with the MIDI-sequencer song mode described here. The project also has a mode for creating Amiga Protracker modules, which exports a `.mod` file.
In the MIDI-sequencer mode described here (files starting with `// SONGMODE=YOSHIMI`), the export button will export to a WAV file. Export to a WASM module is something I would like to create for this MIDI mode too, but then it would be for the AssemblyScript synthesizer. Extending the AssemblyScript synth with MIDI capabilities like polyphony and control changes is on my list of things to look into, along with simplifying the methods for synthesizing sounds.