The ADSR Envelope with Audiolib.js

So, let’s talk envelopes. First what they are, and what they can do to the quality of a sound.

Demo here: https://www.benfarrell.com/labs/examples/envelopes-12-17/, but read on for how it all works!

An envelope is a real simple concept actually. Take any signal, like a sound. Maybe that sound plays at a constant volume. When you apply an “envelope” to that sound, you are changing the volume of that sound while it’s played. It might go up and down, back up again, whatever.

How it goes up and down, and the speed at which the volume is changed is up to the details of the envelope used.

You could create an envelope that takes 8 hours to complete. Maybe you want to go to sleep with some music, and then wake up in the morning with music. If you know it takes you 30 minutes to fall asleep, you’ll start the music playing at a loud volume. Over the next 30 minutes, you envelope your sound from loud to quiet, to off. In the morning, 30 minutes before you wake up, the envelope makes the music go from off, to quiet, to loud again. It wakes you up!

While this 8-hour example illustrates how an envelope works, it’s a bit different when we’re talking about musical tones.

It’s not that different, however. We’re still talking about volume over time, but we’re talking milliseconds instead of hours or even minutes or seconds.

The character or personality of a musical tone can be changed greatly by altering the envelope. While we talk about this in terms of 1/1000th of a second, you don’t really notice the volume changing as you listen to the tone. Your ear doesn’t necessarily detect that the volume is going up and down.

Instead, the sound just has a different tonal quality! A piano, for example, wouldn’t sound as sharp if it didn’t go from no sound to loud that quickly. An accordion, though, takes longer to go from quiet to loud. And that quality – the “attack” – is (amongst other factors) what creates the personality of the tone.

Let’s talk specifics now. ADSR.

That’s Attack, Decay, Sustain, Release. The ADSR envelope is just one type of envelope, but it’s a popular one that’s been used in electronic music for decades.

  • Attack is the period of time right after the note is triggered; it ramps up to what is typically the loudest part of the sound
  • Decay is the phase where you go from the attack to the sustain – you’re “decaying” the volume from that sharp initial peak down to the normal volume
  • Sustain is the normal phase of the sound.  It is typically quieter than the attack peak, and can go on for an indefinite period of time, or for a specific amount of time
  • Release is the draw-down from the sustain level to no sound.  A fade out
[Figure: the attack, decay, sustain, and release phases]
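To make those phases concrete before we get to Audiolib.js itself, here’s a minimal sketch in plain JavaScript (not Audiolib code, and the numbers below are made up purely for illustration) of how an ADSR shape turns elapsed time into a volume level between 0 and 1:

// hypothetical hand-rolled ADSR curve – times in milliseconds, levels from 0 to 1
function adsrLevel(t, attack, decay, sustainLevel, sustainTime, release) {
    if (t < attack) {
        return t / attack;                          // ramp from silence up to full volume
    }
    if (t < attack + decay) {
        var d = (t - attack) / decay;
        return 1 - d * (1 - sustainLevel);          // fall from the peak down to the sustain level
    }
    if (t < attack + decay + sustainTime) {
        return sustainLevel;                        // hold steady while the note is held
    }
    var r = (t - attack - decay - sustainTime) / release;
    return Math.max(0, sustainLevel * (1 - r));     // fade from the sustain level out to silence
}

// a plucky shape: 10ms attack, 50ms decay, sustain at 60%, 200ms sustain, 300ms release
// adsrLevel(5, 10, 50, 0.6, 200, 300) is 0.5 – halfway up the attack ramp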

I’ve talked about Audiolib.js in previous posts.  Audiolib is the Javascript library that enables you to make these dynamic sounds in Chrome and Firefox.

Audio programming isn’t easy though!  So while Audiolib helps out in awesome ways, it doesn’t have concepts of notes and music theory.  You have to tell it what frequencies to play, and if playing a chord, which individual frequencies make up the chord – which I explored and created my own helpers for.

The ADSR envelope is another example of something that Audiolib.js provides; however, it doesn’t provide any obvious usage for it.

There’s actually a VERY good reason for this.  While we’ve been talking about envelopes on musical tones – and the 8-hour sleepy-time envelope is another good usage example – both restrict envelopes to the volume of a sound.  There are tons more places in audio synthesis where envelopes can be applied.  Take effects – you can apply a distortion effect to a sound (like a rock guitar), but you can also envelope the amount of distortion applied to the guitar.  This has nothing to do with volume, and everything to do with just how much of something is applied to something else.

So, I’d like to create a usage of our ADSR envelope that is limited to producing a musical tone – especially in the example of producing live sound by using a trigger (here it will be your computer/laptop keyboard).

Let’s start with our previous example where we extend the Audiolib.js Oscillator via a plugin.  We extended it to simply take a musical notation, like an “A” and set the correct frequency:

audioLib.generators('Note', function (sampleRate, notation, octave) {
    // extend Oscillator
    for (var prop in audioLib.generators.Oscillator.prototype) {
        this[prop] = audioLib.generators.Oscillator.prototype[prop];
    }

    // do constructor routine for Note and Oscillator
    var that = this;

    // are we defining the octave separately? If so add it
    if (octave) {
        notation += octave;
    }

    that.frequency = Note.getFrequencyForNotation(notation);
    that.waveTable = new Float32Array(1);
    that.sampleRate = sampleRate;
    that.waveShapes = that.waveShapes.slice(0);
}, {});

So let’s figure out how to work in an ADSR envelope. The usage for the Audiolib.js version is like so:

myEnvelope = audioLib.ADSREnvelope(sampleRate, attack, decay, sustain, release, sustainTime, releaseTime);

The parameters work like so:

  1. sampleRate – the sample rate of the audio; I won’t go into it here, as it’s a basic setting for audiolib
  2. attack – the amount of time (in milliseconds) the attack phase takes to complete
  3. decay – the amount of time (in milliseconds) the decay phase takes to complete
  4. sustain – the volume level during the sustain phase (from 0 to 1); the default is 1
  5. release – the amount of time (in milliseconds) the release phase takes to complete
  6. sustainTime – the amount of time (in milliseconds) the sustain phase takes to complete.  This param is pretty important: if you pass in null, the sustain period is indefinite, and unless you trigger the envelope again it will stay in the sustain phase forever
  7. releaseTime – the amount of time between the release phase and the envelope looping around to the attack phase again
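For example, with some entirely made-up numbers for a plucky little tone (following the call shown above):

var sampleRate = 44100;

// 20ms attack, 100ms decay, sustain at 60% volume, 250ms release,
// indefinite sustain (null), and 50ms between the release and looping back to attack
var myEnvelope = audioLib.ADSREnvelope(sampleRate, 20, 100, 0.6, 250, null, 50);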

The envelope has six states (0-indexed inside the code):

  • 0 – Attack Phase
  • 1 – Decay Phase
  • 2 – Sustain Phase
  • 3 – Release Phase
  • 4 – Timed Sustain Phase
  • 5 – Timed Release Phase

To kick off our envelope, you trigger it:

myEnvelope.triggerGate(true);

Now we can start using our envelope.  The usage is a little weird to me, as the Audiolib.js library treats it like a “generator” which seems a bit complicated for what it does.  I just want a stream of numbers, but OK I’ll bite.  I’ll use it with the byte arrays and whatnot, as if it’s an Oscillator.

var buffer = new Float32Array(1);
myEnvelope.append(buffer, 1);

So, I’m just pulling one value at a time from the envelope, and putting it into my “buffer”.  But my buffer only has one value in it at any time.  Like I said, I feel like I’m being forced into using it in a more complicated way than I need!  Maybe there’s something I’m missing.

Now, every time I create an audio data point in my sound, I can multiply that data point by my envelope value.  Thus my envelope is applied!

this[this.waveShape]() * buffer[0];

To do this, I overrode the “getMix” function in the Audiolib Oscillator.  But I needed to do other things too.

Since I trigger the envelope with triggerGate, it will cycle through the attack and decay phases into the sustain phase automatically as the envelope is used.

It gets complicated at the release phase though.  After the key on your computer/laptop keyboard is released, we need to enter the release phase.  But we’re still grabbing sound, because the release phase still produces sound as it fades.  So our Oscillator needs to track that it’s in the release phase (it knows by keeping track of envelope.state, which is 3 or 5 here for release or timed release).

Then finally, when it gets back to state 0 and the cycle begins again, we mark this note as “released”, so our buffer knows that it doesn’t need to pull from it anymore.  We have to be very careful not to pull from more notes than we need, because all this music stuff is hard work, and too many notes slow down your CPU and break the audio processing.

The above is for when the sustain phase goes on as long as you hold the key!  What if it’s a timed sustain?  Then we need logic in there to release the note when the envelope is done, rather than when the user releases the key.
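Condensed down, the overridden getMix ends up looking roughly like this – a sketch of the logic just described (the full, working version is in the Note.js linked below, and reposted in one of the comments):

// sketch of the getMix override passed to the Note generator definition
getMix: function () {
    // no envelope attached? just return the raw oscillator sample
    if (!this._envelope) {
        return this[this.waveShape]();
    }

    // pull the next envelope value
    var buffer = new Float32Array(1);
    this._envelope.append(buffer, 1);

    // state 5 is the timed release, so flag that we're releasing
    if (this._envelope.state === 5) {
        this.releasePhase = true;
    }

    // once a releasing envelope cycles back to state 0, the note is done
    if (this.releasePhase && this._envelope.state === 0) {
        this.released = true;
    }

    // a released note contributes silence; otherwise scale the sample by the envelope
    return this.released ? 0 : this[this.waveShape]() * buffer[0];
}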

Here’s our final Note.js code.  And here’s a controller for keeping track of keys being pressed.

It all comes together in my (Chrome only) demo:

https://www.benfarrell.com/labs/examples/envelopes-12-17/

The demo starts out by not using an envelope at all.  You’ll hear some clicky-ness when you press and release a key.  That’s because you’re hearing the transition between no sound and an abrupt start partway through the waveform’s phase.  It’s EXACTLY one of the reasons why envelopes are useful – to ease these transitions in and out.

When you turn on the envelope, you can start adjusting parameters to see how the different properties of attack, decay, sustain, and release affect the overall personality of the tone.

16 thoughts on “The ADSR Envelope with Audiolib.js”

  1. Great post on ADSR envelopes. That’s awesome that you are blogging about audiolib.js. I am a big fan of audiolib.js too 🙂

    I have been messing with ADSR envelopes and ran into a problem with how to use them with a dynamic tempo when producing musical tones. I am working with notes that have a fixed duration and am setting the sustainTime for each note. For example, say I set the attack, decay, and release to 50ms which works great for all notes that have a duration greater than 150ms. But how should I handle a very short note like a 64th note that has a total duration of say 100ms? If I leave the ADSR settings the same I hear a clicky-ness after each 64th note because it never reaches the release phase. For now, I dynamically set the attack, decay, and release values to a much smaller value when the note duration is less than 150ms which prevents the clicky-ness but it changes the character of the note. But maybe that’s the price you have to pay if you want to support a dynamic tempo and no minimum on note duration.

    You mentioned that envelopes can be used to apply effects. I would love to see a demo of how to do this with audiolib. Perhaps that could be a topic for another post.

  2. Thanks! Be warned, I’m self taught on this audio stuff and learning as I go.

    My opinion is that the attack, decay, and sustain time counts toward the length of your note, but the release wouldn’t. So, when you release, even if you’re over the time you wanted, you’d still do the release phase. I’d say this means you can even jump from the attack to the release phase if that’s all the time you’ve allotted (or maybe the decay to the release phase, I’m not decided). Either way, the attack, decay, and release phases are important to the character of your note, and you’ll want to include them to make it all sound like what you want. Ditching phases is up to you and how you want to do things, but the attack and release are pretty important I think, and I don’t think I’d ever want to leave them out, even with the problem you describe.

    You’re probably working with some kind of sequencer for this, when you go between notes, right? I really think that if you have this problem, and don’t want to fudge it by adjusting the envelope time to be shorter than the note you want to play, then you’ll need to mix your tones. Like you’ll have to mix that second note with the tail end of the first note so you’re basically playing the two notes at once.

    I’ll probably tackle something similar soon. I made an arpeggiator in a previous demo, but I’m looking to apply some enveloping to that. Then I plan to play with white noise to do some drums, and THEN finally mess around with the effects. I just haven’t got there yet!

  3. Hi,

    Great blog and very interesting as I am playing around with audiolib.js as part of my college course.

    I am new to coding and especially JavaScript so deciphering and following through the code is a challenge. I really enjoyed the envelope post and have been trying to apply effects to this code and emulate a simple softsynth.

    Before seeing your code I had the most basic of tones generated and was able to apply a delay effect to it by simply appending it to the buffer and managed to adjust the delay time using a html slider.

    delay.append(buffer);

    dtime = parseFloat(document.getElementById("slide").value);
    delay = audioLib.Delay.createBufferBased(2, dev.sampleRate, dtime);

    Using your code I have been unable to apply the delay effect to it. I am hoping that if I can apply the delay effect, I should be able to add other effects and make the sound more interesting.

    My inexperience in coding is holding me back so I am hoping you could give me a few pointers in how to apply this effect to the generated sound.

    Cheers.

    1. Hey, so I really hadn’t gotten into effects yet. However, the easiest place to apply an effect would be the buffer. I’ve been slowly building out a framework of sorts here, but haven’t really messed with the buffer all that much. I betcha it would be easy (and no different than any other Audiolib code you see) to slide in some effects right after all the other code on the audioCallback function in my Main.js. What we’re doing in this function is just grabbing all the keys and generating/mixing the bytes.

      It seems to me it would make the most sense to put the effect on this final buffer right before it gets output. If you put it anywhere else, you could find yourself putting an effect on each keypress – which would be interesting if it worked, but you’d probably end up pegging the CPU.

      So I’d say start with those bytes right before they get sent out in the audioCallback method. Good luck!
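      As a very rough illustration (the callback shape and variable names here are hypothetical, the effect is just the audiolib.js Delay, and this assumes pushSample returns the processed sample):

      // created once, outside the callback (sampleRate assumed to already exist)
      var delay = new audioLib.Delay(sampleRate, 250, 0.5);

      // hypothetical audioCallback: the pressed keys are generated and mixed into buffer
      // first, then the finished buffer runs through the effect once, right before output
      function audioCallback(buffer, channelCount) {
          // ... existing code that fills buffer from the pressed keys ...

          for (var i = 0; i < buffer.length; i++) {
              buffer[i] = delay.pushSample(buffer[i]);   // apply the effect to the final mix
          }
      }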

      1. Apologies that last post turned out a mess.

        I got some effects applied to the sounds.

        I created some effects like so:

        fx = new audioLib.Delay(sampleRate, 250, 0.5);

        Or we could have a chain of effects:

        fx = (new audioLib.LP12Filter(sampleRate, 8000, 20)).join(new audioLib.Delay(sampleRate, 960, .7), new audioLib.Distortion(sampleRate));

        This effect is then pushed to the buffer using the pushSample method in the audioCallback method.

        buffer[current + n] = fx.pushSample(sample);

        I have been looking at using a MIDI keyboard the last few nights to control the synth rather than the qwerty keyboard.
        So far I can receive the MIDI messages and generate tones by passing the note (A4 for example) into your generator class.

        osc = audioLib.generators.Note(dev.sampleRate,myNote);

        I will now look at trying to apply effects to the MIDI keyboard controlled tones. A second audioCallback method will likely be needed in order to mix and send the sound to the buffer.

        Is MIDI something you have looked at yet?

        1. Awesome! You’re making me jealous now because I had to put this down and get going on some other things. I hope to pick this back up again soon. I actually hadn’t looked into MIDI – in fact, I SPECIFICALLY didn’t look into MIDI because I didn’t think it would work in a browser. Are you using Node.js for this or something? If you’re doing this in a browser, I’d love to know how – maybe you’re opening a websocket and using some kind of MIDI server instance? You have a blog? I’d love to read up if you’re posting

          1. Sorry for the late reply.

            I have seen someone using Node.js to pass OSC messages to a browser before but I am using the excellent MidiBridge which you can find at http://abumarkub.net/abublog/?page_id=399

            This uses a combination of Java and JSON to send midi messages to the browser. So far I have been able to retrieve these messages and send them to the generator and output the desired sound.

            Unfortunately, I have not been able to exit gracefully from the note using the envelope. Once the key is released the sound comes to an abrupt stop with no sustain or release. I might have to store each note played in an array and apply the envelope to them on release but I haven’t figured it out yet.

            I don’t have a blog at the moment but once I get this project fully up and running I’ll create one and publish it. Any progress I make, I’ll let you know.

          2. Hi. Do you know if audiolib can handle the creation of multiple oscillators on start-up? I have created 12, one for each note in an octave. Each of these can then handle a note-on and note-off MIDI message in order to play the notes in any combination for as long as we want.
            The problem I am seeing is the volume of each generated note is not consistent and if I introduce more than 12 oscillators on start-up, I don’t get any sound output at all.
            I am not sure of the cause of the problem, perhaps the buffer is too full. Would you be able to shed any light on this issue? Thanks.

          3. Yah, absolutely you can have multiple oscillators, but on my machine it seems to start bugging out after 4 or 5. I think pulling bytes from 12 at the same time is a little too much to handle (in my experience anyway). As I add too many, I’m definitely hearing uneven sound just based on Javascript trying to keep up.

            If I understand what you’re trying to do, I think you need to be a little clever about how many oscillators you have on at a time. If you’re only receiving a MIDI message for one note, then only pull bytes from the one oscillator. Only use multiples when you’re pressing more than one key – and even then put a limit on how many you support (if someone were to mash a bunch of keys at once).
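            A hypothetical sketch of that idea (heldNotes and MAX_VOICES are made-up names; the generate/getMix calls follow the usual generator pattern):

            // only mix oscillators for keys that are actually down, and cap the count
            var MAX_VOICES = 4;                              // tune to whatever your machine handles
            var active = heldNotes.slice(0, MAX_VOICES);     // heldNotes maintained on MIDI note-on/note-off

            for (var n = 0; n < buffer.length; n++) {
                var sample = 0;
                for (var i = 0; i < active.length; i++) {
                    active[i].generate();
                    sample += active[i].getMix();
                }
                buffer[n] = active.length ? sample / active.length : 0;   // average so the mix doesn't clip
            }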

  4. This is slightly off topic, but in your generation/note.js file, when you inherit from Oscillator, why do you loop through all of the Oscillator.prototype props? Why not just set Note.prototype = new Oscillator(…)? I’m trying to learn some js inheritance basics and your code is really throwing me for a loop (no pun intended).

    -Mike

    1. Oh man, I’m really trying to think back on this one! It could be that I was getting used to JS inheritance myself when I put this up, but I think there was a better reason. I feel like it had something to do with the Audiolib code itself. It did something kind of weird: you’d define your Oscillators, but then there was an Audiolib finalization routine where, after all your objects and plugins were defined, it would look up all the things that are Generators and tack on some additional properties via the prototype to make each an official Generator. I think I recall that typical inheritance didn’t work for some reason because of this.

      There were a few things like that with Audiolib.js that made me do things a bit… differently. Chalk it up either to me still getting used to Javascript myself, or to the Audiolib author doing things a bit differently than I’m accustomed to, so I do some wacky things when I try to follow it.

      I’m still not accustomed to avoiding function hoisting, nor do I see a reason for it; that’s what really threw me with this at first. But yah, by all means, take this with a grain of salt in regards to project architecture – definitely look for audio ideas, but don’t use this as a shining model of code!

    2. So FYI – I went back and rewrote Note some time after I did this blog post. I basically did things the same way, but tried to organize a bit better. I’m still looping through the methods to extend Note.

      As you can see, I’m avoiding copying the __CLASSCONSTRUCTOR and getMix (both are Audiolib concepts) because I’m redefining them in my class. The way Audiolib had done their constructors threw me for a loop, and I did things the best I could.

      I’ve since taken a liking to just doing canned object inheritance from a framework like jQuery or Dojo, because IMHO everybody seems to do things like this differently and you have to figure out each nuanced way.

      Anyway, here’s my revised code – again not too much different:

      audioLib.generators('Note', function (sampleRate, notation, octave) {
          // do constructor routine for Note and Oscillator
          var that = this;
          that.superconstruct();
          that.sampleRate = sampleRate;
          that.waveTable = new Float32Array(1);
          that.waveShapes = this.waveShapes.slice(0);
          that.releasePhase = false;
          that.released = false;

          // are we defining the octave separately? If so add it
          if (octave) {
              notation += octave;
          }
          that.frequency = Note.getFrequencyForNotation(notation);
      }, {
          /**
           * release key – trigger the release phase if not done
           */
          releaseKey: function () {
              if (!this.released) {
                  this.releasePhase = true;

                  if (this._envelope) {
                      this._envelope.triggerGate(false);
                  } else {
                      this.released = true;
                  }
              }
          },

          /**
           * override get mix
           */
          getMix: function () {
              // if there's no envelope, then just return the normal sound
              if (!this._envelope) {
                  return this[this.waveShape]();
              }

              var buffer = new Float32Array(1);
              this._envelope.append(buffer, 1);

              // state #5 is a timed release, so enter the release phase if here
              if (this._envelope.state === 5) {
                  this.releasePhase = true;
              }

              // if in the release phase, release key when state cycles back to 0
              if (this.releasePhase && this._envelope.state === 0) {
                  this.released = true;
                  return 0;
              }

              // if released, don't return any buffer
              if (this.released == true) {
                  return 0;
              } else {
                  return this[this.waveShape]() * buffer[0];
              }
          },

          /**
           * set envelope
           *
           * @param envelope
           */
          setEnvelope: function (value) {
              this._envelope = value;
              if (value) {
                  this._envelope.triggerGate(true);
              }
          },

          /**
           * get envelope
           *
           * @return envelope
           */
          getEnvelope: function (value) {
              return this._envelope;
          }
      });

      // extend Oscillator to note
      for (var prop in audioLib.generators.Oscillator.prototype) {
          if (prop != "getMix" &&
              prop != "__CLASSCONSTRUCTOR") {
              audioLib.generators.Note.prototype[prop] = audioLib.generators.Oscillator.prototype[prop];
          }
          audioLib.generators.Note.prototype["superconstruct"] = audioLib.generators.Oscillator.prototype["__CLASSCONSTRUCTOR"];
      }

  5. Hi Ben. I don’t know if you’re still interested in this topic, but I’m trying to use the ADSREnvelope to create a monophonic sequencer/sampler. Notes can be added in any time interval down to the length of a sequencer step, so I need to set the sustain time dynamically for each note. I’ve tried various ways, but each seems to apply the envelope to the entire sampler track rather than the individual note. So when the sequencer moves on to the next note, the previous one ramps back up and keeps resonating.

    I’d appreciate any help you can offer. Target your response for “abject noob” reading level.

    1. I still love this stuff, but keep jumping through topics so haven’t been back here for awhile. Are you saying that you took my code to do the envelope? Or are you saying that you’re writing your own stuff and having a hard time?
      If the latter – and it’s monophonic – I’d think you just better make DAMN sure that you reset the envelope with each note that’s pressed: back to time 0 when a new note is added to your sequencer. It seems like you know this, but it’s just not working for you, so this is the point where I stop and ask again whether you’re just taking my code and it’s buggy.

      With my code, it’s meant to be polyphonic (as much as I’ve tried), so if you’re using that I did put in a mechanism in the keyboard controller that tracks a list of notes that are pressed. If you keep recycling the same note and changing the frequency, then the envelope will just keep on keeping on.

      I have a method on each note called “releaseKey” that will set that envelope into the release phase and might do a good job ending your cycle.

      At this point though, I just don’t know if you’re going with my code or solo, so not sure how to advise! Good luck! Happy to help with as much as I can remember from several months back… it’s such fun stuff, I can’t blame you for asking.

      1. Thanks for the reply! I ultimately decided not to base it on your code since you seemed ambivalent about certain aspects of Jussi’s approach. Since both his approach and your reservations about it are way over my head, I thought I’d be better off picking one way and trying my best to work through it.

        I based my sequencer on this example: kindohm samples

        I ripped out the radio-button-grid UI and replaced it with a rather obscure UI based on emotional parameters that generate tonal and rhythmic properties based on some arbitrary probability distributions. Here’s the current state of the project (Firefox only):

        sequencer6

        The envelope code is disabled in this version because I never got it working properly. (You’ll see confused traces of several different approaches).

        This older example has the envelopes enabled, and has also been simplified down to a single instrument to better demonstrate the (failed) effect: sequencer5

        Tonight I found this github issue-reply from Jussi that explains a couple of things (for example, that the triggerGate() method and the sustainTime argument are mutually incompatible): https://github.com/jussi-kalliokoski/audiolib.js/issues/56

        I tried his suggestion of omitting the sustainTime argument from the ADSREnvelope function call, but the result is the same. Worse, actually – after a minute or so the sound just cuts out completely. Anyway, I think I need the sustainTime argument in order to match the length of the envelope to its note.

        I feel like I understand the basic mechanics of an ADSR envelope in principle. I get that the generate() method advances the envelope at the sample time scale. But I don’t get what the triggerGate() method does. Does it just start and stop the envelope based on the boolean passed in? And I don’t understand why some examples append the envelope to the buffer (as yours does) while others call the getMix() method as a multiplier for the sample buffer.

        1. I think basing things off Jussi’s approach would be by far the best thing to do – I’m just playing around, versus officially buckling down and doing a formal library that folks contribute to, like Jussi is.

          Your triggerGate question, if I recall correctly – basically triggerGate(false) advances to the release phase, whereas triggerGate(true) starts things over at the attack phase. So basically, if you’re pressing a key on a keyboard you are cycling through A+D+S. Sustain can be an infinite time (like a pipe organ) or can be sharp and quick like a guitar pluck. Depending on how long A+D+S is, you may be releasing that keyboard key midway through A, or midway through D. At that point you’d triggerGate(false) on the envelope – immediately jumping to the release phase to allow the note to drop off.

          You’re then indefinitely in the release phase after you’ve triggered that gate. So if you were to press your key again, you’d need to get back to the attack phase. So triggerGate(true) would get you there.

          I’m sure there’s some sort of electrical engineering/signal processing line of thinking where this makes sense.
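          In keyboard terms, a minimal sketch of wiring triggerGate up to key events (assuming a myEnvelope like the one in the post already exists) might look like:

          // key down restarts the envelope at the attack phase,
          // key up jumps it to the release phase so the note fades out
          document.addEventListener('keydown', function (e) {
              if (!e.repeat) {                       // ignore auto-repeat while the key is held
                  myEnvelope.triggerGate(true);      // back to the attack phase
              }
          });

          document.addEventListener('keyup', function () {
              myEnvelope.triggerGate(false);         // fall through to the release phase
          });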

          Anyway, just checked out your examples – pretty cool! I haven’t tried out samples yet; they sound great and look pretty quick to drop in. I’m seeing things slow down and make Firefox complain a little, though. You may want to be extremely careful about how many things you’re mixing at one time. Your question about appending to the buffer and getMix might be key to getting it working. Appending to the buffer straightaway is easier on your CPU: you create some data, dump it to the sound buffer – DONE. Mixing creates the sound you might want, but it can wreak havoc if you’re not careful.

          What you’re doing when mixing is, of course, taking several different bytestreams, averaging them out, and sending the resulting bytestream to the buffer. This is a pretty heavy operation. I’ve mixed 2 or 3 notes, but beyond that, Javascript just can’t seem to keep up. I haven’t read your code, but I’m imagining that you might not be removing notes from the mixing process. So it performs fine for those first two or three tones, but as you keep adding tones to the mix, Javascript chokes – it’s important to manage this and clean up.
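          A rough sketch of that cleanup (activeNotes is a hypothetical array of Note generators, each exposing the released flag from the post):

          // drop notes that have finished their release so they stop costing CPU
          activeNotes = activeNotes.filter(function (note) {
              return !note.released;
          });

          // average whatever is left into the output buffer
          for (var n = 0; n < buffer.length; n++) {
              var sample = 0;
              for (var i = 0; i < activeNotes.length; i++) {
                  activeNotes[i].generate();
                  sample += activeNotes[i].getMix();
              }
              buffer[n] = activeNotes.length ? sample / activeNotes.length : 0;
          }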

          Hope that helps, great work!
