Understanding the Web Audio API

Learn more about the Web Audio API, an API that lets us create and manage sounds in the browser with very little code.

Web Audio API

Let’s start with the basics about the Web Audio API. This is how the API works:

  1. The Web Audio API has a main audio context.
  2. Inside that audio context, we can handle and manage our audio operations. The audio operations are handled by audio nodes.
  3. We can have a lot of different audio nodes inside the same audio context, allowing us to create some nice things such as drum kits, synthesizers, etc.

Let’s create our first audio context using the Web Audio API and start to make some noise in our browser. This is how you can create an audio context:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();
 

The audio context is an object that holds everything related to audio. It’s not a good idea to have more than one audio context in your project—that can cause you a lot of trouble down the road.
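One common way to honor the one-context rule is to create the context lazily and share it. This is a sketch of ours, not part of the original article—the function name is an assumption:

```javascript
// Sketch: lazily create a single shared AudioContext and reuse it
// everywhere, so the app never ends up with more than one.
let sharedContext = null;

function getAudioContext() {
  if (sharedContext === null) {
    // webkitAudioContext covers older Safari versions.
    sharedContext = new (window.AudioContext || window.webkitAudioContext)();
  }
  return sharedContext;
}
```

Any module that needs audio then calls `getAudioContext()` instead of constructing its own context.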

 

The Web Audio API has an interface called OscillatorNode. This interface represents a periodic waveform, such as a sine wave. Let’s use this interface to create some sound.

Now that our audioContext const holds the audio context, let’s create a new const called mySound by calling the createOscillator method on audioContext, like this:

const mySound = audioContext.createOscillator();
 

We’ve created our OscillatorNode; now we need to start mySound, like this:

mySound.start();
 

But, as you can hear, nothing is playing in your browser. Why? We created the oscillator, but we never connected it to an output. An audio node only produces audible sound once it is connected—directly or through other nodes—to the audio context’s destination property, which represents your speakers. Otherwise, the sound has nowhere to go.

So, on the mySound const, call the connect method and pass audioContext.destination, like this:

mySound.connect(audioContext.destination);
 

Now we’re using the Web Audio API to create sounds in the browser with just a few lines of code.
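The steps above can be put together as one function. This is a sketch—the function name and duration parameter are ours, not from the article. Wrapping the code in a function also matters in practice, because most browsers only allow audio to start in response to a user gesture:

```javascript
// Sketch: the full sequence from the article — create a context,
// create an oscillator, connect it to the speakers, and start it.
function playBeep(durationSeconds) {
  const audioContext = new (window.AudioContext || window.webkitAudioContext)();
  const mySound = audioContext.createOscillator();
  mySound.connect(audioContext.destination);
  mySound.start();
  // Schedule the oscillator to stop after the given duration.
  mySound.stop(audioContext.currentTime + durationSeconds);
}
```

You could trigger it from a user interaction, e.g. `document.addEventListener("click", () => playBeep(1), { once: true });`.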

Properties

The OscillatorNode has some properties, such as type. The type property specifies the waveform that we want our OscillatorNode to output. There are five possible values: sine (the default), square, sawtooth, triangle and custom.

To change the type of our OscillatorNode, all we need to do is assign a new value to the type property of mySound, like this:

mySound.type = "square";
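One caveat about the custom value: you can’t assign it to type directly (doing so throws an error). Instead, you call setPeriodicWave with a PeriodicWave built from harmonic coefficients, and the type becomes "custom" for you. A minimal sketch, assuming the audioContext and mySound consts from the snippets above—the function name and coefficient values are ours:

```javascript
// Sketch: set a custom waveform via setPeriodicWave instead of
// assigning type = "custom" directly.
function applyCustomWave(audioContext, oscillator) {
  // real/imag hold cosine and sine coefficients for the harmonics;
  // index 0 is the DC offset, index 1 the fundamental, and so on.
  const real = new Float32Array([0, 1, 0.5]);
  const imag = new Float32Array([0, 0, 0]);
  const wave = audioContext.createPeriodicWave(real, imag);
  oscillator.setPeriodicWave(wave); // oscillator.type is now "custom"
}
```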
 

The OscillatorNode also has another property called frequency. This property controls the frequency of the oscillation, in hertz.

To change the frequency of our OscillatorNode, we access the frequency property and call the setValueAtTime function. This function receives two arguments: the value in hertz and the time at which to apply it—passing audioContext.currentTime applies the change immediately. We can use it like this:

mySound.frequency.setValueAtTime(400, audioContext.currentTime);
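The hertz value doesn’t have to be hard-coded. As an illustration—this helper is ours, not part of the API—musical notes map to frequencies with the equal-temperament formula:

```javascript
// Hypothetical helper: convert a MIDI note number to hertz using
// twelve-tone equal temperament, where A4 (MIDI note 69) is 440 Hz.
function noteToFrequency(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// noteToFrequency(69) → 440 (A4); noteToFrequency(81) → 880 (A5).
```

You could then write `mySound.frequency.setValueAtTime(noteToFrequency(69), audioContext.currentTime);` to play a concert A.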
 

By using the Web Audio API, we can now manage audio in the browser pretty easily, but if you want to use this API to build something more complex and powerful, you’ll probably want a library for it.
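Before reaching for a library, note that the node-graph idea from the start of the article already goes a bit further on its own: nodes can be chained. A sketch (the function name and volume level are ours) that routes an oscillator through a GainNode to lower the volume:

```javascript
// Sketch: chaining several nodes in one context —
// oscillator -> gain -> destination — to play a tone at 20% volume.
function playQuietTone(audioContext) {
  const oscillator = audioContext.createOscillator();
  const gainNode = audioContext.createGain();
  gainNode.gain.setValueAtTime(0.2, audioContext.currentTime); // 20% volume
  oscillator.connect(gainNode);
  gainNode.connect(audioContext.destination);
  oscillator.start();
  return oscillator; // the caller can stop() it later
}
```

Once graphs grow to drum kits or synthesizers with many such chains, a library starts to pay off.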

 

Read the entire article, including the details on how to use Howler, at https://www.telerik.com/blogs/understanding-web-audio-api