The Web Audio API provides a powerful and versatile system for controlling audio on the Web, allowing developers to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects (such as panning) and much more.
Web audio concepts and usage
The Web Audio API involves handling audio operations inside an audio context, and has been designed to allow modular routing. Basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. Several sources — with different types of channel layout — are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.
Audio nodes are linked into chains and simple webs by their inputs and outputs. They typically start with one or more sources. Sources provide arrays of sound intensities (samples) at very small timeslices, often tens of thousands of them per second. These could be either computed mathematically (such as OscillatorNode), or they can be recordings from sound/video files (like MediaElementAudioSourceNode) and audio streams (MediaStreamAudioSourceNode). In fact, sound files are just recordings of sound intensities themselves, which come in from microphones or electric instruments, and get mixed down into a single, complicated wave.
Outputs of these nodes could be linked to inputs of others, which mix or modify these streams of sound samples into different streams. A common modification is multiplying the samples by a value to make them louder or quieter (as is the case with GainNode). Once the sound has been sufficiently processed for the intended effect, it can be linked to the input of a destination (AudioContext.destination), which sends the sound to the speakers or headphones. This last connection is only necessary if the user is supposed to hear the audio.
A simple, typical workflow for web audio would look something like this:
- Create audio context
- Inside the context, create sources — such as
<audio>, oscillator, stream
- Create effects nodes, such as reverb, biquad filter, panner, compressor
- Choose final destination of audio, for example your system speakers
- Connect the sources up to the effects, and the effects to the destination.
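For example, the steps above might look something like the following minimal sketch, which uses an OscillatorNode as the source and a GainNode as the only effect (in a real page you would typically create or resume the context in response to a user gesture):

```js
// Create the audio context; everything happens inside it.
const audioCtx = new AudioContext();

// Source: a simple oscillator (could equally be an <audio> element or a stream).
const oscillator = audioCtx.createOscillator();
oscillator.type = "sine";
oscillator.frequency.value = 440; // A4

// Effect: a gain node to control volume.
const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.5;

// Connect source → effect → destination (the speakers).
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);

// Start playback and stop after two seconds.
oscillator.start();
oscillator.stop(audioCtx.currentTime + 2);
```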
Timing is controlled with high precision and low latency, allowing developers to write code that responds accurately to events and is able to target specific samples, even at a high sample rate. So applications such as drum machines and sequencers are well within reach.
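As an illustrative sketch of this kind of scheduling, the snippet below queues a short sequence of beeps against the context's own clock (currentTime) rather than relying on setTimeout, so each note starts at a precise point on the audio timeline:

```js
const audioCtx = new AudioContext();

// Schedule four short beeps, one every 0.5 seconds, on the audio clock.
const startTime = audioCtx.currentTime + 0.1; // small offset to allow setup

for (let i = 0; i < 4; i++) {
  const osc = audioCtx.createOscillator();
  osc.frequency.value = 880;
  osc.connect(audioCtx.destination);

  const noteTime = startTime + i * 0.5;
  osc.start(noteTime);      // scheduled on the audio clock, not the JS event loop
  osc.stop(noteTime + 0.1); // each beep lasts 100 ms
}
```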
The Web Audio API also allows us to control how audio is spatialized. Using a system based on a source-listener model, it allows control of the panning model and deals with distance-induced attenuation or doppler shift induced by a moving source (or moving listener).
You can read about the theory of the Web Audio API in a lot more detail in our article Basic concepts behind Web Audio API.
Web Audio API target audience
The Web Audio API can seem intimidating to those who aren't familiar with audio or music terms, and as it incorporates a great deal of functionality, it can prove difficult to get started if you are a developer.
It can be used simply to incorporate audio into your website or application, providing atmosphere (as on futurelibrary.no) or auditory feedback on forms. However, it can also be used to create advanced interactive instruments. With that in mind, it is suitable for developers and musicians alike.
We have a simple introductory tutorial for those that are familiar with programming but need a good introduction to some of the terms and structure of the API.
There's also a Basic Concepts Behind Web Audio API article, to help you understand the way digital audio works, specifically in the realm of the API. This also includes a good introduction to some of the concepts the API is built upon.
Learning coding is like playing cards — you learn the rules, then you play, then you go back and learn the rules again, then you play again. So if some of the theory doesn't quite fit after the first tutorial and article, there's an advanced tutorial which extends the first one to help you practise what you've learnt, and apply some more advanced techniques to build up a step sequencer.
We also have other tutorials and comprehensive reference material available that covers all features of the API. See the sidebar on this page for more.
If you are more familiar with the musical side of things, are comfortable with music theory concepts, and want to start building instruments, then you can go ahead and start building things with the advanced tutorial and others as a guide (the above-linked tutorial covers scheduling notes, creating bespoke oscillators and envelopes, and an LFO, among other things).
Web Audio API Interfaces
The Web Audio API has a number of interfaces and associated events, which we have split up into nine categories of functionality.
General audio graph definition
General containers and definitions that shape audio graphs in Web Audio API usage.
- The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context.
- The AudioContextOptions dictionary is used to provide options when instantiating a new AudioContext.
- The AudioNode interface represents an audio-processing module like an audio source (e.g. an HTML <audio> or <video> element), audio destination, intermediate processing module (e.g. a filter like BiquadFilterNode, or volume control like GainNode).
- The AudioParam interface represents an audio-related parameter, like a parameter of an AudioNode. It can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern.
- The AudioParamMap interface provides a map-like interface to a group of AudioParam interfaces, which means it provides the methods forEach(), get(), has(), keys(), and values(), as well as a size property.
- The BaseAudioContext interface acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces.
- The ended event is fired when playback has stopped because the end of the media was reached.
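To illustrate the AudioParam interface listed above, here is one possible sketch of scheduling a value change: a two-second fade-out applied to a GainNode's gain parameter.

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();

osc.connect(gainNode);
gainNode.connect(audioCtx.destination);

// gainNode.gain is an AudioParam: set an initial value, then schedule a ramp.
const now = audioCtx.currentTime;
gainNode.gain.setValueAtTime(1, now);              // full volume immediately
gainNode.gain.linearRampToValueAtTime(0, now + 2); // fade to silence over 2 seconds

osc.start(now);
osc.stop(now + 2);
```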
Defining audio sources
Interfaces that define audio sources for use in the Web Audio API.
- The AudioScheduledSourceNode is a parent interface for several types of audio source node interfaces. It is an AudioNode.
- The OscillatorNode interface represents a periodic waveform, such as a sine or triangle wave. It is an AudioNode audio-processing module that causes a given frequency of wave to be created.
- The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or created with raw data using AudioContext.createBuffer(). Once decoded into this form, the audio can then be put into an AudioBufferSourceNode.
- The AudioBufferSourceNode interface represents an audio source consisting of in-memory audio data, stored in an AudioBuffer. It is an AudioNode that acts as an audio source.
- The MediaElementAudioSourceNode interface represents an audio source consisting of an HTML5 <audio> or <video> element. It is an AudioNode that acts as an audio source.
- The MediaStreamAudioSourceNode interface represents an audio source consisting of a WebRTC MediaStream (such as a webcam, microphone, or a stream being sent from a remote computer). It is an AudioNode that acts as an audio source.
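As a sketch of the source interfaces above, the following fetches an audio file (the URL is a placeholder), decodes it into an AudioBuffer, and plays it through an AudioBufferSourceNode:

```js
const audioCtx = new AudioContext();

async function playSample(url) {
  // Fetch and decode the file into an in-memory AudioBuffer.
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);

  // An AudioBufferSourceNode can only be played once; create a new one per playback.
  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start();
}

playSample("sample.mp3"); // hypothetical file name
```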
Defining audio effects filters
Interfaces for defining effects that you want to apply to your audio sources.
- The BiquadFilterNode interface represents a simple low-order filter. It is an AudioNode that can represent different kinds of filters, tone control devices, or graphic equalizers. A BiquadFilterNode always has exactly one input and one output.
- The ConvolverNode interface is an AudioNode that performs a Linear Convolution on a given AudioBuffer, and is often used to achieve a reverb effect.
- The DelayNode interface represents a delay-line; an AudioNode audio-processing module that causes a delay between the arrival of input data and its propagation to the output.
- The DynamicsCompressorNode interface provides a compression effect, which lowers the volume of the loudest parts of the signal in order to help prevent clipping and distortion that can occur when multiple sounds are played and multiplexed together at once.
- The GainNode interface represents a change in volume. It is an AudioNode audio-processing module that causes a given gain to be applied to the input data before its propagation to the output.
- The WaveShaperNode interface represents a non-linear distorter. It is an AudioNode that uses a curve to apply a waveshaping distortion to the signal. Besides obvious distortion effects, it is often used to add a warm feeling to the signal.
- The PeriodicWave describes a periodic waveform that can be used to shape the output of an OscillatorNode.
- The IIRFilterNode interface implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers as well.
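A minimal sketch of inserting one of the effects above — a BiquadFilterNode configured as a lowpass filter — between a source and the destination:

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();

// A lowpass biquad filter: frequencies above roughly 1 kHz are attenuated.
const filter = audioCtx.createBiquadFilter();
filter.type = "lowpass";
filter.frequency.value = 1000;
filter.Q.value = 1;

osc.connect(filter);
filter.connect(audioCtx.destination);

osc.start();
```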
Defining audio destinations
Once you are done processing your audio, these interfaces define where to output it.
- The AudioDestinationNode interface represents the end destination of an audio source in a given context — usually the speakers of your device.
- The MediaStreamAudioDestinationNode interface represents an audio destination consisting of a WebRTC MediaStream with a single AudioMediaStreamTrack, which can be used in a similar way to a MediaStream obtained from getUserMedia(). It is an AudioNode that acts as an audio destination.
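For the MediaStreamAudioDestinationNode above, one hedged sketch is to route audio into a MediaStream and record it with the MediaRecorder API, rather than sending it to the speakers:

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();

// Route the oscillator into a MediaStream instead of the speakers.
const streamDestination = audioCtx.createMediaStreamDestination();
osc.connect(streamDestination);

// The resulting stream can be recorded, or sent over a WebRTC connection.
const recorder = new MediaRecorder(streamDestination.stream);
recorder.ondataavailable = (event) => {
  console.log("Recorded chunk:", event.data); // a Blob of encoded audio
};

osc.start();
recorder.start();
setTimeout(() => {
  recorder.stop();
  osc.stop();
}, 2000);
```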
Data analysis and visualization
If you want to extract time, frequency, and other data from your audio, the
AnalyserNode is what you need.
- The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information, for the purposes of data analysis and visualization.
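A rough sketch of using the AnalyserNode for visualization: sample the time-domain waveform on each animation frame (the actual canvas drawing is left out):

```js
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

// Connect a source (here an oscillator) through the analyser to the speakers.
const osc = audioCtx.createOscillator();
osc.connect(analyser);
analyser.connect(audioCtx.destination);
osc.start();

const dataArray = new Uint8Array(analyser.frequencyBinCount);

function draw() {
  requestAnimationFrame(draw);
  analyser.getByteTimeDomainData(dataArray); // current waveform, values 0–255
  // ...draw dataArray to a <canvas> here...
}
draw();
```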
Splitting and merging audio channels
To split and merge audio channels, you'll use these interfaces.
- The ChannelSplitterNode interface separates the different channels of an audio source out into a set of mono outputs.
- The ChannelMergerNode interface reunites different mono inputs into a single output. Each input will be used to fill a channel of the output.
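A sketch of splitting a stereo source into its two channels, adjusting one of them, then merging them back into a single stereo output (it assumes an <audio> element already exists in the page):

```js
const audioCtx = new AudioContext();

// Hypothetical: an <audio> element is assumed to be present in the document.
const audioElem = document.querySelector("audio");
const source = audioCtx.createMediaElementSource(audioElem);

const splitter = audioCtx.createChannelSplitter(2); // two mono outputs
const merger = audioCtx.createChannelMerger(2);     // merges back into one stereo output
const leftGain = audioCtx.createGain();
leftGain.gain.value = 0.2; // turn the left channel down

source.connect(splitter);
splitter.connect(leftGain, 0);   // splitter output 0 (left) → gain
leftGain.connect(merger, 0, 0);  // gain → merger input 0 (left)
splitter.connect(merger, 1, 1);  // splitter output 1 (right) → merger input 1
merger.connect(audioCtx.destination);
```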
Audio spatialization
These interfaces allow you to add audio spatialization panning effects to your audio sources.
- The AudioListener interface represents the position and orientation of the unique person listening to the audio scene used in audio spatialization.
- The PannerNode interface represents the position and behavior of an audio source signal in 3D space, allowing you to create complex panning effects.
- The StereoPannerNode interface represents a simple stereo panner node that can be used to pan an audio stream left or right.
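A sketch of positioning a source in 3D space with a PannerNode; the listener stays at the default origin and the source is placed off to the listener's right:

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();

const panner = audioCtx.createPanner();
panner.panningModel = "HRTF";
panner.distanceModel = "inverse";

// Place the source 3 units to the listener's right (listener defaults to the origin).
panner.positionX.value = 3;
panner.positionY.value = 0;
panner.positionZ.value = 0;

osc.connect(panner);
panner.connect(audioCtx.destination);
osc.start();
```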
Audio processing in JavaScript
Audio worklets implement the Worklet interface, a lightweight version of the Worker interface. As of January 2018, audio worklets are available in Chrome 64 behind a flag.
- The AudioWorklet interface is available via BaseAudioContext.audioWorklet and allows you to add new modules to the audio worklet.
- The AudioWorkletNode interface represents an AudioNode that is embedded into an audio graph and can pass messages to the corresponding AudioWorkletProcessor.
- The AudioWorkletProcessor interface represents audio processing code running in an AudioWorkletGlobalScope that generates, processes, or analyses audio directly, and can pass messages to the corresponding AudioWorkletNode.
- The AudioWorkletGlobalScope interface is a WorkletGlobalScope-derived object representing a worker context in which an audio processing script is run; it is designed to enable the generation, processing, and analysis of audio data directly using JavaScript in a worklet thread.
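A hedged sketch of the worklet pattern described above, assuming a hypothetical module file noise-processor.js that registers a processor filling its output with white noise:

```js
// --- noise-processor.js (hypothetical worklet module file) ---
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const output = outputs[0];
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1; // white noise sample in [-1, 1]
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor("noise-processor", NoiseProcessor);

// --- main thread ---
async function startNoise() {
  const audioCtx = new AudioContext();
  await audioCtx.audioWorklet.addModule("noise-processor.js");
  const noiseNode = new AudioWorkletNode(audioCtx, "noise-processor");
  noiseNode.connect(audioCtx.destination);
}
```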
Before audio worklets were defined, the Web Audio API used the ScriptProcessorNode for JavaScript-based audio processing. The ScriptProcessorNode is kept for historic reasons but is marked as deprecated and will be removed in a future version of the specification.
- The ScriptProcessorNode interface is an AudioNode audio-processing module that is linked to two buffers, one containing the current input, one containing the output. An event, implementing the AudioProcessingEvent interface, is sent to the object each time the input buffer contains new data, and the event handler terminates when it has filled the output buffer with data.
- The audioprocess event is fired when an input buffer of a Web Audio API ScriptProcessorNode is ready to be processed.
- The Web Audio API AudioProcessingEvent represents events that occur when a ScriptProcessorNode input buffer is ready to be processed.
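For completeness, a sketch of the deprecated ScriptProcessorNode pattern: copying the input buffer to the output inside the audioprocess handler (new code should prefer audio worklets):

```js
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();

// 4096-sample buffer, one input channel, one output channel. Deprecated API.
const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);

scriptNode.onaudioprocess = (event) => {
  const input = event.inputBuffer.getChannelData(0);
  const output = event.outputBuffer.getChannelData(0);
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i]; // pass-through; real code would process samples here
  }
};

osc.connect(scriptNode);
scriptNode.connect(audioCtx.destination);
osc.start();
```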
Offline/background audio processing
It is possible to process/render an audio graph very quickly in the background — rendering it to an
AudioBuffer rather than to the device's speakers — with the following.
- The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from linked-together AudioNodes. In contrast with a standard AudioContext, an OfflineAudioContext doesn't really render the audio but rather generates it, as fast as it can, in a buffer.
- The complete event is fired when the rendering of an OfflineAudioContext is terminated.
- The OfflineAudioCompletionEvent represents events that occur when the processing of an OfflineAudioContext is terminated. The complete event implements this interface.
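A sketch of offline rendering: render two seconds of a 440 Hz tone into an AudioBuffer as fast as possible, rather than playing it live:

```js
// 2 channels, 2 seconds of audio, at a 44.1 kHz sample rate.
const offlineCtx = new OfflineAudioContext(2, 44100 * 2, 44100);

const osc = offlineCtx.createOscillator();
osc.frequency.value = 440;
osc.connect(offlineCtx.destination);
osc.start();

// Rendering happens as fast as possible and resolves with an AudioBuffer.
offlineCtx.startRendering().then((renderedBuffer) => {
  console.log("Rendered", renderedBuffer.duration, "seconds of audio");
  // The buffer could now be played back through a normal AudioContext.
});
```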
The following interfaces were defined in old versions of the Web Audio API spec, but are now obsolete and have been replaced by other interfaces.
- Used to define a periodic waveform. This interface is obsolete, and has been replaced by PeriodicWave.
You can find a number of examples at our webaudio-example repo on GitHub.
| Specification | Status |
| --- | --- |
| Web Audio API | Working Draft |
| Feature | Chrome | Edge | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit) |
| --- | --- | --- | --- | --- | --- | --- |
| Basic support | 14 webkit | (Yes) | 23 | No support | 15 webkit | |

| Feature | Android | Chrome | Edge | Firefox Mobile (Gecko) | Firefox OS | IE Phone | Opera Mobile | Safari Mobile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Basic support | No support | 28 webkit | (Yes) | 25 | 1.2 | No support | No support | 6 webkit |
- Basic concepts behind Web Audio API
- Using the Web Audio API
- Advanced techniques: creating sound, sequencing, timing, scheduling
- Using IIR filters
- Visualizations with Web Audio API
- Web audio spatialisation basics
- Controlling multiple parameters with ConstantSourceNode
- Mixing Positional Audio and WebGL
- Developing Game Audio with the Web Audio API
- Porting webkitAudioContext code to standards based AudioContext
- Tones: a simple library for playing specific tones/notes using the Web Audio API.
- Tone.js: a framework for creating interactive music in the browser.
- howler.js: a JS audio library that defaults to Web Audio API and falls back to HTML5 Audio, as well as providing other useful features.
- Mooog: jQuery-style chaining of AudioNodes, mixer-style sends/returns, and more.
- XSound: Web Audio API Library for Synthesizer, Effects, Visualization, Recording ... etc
- OpenLang: HTML5 video language lab web application using the Web Audio API to record and combine video and audio from different sources into a single file (source on GitHub)