The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls both the creation of the nodes it contains and the execution of the audio processing or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context.

Because an AudioContext can be a target of events, it implements the EventTarget interface.

Properties

AudioContext.currentTime Read only
Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0.
AudioContext.destination Read only
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
AudioContext.listener Read only
Returns the AudioListener object, used for 3D spatialization.
AudioContext.sampleRate Read only
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample-rate of an AudioContext cannot be changed.
AudioContext.state Read only
Returns the current state of the AudioContext: "suspended", "running", or "closed". A short sketch of these properties follows this list.
AudioContext.mozAudioChannelType Read only
Returns the audio channel that sound played in the AudioContext will play in, on a Firefox OS device. Non-standard.
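
The following is a minimal sketch of reading these properties, assuming a browser that supports the unprefixed AudioContext constructor:

var audioCtx = new AudioContext();

console.log(audioCtx.sampleRate);  // e.g. 44100 or 48000; fixed for the lifetime of the context
console.log(audioCtx.currentTime); // seconds of elapsed context time, starting at 0
console.log(audioCtx.state);       // "suspended", "running", or "closed"
console.log(audioCtx.destination); // the AudioDestinationNode for this context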

Event handlers

AudioContext.onstatechange
An event handler that runs when an event of type statechange has fired. This occurs when the AudioContext's state changes, due to the calling of one of the state-change methods (AudioContext.suspend(), AudioContext.resume(), or AudioContext.close()). A minimal sketch follows.
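
A minimal sketch of reacting to state changes, assuming an unprefixed AudioContext; suspend() and resume() each fire a statechange event when they take effect:

var audioCtx = new AudioContext();

audioCtx.onstatechange = function() {
  console.log(audioCtx.state); // logs the new state: "suspended", "running", or "closed"
};

audioCtx.suspend(); // halts audio hardware access; fires statechange
audioCtx.resume();  // restarts the progression of time; fires statechange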

Methods

Also implements methods from the interface EventTarget.

AudioContext.close()
Closes the audio context, releasing any system audio resources that it uses.
AudioContext.createBuffer()
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.
AudioContext.createBufferSource()
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.
AudioContext.createMediaElementSource()
Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements (a short sketch appears in the Examples section below).
AudioContext.createMediaStreamSource()
Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.
AudioContext.createMediaStreamDestination()
Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.
AudioContext.createScriptProcessor()
Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.
AudioContext.createStereoPanner()
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
AudioContext.createAnalyser()
Creates an AnalyserNode, which can be used to expose audio time- and frequency-domain data, for example to create data visualisations.
AudioContext.createBiquadFilter()
Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
AudioContext.createChannelMerger()
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
AudioContext.createChannelSplitter()
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
AudioContext.createConvolver()
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
AudioContext.createDelay()
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
AudioContext.createDynamicsCompressor()
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
AudioContext.createGain()
Creates a GainNode, which can be used to control the overall volume of the audio graph.
AudioContext.createOscillator()
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
AudioContext.createPanner()
Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.
AudioContext.createPeriodicWave()
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
AudioContext.createWaveShaper()
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
AudioContext.createAudioWorker()
Creates an AudioWorkerNode, which can interact with a web worker thread to generate, process, or analyse audio directly. This was added to the spec on August 29, 2014, and is not yet implemented in any browser.
AudioContext.decodeAudioData()
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files. A usage sketch combining it with createBufferSource() appears after this list.
AudioContext.resume()
Resumes the progression of time in an audio context that has previously been suspended.
AudioContext.suspend()
Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
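
As a sketch of how several of these methods combine, the following loads a file over XHR, decodes it with decodeAudioData(), and plays it through a GainNode; the file name viper.ogg is a placeholder, and the callback form of decodeAudioData() is used:

var audioCtx = new AudioContext();
var request = new XMLHttpRequest();

request.open('GET', 'viper.ogg', true); // placeholder file name
request.responseType = 'arraybuffer';   // so request.response is an ArrayBuffer

request.onload = function() {
  audioCtx.decodeAudioData(request.response, function(decodedBuffer) {
    var source = audioCtx.createBufferSource(); // plays the decoded AudioBuffer
    var gainNode = audioCtx.createGain();       // controls the overall volume
    source.buffer = decodedBuffer;
    source.connect(gainNode);
    gainNode.connect(audioCtx.destination);     // route to the audio-rendering device
    source.start(0);
  }, function() {
    console.error('decodeAudioData failed');
  });
};

request.send();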

Obsolete methods

AudioContext.createJavaScriptNode()
Creates a JavaScriptNode, used for direct audio processing via JavaScript. This method is obsolete, and has been replaced by AudioContext.createScriptProcessor().
AudioContext.createWaveTable()
Creates a WaveTableNode, used to define a periodic waveform. This method is obsolete, and has been replaced by AudioContext.createPeriodicWave().

Examples

Basic audio context declaration:

var audioCtx = new AudioContext();

Cross-browser variant:

var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

var oscillatorNode = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
var finish = audioCtx.destination;

// Connect the graph: oscillator -> gain -> speakers, then start the tone
oscillatorNode.connect(gainNode);
gainNode.connect(finish);
oscillatorNode.start();
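
A further sketch, assuming the page contains an <audio> element with the (hypothetical) id "player", routes the element's output through the context using createMediaElementSource():

var audioElement = document.getElementById('player'); // hypothetical element id
var sourceNode = audioCtx.createMediaElementSource(audioElement);
sourceNode.connect(audioCtx.destination); // the element's audio now plays through the graph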

Specifications

Specification: Web Audio API (the definition of 'AudioContext' in that specification)
Status: Working Draft

Browser compatibility

Desktop

Feature                                   | Chrome                 | Firefox (Gecko) | Internet Explorer | Opera                  | Safari (WebKit)
Basic support                             | 10.0 (webkit prefix), 35 | 25.0 (25.0)   | Not supported     | 15.0 (webkit prefix), 22 | 6.0 (webkit prefix)
createStereoPanner()                      | 42.0                   | 37.0 (37.0)     | Not supported     | Not supported          | Not supported
onstatechange, state, suspend(), resume() | (Yes)                  | 40.0 (40.0)     | Not supported     | Not supported          | Not supported

Mobile

Feature                                   | Android       | Firefox Mobile (Gecko) | Firefox OS | IE Mobile     | Opera Mobile  | Safari Mobile | Chrome for Android
Basic support                             | Not supported | 37.0 (37.0)            | 2.2        | Not supported | Not supported | Not supported | (Yes)
createStereoPanner()                      | Not supported | (Yes)                  | (Yes)      | Not supported | Not supported | Not supported | (Yes)
onstatechange, state, suspend(), resume() | Not supported | (Yes)                  | (Yes)      | Not supported | Not supported | Not supported | (Yes)
