The decodeAudioData() method of the AudioContext interface is used to asynchronously decode audio file data contained in an ArrayBuffer, usually loaded from XMLHttpRequest or FileReader. The decoded AudioBuffer is resampled to the AudioContext's sampling rate, then passed to a callback or promise.

This is the preferred method of creating an audio source for the Web Audio API from an audio track.
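For context, a common way to obtain such an ArrayBuffer is to request the file over the network; a minimal sketch using the Fetch API (the loadTrack helper name and the 'track.ogg' URL are illustrative, not part of the API):

```javascript
// Sketch: fetch an audio file as an ArrayBuffer and decode it.
// loadTrack and the URL are illustrative names, not part of the Web Audio API.
function loadTrack(audioCtx, url) {
  return fetch(url)
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(arrayBuffer) { return audioCtx.decodeAudioData(arrayBuffer); });
}

// Usage (in a page): loadTrack(audioCtx, 'track.ogg').then(function(buffer) { ... });
```

The returned promise resolves with the decoded AudioBuffer, exactly as with the promise-based syntax covered below.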


Older callback syntax:

audioCtx.decodeAudioData(audioData, function(decodedData) {
  // use the decoded data here
});

Newer promise-based syntax:

audioCtx.decodeAudioData(audioData).then(function(decodedData) {
  // use the decoded data here
});


In this section we will first cover the older callback-based system and then the newer promise-based syntax.

Older callback syntax

In this example, the getData() function uses XHR to load an audio track, setting the responseType of the request to arraybuffer so that it returns an array buffer as its response, which we then store in the audioData variable. We then pass this buffer into a decodeAudioData() call; the success callback takes the successfully decoded PCM data, puts it into an AudioBufferSourceNode created using AudioContext.createBufferSource(), connects the source to the AudioContext.destination, and sets it to loop.

The buttons in the example simply run getData() to load the track and start it playing, and stop it playing, respectively. When the stop() method is called on the source, the source is cleared out.

Note: You can run the example live (or view the source).

// define variables

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;

var pre = document.querySelector('pre');
var myScript = document.querySelector('script');
var play = document.querySelector('.play');
var stop = document.querySelector('.stop');

// use XHR to load an audio track, and
// decodeAudioData to decode it and stick it in a buffer.
// Then we put the buffer into the source

function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();

  request.open('GET', 'viper.ogg', true);

  request.responseType = 'arraybuffer';

  request.onload = function() {
    var audioData = request.response;

    audioCtx.decodeAudioData(audioData, function(buffer) {
        source.buffer = buffer;

        source.connect(audioCtx.destination);
        source.loop = true;
      },

      function(e) { console.error("Error with decoding audio data" + e.err); });
  }

  request.send();
}

// wire up buttons to stop and play audio

play.onclick = function() {
  getData();
  source.start(0);
  play.setAttribute('disabled', 'disabled');
}

stop.onclick = function() {
  source.stop(0);
  play.removeAttribute('disabled');
}

// dump script to pre element

pre.innerHTML = myScript.innerHTML;

New promise-based syntax

ctx.decodeAudioData(compressedBuffer).then(function(decodedData) {
  // use the decoded data here
});

Parameters

audioData
An ArrayBuffer containing the audio data to be decoded, usually grabbed from XMLHttpRequest or FileReader.

successCallback
A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.

errorCallback
An optional error callback, to be invoked if an error occurs when the audio data is being decoded.
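Since FileReader is the usual way to get an ArrayBuffer from a local file, decoding a user-selected File might look like this sketch (decodeFile is an illustrative helper name, and the promise wrapper around the two callbacks is ours):

```javascript
// Sketch: decode a File or Blob (e.g. from an <input type="file"> element)
// by reading it into an ArrayBuffer first. decodeFile is an illustrative name.
function decodeFile(ctx, file) {
  return new Promise(function(resolve, reject) {
    var reader = new FileReader();
    reader.onload = function() {
      ctx.decodeAudioData(reader.result, resolve, reject);
    };
    reader.onerror = reject;
    reader.readAsArrayBuffer(file);
  });
}
```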


Returns

A Promise object.


Specification | Status | Comment
Web Audio API | Working Draft | The definition of 'decodeAudioData()' in that specification.

Browser compatibility

Feature | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit)
Basic support | 10.0 (-webkit) | 25.0 (25.0) | No support | 15.0 (-webkit), 22 (unprefixed) |
Promise-based syntax | 49.0 | (Yes) | No support | (Yes) | No support

Feature | Android | Android Webview | Firefox Mobile (Gecko) | Firefox OS | IE Mobile | Opera Mobile | Safari Mobile | Chrome for Android
Basic support | ? | (Yes) | 26.0 | 1.2 | ? | ? | ? | 33.0
Promise-based syntax | ? | 49.0 | (Yes) | (Yes) | No support | ? | ? | 49.0

Last updated by: Taoja