
    AudioProcessingEvent


    The Web Audio API AudioProcessingEvent interface represents events that occur when a ScriptProcessorNode input buffer is ready to be processed.

    Note: As of the August 29, 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is slated to be replaced by Audio Workers.

    Properties

    The list below includes the properties inherited from its parent, Event.

    target (read-only, EventTarget)
        The event target (the topmost target in the DOM tree).
    type (read-only, DOMString)
        The type of event.
    bubbles (read-only, boolean)
        Whether the event normally bubbles.
    cancelable (read-only, boolean)
        Whether it is possible to cancel the event.
    playbackTime (read-only, double)
        The time when the audio will be played, as defined by the time of AudioContext.currentTime.
    inputBuffer (read-only, AudioBuffer)
        The buffer containing the input audio data to be processed. The number of channels is defined by the numberOfInputChannels parameter of the factory method AudioContext.createScriptProcessor(). Note that the returned AudioBuffer is only valid in the scope of the onaudioprocess function.
    outputBuffer (read-only, AudioBuffer)
        The buffer where the output audio data should be written. The number of channels is defined by the numberOfOutputChannels parameter of the factory method AudioContext.createScriptProcessor(). Note that the returned AudioBuffer is only valid in the scope of the onaudioprocess function.
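
    To illustrate how these properties fit together, here is a minimal sketch of an onaudioprocess handler that passes audio through unchanged. The handler name is ours, not part of the API, and because the input and output AudioBuffers are only valid inside the callback, all reads and writes happen within it:

    ```javascript
    // Minimal pass-through handler for an AudioProcessingEvent.
    // inputBuffer and outputBuffer are AudioBuffers; playbackTime is the
    // AudioContext.currentTime-based time at which this block will be heard.
    function handleAudioProcess(event) {
      for (var channel = 0; channel < event.outputBuffer.numberOfChannels; channel++) {
        var inputData = event.inputBuffer.getChannelData(channel);
        var outputData = event.outputBuffer.getChannelData(channel);
        outputData.set(inputData); // copy input samples straight to the output
      }
      // Returned here only so the value is easy to inspect;
      // a real handler returns nothing.
      return event.playbackTime;
    }
    ```

    In a real page this would be assigned with scriptNode.onaudioprocess = handleAudioProcess;.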

    Example

    The following example shows basic usage of a ScriptProcessorNode: a track loaded via AudioContext.decodeAudioData is processed by adding a bit of white noise to each audio sample of the input track (buffer), then played through the AudioDestinationNode. The scriptNode.onaudioprocess function receives the associated AudioProcessingEvent and uses it to loop through each channel of the input buffer, and each sample within each channel, adding a small amount of white noise before writing the result to the corresponding output sample.

    Note: For a full working example, see our script-processor-node GitHub repo (also view the source code).

    var myScript = document.querySelector('script');
    var myPre = document.querySelector('pre');
    var playButton = document.querySelector('button');
          
    // Create AudioContext and buffer source
    var audioCtx = new AudioContext();
    var source = audioCtx.createBufferSource();
    
    // Create a ScriptProcessorNode with a bufferSize of 4096 and a single input and output channel
    var scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
    console.log(scriptNode.bufferSize);
    
    // load in an audio track via XHR and decodeAudioData
    
    function getData() {
      var request = new XMLHttpRequest();
      request.open('GET', 'viper.ogg', true);
      request.responseType = 'arraybuffer';
      request.onload = function() {
        var audioData = request.response;
    
        audioCtx.decodeAudioData(audioData, function(buffer) {
          source.buffer = buffer;
        },
        function(e) {
          console.error('Error with decoding audio data: ' + e.err);
        });
      }
      request.send();
    }
    
    // Give the node a function to process audio events
    scriptNode.onaudioprocess = function(audioProcessingEvent) {
      // The input buffer is the song we loaded earlier
      var inputBuffer = audioProcessingEvent.inputBuffer;
    
      // The output buffer contains the samples that will be modified and played
      var outputBuffer = audioProcessingEvent.outputBuffer;
    
      // Loop through the output channels (in this case there is only one)
      for (var channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
        var inputData = inputBuffer.getChannelData(channel);
        var outputData = outputBuffer.getChannelData(channel);
    
        // Loop through the 4096 samples
        for (var sample = 0; sample < inputBuffer.length; sample++) {
          // make output equal to the same as the input
          outputData[sample] = inputData[sample];
    
          // add noise to each output sample
          outputData[sample] += ((Math.random() * 2) - 1) * 0.2;         
        }
      }
    }
    
    getData();
    
    // wire up play button
    playButton.onclick = function() {
      source.connect(scriptNode);
      scriptNode.connect(audioCtx.destination);
      source.start();
    }
          
    // When the buffer source stops playing, disconnect everything
    source.onended = function() {
      source.disconnect(scriptNode);
      scriptNode.disconnect(audioCtx.destination);
    }
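
    The per-sample noise step inside onaudioprocess above can also be pulled out into a pure helper, which makes it easy to reason about (and test) outside of an audio graph. The function name here is our own, not part of the example or the API:

    ```javascript
    // Copy input samples to output, adding white noise scaled by `amount`.
    // Operates on plain Float32Arrays, as returned by AudioBuffer.getChannelData().
    function copyWithNoise(inputData, outputData, amount) {
      for (var sample = 0; sample < inputData.length; sample++) {
        // Math.random() * 2 - 1 is uniformly distributed in [-1, 1)
        outputData[sample] = inputData[sample] + (Math.random() * 2 - 1) * amount;
      }
    }
    ```

    Inside onaudioprocess this would be called once per channel, e.g. copyWithNoise(inputData, outputData, 0.2), matching the 0.2 noise level used in the example.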
    

    Specifications

    Specification: Web Audio API, the definition of 'AudioProcessingEvent'
    Status: Working Draft

    Browser compatibility

    Desktop (Basic support):
    Chrome: 10.0 (webkit-prefixed)
    Firefox (Gecko): 25.0 (25.0)
    Internet Explorer: Not supported
    Opera: 15.0 (webkit-prefixed), 22 (unprefixed)
    Safari (WebKit): 6.0 (webkit-prefixed)

    Mobile (Basic support):
    Android: ?
    Firefox Mobile (Gecko): 26.0
    Firefox OS: 1.2
    IE Mobile: ?
    Opera Mobile: ?
    Safari Mobile: ?
    Chrome for Android: 33.0

    Document Tags and Contributors

    Contributors to this page: Sheppy, fscholz, teoli, chrisdavidmills, kscarfone
    Last updated by: fscholz