AudioContext.createScriptProcessor()


The createScriptProcessor() method of the AudioContext interface creates a ScriptProcessorNode used for direct audio processing via JavaScript.

Syntax

var audioCtx = new AudioContext();
myScriptProcessor = audioCtx.createScriptProcessor(1024, 1, 1);

Parameters

bufferSize
The buffer size, in sample-frames. If specified, it must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. If it is not passed in, or if the value is 0, the implementation will pick the buffer size best suited to the environment, a constant power of two that stays the same for the whole lifetime of the node.
This value controls how frequently the audioprocess event is dispatched and how many sample-frames are processed in each call. Lower values for bufferSize result in lower (better) latency; higher values are needed to avoid audio breakup and glitches. It is recommended that authors not specify a buffer size and instead let the implementation pick a value that balances latency against audio quality (see the short sketch after the notes below).
numberOfInputChannels
An integer specifying the number of channels for this node's input. The default is 2, and values of up to 32 are supported.
numberOfOutputChannels
An integer specifying the number of channels for this node's output. The default is 2, and values of up to 32 are supported.

Important: WebKit (version 31) currently requires that a valid bufferSize be passed when calling this method.

Note: It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.
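
For instance, here is a minimal sketch (assuming an AudioContext named audioCtx already exists) that passes 0 so the implementation chooses the buffer size, then reads the chosen value back from the node's bufferSize property:

// Let the implementation pick a suitable buffer size by passing 0.
// Note the WebKit caveat above: builds that require an explicit
// bufferSize may not accept this.
var autoNode = audioCtx.createScriptProcessor(0, 1, 1);

// The chosen size is a power of two (256 to 16384) and stays constant
// for the lifetime of the node.
console.log('Chosen buffer size: ' + autoNode.bufferSize);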

Returns

A ScriptProcessorNode.

Example

The following example shows basic usage of a ScriptProcessorNode: a track is loaded via AudioContext.decodeAudioData, a little white noise is added to each audio sample, and the result is played through the AudioDestinationNode (effectively, the system's speakers). For each channel and each sample frame, the scriptNode.onaudioprocess handler takes the associated audioProcessingEvent and uses it to loop through each channel of the input buffer, and each sample within each channel, adding a small amount of white noise before writing the result to the corresponding output sample.

Note: For a complete example, see the script-processor-node demo on GitHub (or view the source code).

var myScript = document.querySelector('script');
var myPre = document.querySelector('pre');
var playButton = document.querySelector('button');
      
// Create AudioContext and buffer source
var audioCtx = new AudioContext();
var source = audioCtx.createBufferSource();

// Create a ScriptProcessorNode with a bufferSize of 4096 and a single input and output channel
var scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);
console.log(scriptNode.bufferSize);

// load in an audio track via XHR and decodeAudioData

function getData() {
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;

    audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
    },
    function(e) {
      console.error("Error with decoding audio data " + e.err);
    });
  }
  request.send();
}

// Give the node a function to process audio events
scriptNode.onaudioprocess = function(audioProcessingEvent) {
  // The input buffer is the song we loaded earlier
  var inputBuffer = audioProcessingEvent.inputBuffer;

  // The output buffer contains the samples that will be modified and played
  var outputBuffer = audioProcessingEvent.outputBuffer;

  // Loop through the output channels (in this case there is only one)
  for (var channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
    var inputData = inputBuffer.getChannelData(channel);
    var outputData = outputBuffer.getChannelData(channel);

    // Loop through the 4096 samples
    for (var sample = 0; sample < inputBuffer.length; sample++) {
      // make output equal to the same as the input
      outputData[sample] = inputData[sample];

      // add noise to each output sample
      outputData[sample] += ((Math.random() * 2) - 1) * 0.2;         
    }
  }
}

getData();

// wire up play button
playButton.onclick = function() {
  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();
}
      
// When the buffer source stops playing, disconnect everything
source.onended = function() {
  source.disconnect(scriptNode);
  scriptNode.disconnect(audioCtx.destination);
}

Specifications

Specification: Web Audio API (the definition of 'createScriptProcessor')
Status: Working Draft

Browser compatibility

Desktop (Basic support)
  Chrome: 10.0 (webkit prefixed), 22 (unprefixed)
  Edge: Yes
  Firefox (Gecko): 25.0 (25.0)
  Internet Explorer: No support
  Opera: 15.0 (webkit prefixed)
  Safari (WebKit): 6.0 (webkit prefixed)

Mobile (Basic support)
  Android: ?
  Edge: Yes
  Firefox Mobile (Gecko): 26.0
  Firefox OS: 1.2
  IE Mobile: ?
  Opera Mobile: ?
  Safari Mobile: ?
  Chrome for Android: 33.0

See also
