AudioContext


The AudioContext interface represents an audio-processing graph built from several audio modules linked together. Each of these modules is a node (AudioNode). An AudioContext controls both the creation of the nodes it contains and the execution of the audio processing or decoding. The first step is always to create an audio context, because everything happens inside such a context.

An AudioContext can be a target of events; therefore it also supports the EventTarget interface.

Properties

AudioContext.currentTime Read only
Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0.
AudioContext.destination Read only
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
AudioContext.listener Read only
Returns the AudioListener object, used for 3D spatialization.
AudioContext.sampleRate Read only
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample-rate of an AudioContext cannot be changed.
AudioContext.state Read only
Returns the current state of the AudioContext.
AudioContext.mozAudioChannelType Read only
Used to return the audio channel that the sound playing in an AudioContext will play in, on a Firefox OS device.
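
A minimal sketch reading these properties, assuming a browser that exposes the unprefixed AudioContext (mozAudioChannelType is Firefox OS-specific and omitted here):

var audioCtx = new AudioContext();
console.log(audioCtx.sampleRate);  // e.g. 44100; fixed for the lifetime of the context
console.log(audioCtx.currentTime); // seconds since creation, ever-increasing
console.log(audioCtx.state);       // "suspended", "running" or "closed"
console.log(audioCtx.destination); // the AudioDestinationNode, i.e. the rendering device
console.log(audioCtx.listener);    // the AudioListener used for 3D spatialization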

Event handlers

AudioContext.onstatechange
An event handler that runs when an event of type statechange has fired. This occurs when the AudioContext's state changes, due to the calling of one of the state change methods (AudioContext.suspend(), AudioContext.resume(), or AudioContext.close()).
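
A short sketch of reacting to state changes, triggered here via suspend() and resume(); an audioCtx created as in the Examples section below is assumed:

audioCtx.onstatechange = function() {
  console.log('State is now: ' + audioCtx.state);
};

audioCtx.suspend().then(function() { // both methods return a Promise
  return audioCtx.resume();
});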

Methods

Also implements methods from the interface EventTarget. (A combined usage sketch follows the list below.)

AudioContext.close()
Closes the audio context, releasing any system audio resources that it uses.
AudioContext.createBuffer()
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.
AudioContext.createBufferSource()
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.
AudioContext.createMediaElementSource()
Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.
AudioContext.createMediaStreamSource()
Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.
AudioContext.createMediaStreamDestination()
Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.
AudioContext.createScriptProcessor()
Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.
AudioContext.createStereoPanner()
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
AudioContext.createAnalyser()
Creates an AnalyserNode, which can be used to expose audio time and frequency data and for example to create data visualisations.
AudioContext.createBiquadFilter()
Creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
AudioContext.createChannelMerger()
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
AudioContext.createChannelSplitter()
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
AudioContext.createConvolver()
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
AudioContext.createDelay()
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
AudioContext.createDynamicsCompressor()
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
AudioContext.createGain()
Creates a GainNode, which can be used to control the overall volume of the audio graph.
AudioContext.createOscillator()
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
AudioContext.createPanner()
Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.
AudioContext.createPeriodicWave()
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
AudioContext.createWaveShaper()
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
AudioContext.createAudioWorker()
Creates an AudioWorkerNode, which can interact with a web worker thread to generate, process, or analyse audio directly. This was added to the spec on August 29 2014, and is not implemented in any browser yet.
AudioContext.decodeAudioData()
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.
AudioContext.resume()
Resumes the progression of time in an audio context that has previously been suspended.
AudioContext.suspend()
Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
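
As a combined illustration of the methods above, a minimal sketch of the decodeAudioData()/createBufferSource() pattern; the URL 'audio-file.ogg' and the variable names are placeholders, and an existing audioCtx (see the Examples section below) is assumed:

var request = new XMLHttpRequest();
request.open('GET', 'audio-file.ogg', true); // placeholder URL
request.responseType = 'arraybuffer';        // decodeAudioData() expects an ArrayBuffer

request.onload = function() {
  audioCtx.decodeAudioData(request.response, function(decodedBuffer) {
    // play the decoded track through an AudioBufferSourceNode
    var source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start(0);
  }, function(err) {
    console.log('Decoding the audio data failed');
  });
};

request.send();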

Obsolete methods

AudioContext.createJavaScriptNode()
Creates a JavaScriptNode, used for direct audio processing via JavaScript. This method is obsolete, and has been replaced by AudioContext.createScriptProcessor().
AudioContext.createWaveTable()
Creates a WaveTableNode, used to define a periodic waveform. This method is obsolete, and has been replaced by AudioContext.createPeriodicWave().

Examples

Basic declaration of an audio context:

var audioCtx = new AudioContext();

Cross-browser variant:

var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

var oscillatorNode = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
var finish = audioCtx.destination;
// etc.
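
Continuing the snippet above, a minimal sketch that wires the created nodes into a graph and plays a tone; the frequency and gain values are arbitrary illustrative choices:

oscillatorNode.connect(gainNode); // oscillator -> gain -> destination
gainNode.connect(finish);

oscillatorNode.frequency.value = 440; // arbitrary example pitch (A4)
gainNode.gain.value = 0.5;            // half volume

oscillatorNode.start();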

Spezifikationen

Specification | Status | Comment
Web Audio API: The definition of 'AudioContext' in that specification. | Working Draft |

Browserkompatibilität

Feature | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit)
Basic support | 10.0 (webkit) / 35 | 25.0 (25.0) | Not supported | 15.0 (webkit) / 22 | 6.0 (webkit)
createStereoPanner() | 42.0 | 37.0 (37.0) | Not supported | Not supported | Not supported
onstatechange, state, suspend(), resume() | (Yes) | 40.0 (40.0) | Not supported | Not supported | Not supported

Feature | Android | Firefox Mobile (Gecko) | Firefox OS | IE Mobile | Opera Mobile | Safari Mobile | Chrome for Android
Basic support | Not supported | 37.0 (37.0) | 2.2 | Not supported | Not supported | Not supported | (Yes)
createStereoPanner() | Not supported | (Yes) | (Yes) | Not supported | Not supported | Not supported | (Yes)
onstatechange, state, suspend(), resume() | Not supported | (Yes) | (Yes) | Not supported | Not supported | Not supported | (Yes)

See also
