BaseAudioContext

The BaseAudioContext interface acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly; you'd use its features via one of these two inheriting interfaces.

A BaseAudioContext can be a target of events, and therefore implements the EventTarget interface.
Properties

BaseAudioContext.audioWorklet Experimental Read only
Returns the AudioWorklet object, used for creating custom AudioNodes with JavaScript processing.

BaseAudioContext.currentTime Read only
Returns a double representing an ever-increasing hardware time in seconds, used for scheduling. It starts at 0.

BaseAudioContext.destination Read only
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

BaseAudioContext.listener Read only
Returns the AudioListener object, used for 3D spatialization.

BaseAudioContext.sampleRate Read only
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample rate of an AudioContext cannot be changed.

BaseAudioContext.state Read only
Returns the current state of the AudioContext.
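The read-only properties above can be inspected at any time. As a minimal sketch (the helper name describeContext is our own, not part of the API):

```javascript
// Hypothetical helper: summarize a BaseAudioContext's read-only properties.
// `ctx` can be an AudioContext or an OfflineAudioContext.
function describeContext(ctx) {
  return `state=${ctx.state}, sampleRate=${ctx.sampleRate} Hz, ` +
         `currentTime=${ctx.currentTime.toFixed(2)} s`;
}

// Browser usage (assumed):
// const audioCtx = new AudioContext();
// console.log(describeContext(audioCtx));
```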
Event handlers

BaseAudioContext.onstatechange
An event handler that runs when an event of type statechange has fired. This occurs when the AudioContext's state changes, due to the calling of one of the state-change methods (AudioContext.suspend, AudioContext.resume, or AudioContext.close).
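As a sketch of that flow, the helper below (watchState is our own name, not part of the API) records each state the context passes through; the handler fires after suspend(), resume(), or close() changes the state:

```javascript
// Hypothetical helper: report each state a context passes through.
// Works with any BaseAudioContext, since onstatechange is defined there.
function watchState(ctx, onState) {
  onState(ctx.state); // report the initial state, e.g. "suspended" or "running"
  ctx.onstatechange = () => onState(ctx.state);
}

// Browser usage (assumed):
// const audioCtx = new AudioContext();
// watchState(audioCtx, (s) => console.log("state:", s));
// audioCtx.suspend(); // handler reports "suspended" once the change applies
```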
Methods

Also implements methods from the interface EventTarget.

BaseAudioContext.createBuffer()
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

BaseAudioContext.createConstantSource()
Creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.

BaseAudioContext.createBufferSource()
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.

BaseAudioContext.createScriptProcessor()
Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

BaseAudioContext.createStereoPanner()
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.

BaseAudioContext.createAnalyser()
Creates an AnalyserNode, which can be used to expose audio time and frequency data, for example to create data visualisations.

BaseAudioContext.createBiquadFilter()
Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.

BaseAudioContext.createChannelMerger()
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.

BaseAudioContext.createChannelSplitter()
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.

BaseAudioContext.createConvolver()
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.

BaseAudioContext.createDelay()
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.

BaseAudioContext.createDynamicsCompressor()
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.

BaseAudioContext.createGain()
Creates a GainNode, which can be used to control the overall volume of the audio graph.

BaseAudioContext.createIIRFilter()
Creates an IIRFilterNode, which represents a second-order filter configurable as several different common filter types.

BaseAudioContext.createOscillator()
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.

BaseAudioContext.createPanner()
Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.

BaseAudioContext.createPeriodicWave()
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.

BaseAudioContext.createWaveShaper()
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.

BaseAudioContext.decodeAudioData()
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.

BaseAudioContext.resume()
Resumes the progression of time in an audio context that has previously been suspended/paused.
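To illustrate createBuffer() and AudioBufferSourceNode working together, here is a sketch that fills a one-second mono buffer with white noise (the helper name makeNoiseBuffer and the one-second length are our choices, not part of the API):

```javascript
// Hypothetical helper: build a one-second mono AudioBuffer of white noise.
function makeNoiseBuffer(ctx) {
  // 1 channel, ctx.sampleRate frames = one second of audio.
  const buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < data.length; i++) {
    data[i] = Math.random() * 2 - 1; // sample values in [-1, 1)
  }
  return buffer;
}

// Browser usage (assumed): play it through an AudioBufferSourceNode.
// const audioCtx = new AudioContext();
// const source = audioCtx.createBufferSource();
// source.buffer = makeNoiseBuffer(audioCtx);
// source.connect(audioCtx.destination);
// source.start();
```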
Examples

Basic audio context declaration:

const audioCtx = new AudioContext();

Cross-browser variant (using the historical webkit prefix):

const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();

const oscillatorNode = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
const finish = audioCtx.destination;

// Connect the graph and start the tone.
oscillatorNode.connect(gainNode);
gainNode.connect(finish);
oscillatorNode.start();
Specifications

Specification: Web Audio API, # BaseAudioContext
Browser compatibility