
The AudioNode interface is a generic interface for representing an audio processing module like an audio source (e.g. an HTML <audio> or <video> element), audio destination, intermediate processing module (e.g. a filter like BiquadFilterNode or ConvolverNode), or volume control (like GainNode).

AudioNodes participating in an AudioContext form an audio routing graph.

An AudioNode has inputs and outputs, each with a given number of channels. An AudioNode with zero inputs and exactly one output is called a source node. The exact processing done varies from one AudioNode to another but, in general, a node reads its inputs, does some audio-related processing, and generates new values for its outputs.

Different nodes can be linked together to build a processing graph. Such a graph is contained in an AudioContext. Each AudioNode participates in exactly one such context. In general, processing nodes inherit the properties and methods of AudioNode, but also define their own functionality on top. See the individual node pages for more details, as listed on the Web Audio API homepage.
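For instance, a typical graph chains a source node through one or more processing nodes to the context's destination. A minimal sketch, assuming a browser environment where AudioContext is available:

```javascript
// Sketch: build a small routing graph — source → filter → gain → destination.
const audioCtx = new AudioContext();

const source = audioCtx.createOscillator();   // source node: 0 inputs, 1 output
const filter = audioCtx.createBiquadFilter(); // intermediate processing node
const gain = audioCtx.createGain();           // volume control

// Each connect() call adds an edge to the routing graph held by audioCtx.
source.connect(filter);
filter.connect(gain);
gain.connect(audioCtx.destination);           // destination node: inputs only, no outputs
```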

Note: An AudioNode can be the target of events, and it therefore implements the EventTarget interface.
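Because AudioNode implements EventTarget, node subclasses can dispatch events. For example, source nodes such as OscillatorNode fire an ended event when playback stops; a sketch, assuming a browser environment:

```javascript
// An OscillatorNode (an AudioNode subclass) is an EventTarget,
// so addEventListener() works on it directly.
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
osc.connect(audioCtx.destination);

osc.addEventListener('ended', function () {
  console.log('Oscillator has stopped playing.');
});

osc.start();
osc.stop(audioCtx.currentTime + 1); // 'ended' fires after one second
```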


AudioNode.context Read only
Returns the associated AudioContext, that is the object representing the processing graph the node is participating in.
AudioNode.numberOfInputs Read only
Returns the number of inputs feeding the node. Source nodes are defined as nodes having a numberOfInputs property with a value of 0.
AudioNode.numberOfOutputs Read only
Returns the number of outputs coming out of the node. Destination nodes — like AudioDestinationNode — have a value of 0 for this attribute.
AudioNode.channelCount
Represents an integer used to determine how many channels are used when up-mixing and down-mixing connections to any inputs to the node. Its usage and precise definition depend on the value of AudioNode.channelCountMode.
AudioNode.channelCountMode
Represents an enumerated value describing the way channels must be matched between the node's inputs and outputs.
AudioNode.channelInterpretation
Represents an enumerated value describing the meaning of the channels. This interpretation will define how audio up-mixing and down-mixing will happen. The possible values are "speakers" or "discrete".
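A short sketch of reading these attributes on a GainNode, assuming a browser environment (the logged defaults reflect the values the specification gives for GainNode):

```javascript
const audioCtx = new AudioContext();
const gainNode = audioCtx.createGain();

// A GainNode has exactly one input and one output.
console.log(gainNode.numberOfInputs);        // 1
console.log(gainNode.numberOfOutputs);       // 1
console.log(gainNode.context === audioCtx);  // true

// These three attributes control channel up-mixing and down-mixing.
console.log(gainNode.channelCount);          // 2 by default
console.log(gainNode.channelCountMode);      // "max" by default
console.log(gainNode.channelInterpretation); // "speakers" by default
```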


Also implements methods from the interface EventTarget.

AudioNode.connect(AudioNode)
Allows us to connect one output of this node to one input of another node.
AudioNode.connect(AudioParam)
Allows us to connect one output of this node to one input of an audio parameter.
AudioNode.disconnect()
Allows us to disconnect the current node from another one it is already connected to.
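The two connect() variants and disconnect() can be sketched together; here an LFO modulates an oscillator's frequency AudioParam to produce a simple vibrato (assuming a browser environment):

```javascript
const audioCtx = new AudioContext();
const carrier = audioCtx.createOscillator();
const lfo = audioCtx.createOscillator();
const lfoGain = audioCtx.createGain();

// connect(AudioNode): route this node's output into another node's input.
carrier.connect(audioCtx.destination);

// connect(AudioParam): drive a parameter with an audio-rate signal —
// the LFO (scaled by lfoGain) varies the carrier's frequency.
lfo.connect(lfoGain);
lfoGain.connect(carrier.frequency);

// disconnect(): remove the node's outgoing connections.
carrier.disconnect();
```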

This simple snippet of code shows the creation of some audio nodes, and how the AudioNode properties and methods can be used. You can find examples of such usage in any of the examples linked to on the Web Audio API landing page (for example, Violent Theremin).

// for legacy browsers that still prefix the constructor
var AudioContext = window.AudioContext || window.webkitAudioContext;

var audioCtx = new AudioContext();

// create a source node and a volume-control node
var oscillator = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();

// connect the nodes into a routing graph: oscillator -> gain -> speakers
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);




Specification: Web Audio API (the definition of 'AudioNode' in that specification).


Feature                         Chrome         Firefox (Gecko)  Internet Explorer  Opera                          Safari (WebKit)
Basic support                   10.0 -webkit   25.0 (25.0)      Not supported      15.0 -webkit, 22 (unprefixed)  ?
channelCount, channelCountMode  (Yes) -webkit  (Yes)            Not supported      (Yes)                          Not supported
connect(AudioParam)             (Yes) -webkit  (Yes)            Not supported      (Yes)                          Not supported

Feature                         Android        Firefox Mobile (Gecko)  Firefox OS (Gecko)  IE Phone       Opera Mobile   Safari Mobile
Basic support                   ?              26.0                    1.2                 ?              ?              ?
channelCount, channelCountMode  Not supported  (Yes)                   (Yes)               Not supported  Not supported  Not supported
connect(AudioParam)             Not supported  (Yes)                   (Yes)               Not supported  Not supported  Not supported



Contributors to this page: chikoski
Last updated by: chikoski