The Media Capture and Streams API, often called the Media Streams API or MediaStream API, is an API related to WebRTC which provides support for streaming audio and video data.
It provides the interfaces and methods for working with the streams and their constituent tracks, the constraints associated with data formats, the success and error callbacks when using the data asynchronously, and the events that are fired during the process.
A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks. Each MediaStreamTrack may have one or more channels. The channel represents the smallest unit of a media stream, such as an audio signal associated with a given speaker, like left or right in a stereo audio track.
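The stream/track relationship above can be sketched with a small helper that lists a stream's tracks. This is a minimal illustration, assuming a browser environment; the helper name `describeTracks` is an invention for this example, not part of the API.

```javascript
// Sketch: inspect the MediaStreamTrack objects that make up a MediaStream.
// `stream` is assumed to be a MediaStream, e.g. one returned by getUserMedia().
function describeTracks(stream) {
  // getTracks() returns every MediaStreamTrack in the stream, audio and video alike.
  return stream.getTracks().map((track) => ({
    kind: track.kind,       // "audio" or "video"
    label: track.label,     // typically the device name, when permitted
    enabled: track.enabled, // false means the track outputs silence or black frames
  }));
}
```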
MediaStream objects have a single input and a single output. A MediaStream object generated by getUserMedia() is called local, and has as its source input one of the user's cameras or microphones. A non-local MediaStream may represent a media element, such as <audio>, a stream originating over the network and obtained via the WebRTC RTCPeerConnection API, or a stream created using the Web Audio API.
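A local stream as described above is typically obtained and previewed like this. This is a browser-only sketch; the element id "preview" and the function name are assumptions for this example.

```javascript
// Sketch: obtain a local MediaStream from the user's camera and microphone
// and play it back in a <video> element (assumed to have id="preview").
async function startLocalPreview() {
  // Prompts the user for permission; resolves to a local MediaStream.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  const video = document.getElementById("preview");
  video.srcObject = stream; // attach the local stream as the element's media source
  await video.play();
  return stream;
}
```

Because getUserMedia() is asynchronous and permission-gated, errors (such as the user denying access) surface as a rejected promise, so calls are usually wrapped in try/catch.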
In these reference articles, you'll find the fundamental information you'll need to know about each of the interfaces that make up the Media Capture and Streams API.
The Capabilities, constraints, and settings article discusses the concepts of constraints and capabilities, as well as media settings, and includes a Constraint Exerciser that lets you experiment with the results of different constraint sets being applied to the audio and video tracks coming from the computer's A/V input devices (such as its webcam and microphone).
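The constraint mechanics that article covers can be sketched briefly: constraints are passed to getUserMedia() and can later be tightened on a live track. A minimal, browser-only sketch; the function name is an assumption for this example.

```javascript
// Sketch: request video under initial constraints, then adjust the live
// track with applyConstraints() and read back the resulting settings.
async function requestConstrainedVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    // "ideal" values are preferences the browser tries to satisfy.
    video: { width: { ideal: 1280 }, height: { ideal: 720 } },
  });
  const [track] = stream.getVideoTracks();
  console.log("initial settings:", track.getSettings());
  // Tighten a constraint on the live track; rejects if it cannot be met.
  await track.applyConstraints({ frameRate: { max: 15 } });
  console.log("updated settings:", track.getSettings());
  return track;
}
```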