Using the MediaStream Recording API

The MediaStream Recording API makes it easy to record audio and/or video streams. When used with navigator.mediaDevices.getUserMedia(), it provides an easy way of recording from the user's input devices, and the result can be used instantly in web apps. Both audio and video may be recorded, separately or together. This article aims to provide a basic guide to the MediaRecorder interface, which provides this API.

A sample application: Web Dictaphone

An image of the Web dictaphone sample app - a sine wave sound visualization, then record and stop buttons, then an audio jukebox of recorded tracks that can be played back.

To demonstrate basic usage of the MediaRecorder API, we have built a web-based dictaphone. It allows you to record snippets of audio and then play them back. It even gives you a visualization of your device's sound input, using the Web Audio API. We'll concentrate on the recording and playback functionality in this article.

You can see this demo running live, or grab the source code on GitHub (it can also be downloaded directly from there).

CSS goodies

The HTML in this app is fairly simple, so we won't go through it in detail here; there are a couple of slightly more interesting bits of CSS worth mentioning, however, so we'll discuss them below. If you are not interested in CSS and want to get straight to the JavaScript, skip to the Basic app setup section below.

Keeping the interface constrained to the viewport, regardless of device height, with calc()

The calc() function is one of those useful little features that has appeared in CSS that doesn't look like much at first, but soon starts to make you think "Wow, why didn't we have this before? Why was CSS2 layout so awkward?" It allows you to calculate the computed value of a CSS unit, mixing different units in the process.

For example, in Web Dictaphone we have three main UI areas, stacked vertically. We wanted to give the first two (the header and the controls) fixed heights:

header {
  height: 70px;
}

.main-controls {
  padding-bottom: 0.7rem;
  height: 170px;
}

However, we wanted to make the third area (which contains the recorded samples you can play back) take up whatever space is left, regardless of the device height. Flexbox could be the answer here, but it's a bit overkill for such a simple layout. Instead, the problem was solved by making the third container's height equal to 100% of the parent height, minus the heights and padding of the other two:

.sound-clips {
  box-shadow: inset 0 3px 4px rgba(0,0,0,0.7);
  background-color: rgba(0,0,0,0.1);
  height: calc(100% - 240px - 0.7rem);
  overflow: scroll;
}

Note: calc() has good support across modern browsers, even going back to Internet Explorer 9.

Checkbox hack for showing/hiding

This is fairly well documented already, but we thought we'd give a mention to the checkbox hack, which abuses the fact that you can click on the <label> of a checkbox to toggle it checked/unchecked. In Web Dictaphone this powers the Information screen, which is shown/hidden by clicking the question mark icon in the top right-hand corner. First of all, we style the <label> how we want it, making sure that it has a high enough z-index to always sit above the other elements and therefore be clickable:

label {
    font-family: 'NotoColorEmoji';
    font-size: 3rem;
    position: absolute;
    top: 2px;
    right: 3px;
    z-index: 5;
    cursor: pointer;
}

Then we hide the actual checkbox, because we don't want it cluttering up our UI:

input[type=checkbox] {
   position: absolute;
   top: -100px;
}

Next, we style the Information screen (wrapped in an <aside> element), give it fixed positioning so that it doesn't appear in the layout flow and affect the main UI, transform it to the position we want it to sit in by default, and give it a transition for smooth showing/hiding:

aside {
   position: fixed;
   top: 0;
   left: 0;
   text-shadow: 1px 1px 1px black;  
   width: 100%;
   height: 100%;
   transform: translateX(100%);
   transition: 0.6s all;
   background-color: #999;
   background-image: linear-gradient(to top right, rgba(0,0,0,0), rgba(0,0,0,0.5));
}

Last, we write a rule to say that when the checkbox is checked (when we click/focus the label), the adjacent <aside> element will have its horizontal translation value changed and transition smoothly into view:

input[type=checkbox]:checked ~ aside {
  transform: translateX(0);
}

Basic app setup

To grab the media stream we want to capture, we use getUserMedia(). We then use the MediaRecorder API to record the stream, and output each recorded snippet into the source of a generated <audio> element so it can be played back.

We'll declare some variables for the record and stop buttons, and the <article> element that will contain the generated audio players:

var record = document.querySelector('.record');
var stop = document.querySelector('.stop');
var soundClips = document.querySelector('.sound-clips');

Finally for this section, we set up the basic getUserMedia structure:

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
   console.log('getUserMedia supported.');
   navigator.mediaDevices.getUserMedia (
      // constraints - only audio needed for this app
      {
         audio: true
      })

      // Success callback
      .then(function(stream) {
 
        
      })

      // Error callback
      .catch(function(err) {
         console.log('The following getUserMedia error occurred: ' + err);
      }
   );
} else {
   console.log('getUserMedia not supported on your browser!');
}

The whole thing is wrapped in a test that checks whether getUserMedia is supported before running anything else. Next, we call getUserMedia and inside it define:

  • The constraints: only audio is to be captured for our dictaphone.
  • The success callback: this code is run once the getUserMedia call has completed successfully.
  • The error/failure callback: this code is run if the getUserMedia call fails for whatever reason.
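As a side note, the same structure can also be written with async/await. The sketch below is illustrative rather than the app's actual code; the `initAudioStream` name is ours, and the injectable `mediaDevices` parameter exists only to keep the sketch self-contained (in the browser it defaults to navigator.mediaDevices):

```javascript
// Sketch only: equivalent feature detection and capture using async/await.
// `mediaDevices` is injectable purely for illustration/testing; in a browser
// you would normally just use navigator.mediaDevices directly.
async function initAudioStream(mediaDevices = navigator.mediaDevices) {
  if (!mediaDevices || !mediaDevices.getUserMedia) {
    console.log('getUserMedia not supported on your browser!');
    return null;
  }
  try {
    // constraints - only audio needed for this app
    return await mediaDevices.getUserMedia({ audio: true });
  } catch (err) {
    console.log('The following getUserMedia error occurred: ' + err);
    return null;
  }
}
```

Note that, unlike the callback version above, a rejected permission prompt surfaces here as a caught exception rather than a separate error callback.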

Note: all of the code below is placed inside the getUserMedia success callback.

Capturing the media stream

Once getUserMedia has created a media stream successfully, you create a new MediaRecorder instance with the MediaRecorder() constructor and pass it the stream directly. This is your entry point into using the MediaRecorder API: the stream is now ready to be captured into a Blob, in your browser's default encoding format.

var mediaRecorder = new MediaRecorder(stream);
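If you want a specific container/codec rather than the browser default, the real static method MediaRecorder.isTypeSupported() lets you check support before constructing the recorder. The `pickMimeType` helper below is an illustrative sketch of ours, not part of the API:

```javascript
// Illustrative helper (not part of the MediaRecorder API): return the first
// MIME type in `candidates` accepted by the `isSupported` predicate.
// In a browser you would pass (t) => MediaRecorder.isTypeSupported(t).
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) {
      return type;
    }
  }
  return ''; // an empty string lets the browser fall back to its default
}
```

In a browser you might then construct the recorder with `new MediaRecorder(stream, { mimeType: pickMimeType(['audio/ogg; codecs=opus', 'audio/webm'], (t) => MediaRecorder.isTypeSupported(t)) })`; the `mimeType` option is part of the real MediaRecorder options dictionary.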

There are a series of methods available in the MediaRecorder interface that allow you to control recording of the media stream; in Web Dictaphone we just make use of two, and listen to some events. First of all, MediaRecorder.start() is used to start recording the stream once the record button is pressed:

record.onclick = function() {
  mediaRecorder.start();
  console.log(mediaRecorder.state);
  console.log("recorder started");
  record.style.background = "red";
  record.style.color = "black";
}

When the MediaRecorder is recording, the MediaRecorder.state property will return a value of "recording".

As recording progresses, we need to collect the audio data. We register an event handler to do this using mediaRecorder.ondataavailable:

var chunks = [];

mediaRecorder.ondataavailable = function(e) {
  chunks.push(e.data);
}

The browser will fire dataavailable events as needed, but you can also include a timeslice when invoking the start() method — for example start(10000) — to control this interval, or call MediaRecorder.requestData() to trigger an event when you need it.
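As a rough illustration of the timeslice behaviour (an approximation, not a guarantee from the spec), you can expect roughly one dataavailable event per elapsed timeslice, with any remainder flushed when the recording stops. The helper below and its name are hypothetical:

```javascript
// Hypothetical sketch: approximate number of dataavailable events for a
// recording of `durationMs` started with start(timesliceMs). Without a
// timeslice, the browser typically delivers one chunk when stop() is called.
function approxChunkCount(durationMs, timesliceMs) {
  if (!timesliceMs) {
    return 1; // no timeslice: usually a single chunk on stop()
  }
  return Math.max(1, Math.ceil(durationMs / timesliceMs));
}
```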

Lastly, we use the MediaRecorder.stop() method to stop the recording when the stop button is pressed, and finalize the Blob ready for use somewhere else in our application.

stop.onclick = function() {
  mediaRecorder.stop();
  console.log(mediaRecorder.state);
  console.log("recorder stopped");
  record.style.background = "";
  record.style.color = "";
}

Note that the recording may also stop naturally if the media stream ends (e.g. if you were grabbing a song track and the track ended, or the user stopped sharing their microphone).

Grabbing and using the blob

When recording has stopped, the state property returns a value of "inactive", and a stop event is fired. We register an event handler for this using mediaRecorder.onstop, and finalize our blob there from all the chunks we have received:

mediaRecorder.onstop = function(e) {
  console.log("recorder stopped");

  var clipName = prompt('Enter a name for your sound clip');

  var clipContainer = document.createElement('article');
  var clipLabel = document.createElement('p');
  var audio = document.createElement('audio');
  var deleteButton = document.createElement('button');
           
  clipContainer.classList.add('clip');
  audio.setAttribute('controls', '');
  deleteButton.innerHTML = "Delete";
  clipLabel.innerHTML = clipName;

  clipContainer.appendChild(audio);
  clipContainer.appendChild(clipLabel);
  clipContainer.appendChild(deleteButton);
  soundClips.appendChild(clipContainer);

  var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs=opus' });
  chunks = [];
  var audioURL = window.URL.createObjectURL(blob);
  audio.src = audioURL;

  deleteButton.onclick = function(e) {
    var evtTgt = e.target;
    evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode);
  }
}

Let's go through the above code and look at what's happening.

First, we display a prompt asking the user to name their clip.

Next, we create an HTML structure like the following, inserting it into our clip container, which is an <article> element.

<article class="clip">
  <audio controls></audio>
  <p>your clip name</p>
  <button>Delete</button>
</article>

After that, we create a combined Blob out of the recorded audio chunks, and create an object URL pointing to it, using window.URL.createObjectURL(blob). We then set the value of the <audio> element's src attribute to the object URL, so that when the play button is pressed on the audio player, it will play the Blob.
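One thing the sample doesn't do is release the object URLs it creates: each one keeps its Blob alive until the page is unloaded. A possible cleanup pattern is sketched below; the `attachClipUrl` name is ours, and the injectable `urlApi` parameter exists only to keep the sketch self-contained (in a browser it defaults to window.URL):

```javascript
// Sketch: set the audio element's src from the blob, and return a function
// that revokes the object URL (it could be called from the Delete handler).
function attachClipUrl(audioEl, blob, urlApi = window.URL) {
  const objectUrl = urlApi.createObjectURL(blob);
  audioEl.src = objectUrl;
  return function releaseClip() {
    urlApi.revokeObjectURL(objectUrl);
  };
}
```

URL.revokeObjectURL() is a real API; once called, the object URL can no longer be used as a source, so it should only be invoked when the clip is being removed.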

Finally, we set an onclick handler on the delete button to be a function that deletes the whole clip HTML structure.

Specifications

Specification         | Status        | Comment
MediaStream Recording | Working Draft | Initial definition

Browser compatibility


Feature       | Chrome | Firefox (Gecko) | Internet Explorer | Opera      | Safari (WebKit)
Basic support | 47     | 25.0 (25.0)     | No support        | No support | No support

Feature       | Android    | Android Webview | Firefox Mobile (Gecko) | Firefox OS | IE Phone   | Opera Mobile | Safari Mobile | Chrome Mobile
Basic support | No support | 47              | 25.0 (25.0)            | 1.3[1]     | No support | No support   | No support    | 47

[1] The initial Firefox OS implementation only supported audio recording.

See also
