

This is an experimental technology
This feature is still in development in some browsers; check the browser compatibility table for the proper prefixes to use in each browser. Because the standard this feature is based on may still be revised, its syntax and behavior may change in future versions of browsers.

The Web Speech API enables you to incorporate voice data into web apps. The Web Speech API has two parts: SpeechSynthesis (text-to-speech) and SpeechRecognition (asynchronous speech recognition).

Web Speech concepts and usage

The Web Speech API enables web apps to handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognise. Grammar is defined using the JSpeech Grammar Format (JSGF).
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesiser). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.
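As a concrete illustration of the grammar side, a minimal sketch of building a JSGF grammar string and registering it with a recognizer might look like this. The buildGrammar helper and the 'colors' vocabulary are illustrative assumptions, not part of the API; the feature-detection guard accounts for Chrome's webkit prefix.

```javascript
// Build a minimal JSGF grammar string for a fixed vocabulary.
// buildGrammar is a hypothetical helper, not part of the Web Speech API.
function buildGrammar(name, words) {
  return '#JSGF V1.0; grammar ' + name + '; public <' + name + '> = ' + words.join(' | ') + ' ;';
}

// Browser-only: register the grammar with a recognizer. Chrome currently
// exposes these interfaces with a webkit prefix.
if (typeof window !== 'undefined') {
  const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const GrammarList = window.SpeechGrammarList || window.webkitSpeechGrammarList;
  if (Recognition && GrammarList) {
    const recognition = new Recognition();
    const grammars = new GrammarList();
    grammars.addFromString(buildGrammar('colors', ['red', 'green', 'blue']), 1);
    recognition.grammars = grammars;
  }
}
```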

For more details on using these features, see Using the Web Speech API.

Web Speech API interfaces

Speech recognition

SpeechRecognition
The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.
SpeechRecognitionAlternative
Represents a single word that has been recognised by the speech recognition service.
SpeechRecognitionError
Represents error messages from the recognition service.
SpeechRecognitionEvent
The event object for the result and nomatch events, and contains all the data associated with an interim or final speech recognition result.
SpeechGrammar
The words or patterns of words that we want the recognition service to recognize.
SpeechGrammarList
Represents a list of SpeechGrammar objects.
SpeechRecognitionResult
Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.
SpeechRecognitionResultList
Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
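Putting the recognition interfaces above together, a minimal recognition session might be sketched as follows. The bestTranscript helper is illustrative (it just indexes into the nested result/alternative lists described above); the guard keeps the snippet inert outside a browser.

```javascript
// Pull the top transcript (the first SpeechRecognitionAlternative of the
// first SpeechRecognitionResult) out of a results-list-shaped object.
// bestTranscript is a hypothetical helper, not part of the API.
function bestTranscript(results) {
  return results[0][0].transcript;
}

// Browser-only wiring; Chrome currently exposes the prefixed constructor.
if (typeof window !== 'undefined') {
  const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  if (Recognition) {
    const recognition = new Recognition();
    recognition.lang = 'en-US';
    recognition.onresult = (event) => {
      console.log('Heard:', bestTranscript(event.results));
    };
    recognition.onerror = (event) => console.error('Recognition error:', event.error);
    recognition.start(); // needs to be served over HTTP(S), not file://
  }
}
```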

Speech synthesis

SpeechSynthesis
The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.
SpeechSynthesisErrorEvent
Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.
SpeechSynthesisEvent
Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.
SpeechSynthesisUtterance
Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g. language, pitch and volume.)
SpeechSynthesisVoice
Represents a voice that the system supports. Every SpeechSynthesisVoice has its own relative speech service including information about language, name and URI.
Window.speechSynthesis
Specced out as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.
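A short sketch ties the synthesis interfaces together: pick a SpeechSynthesisVoice, attach it to a SpeechSynthesisUtterance, and hand it to the controller. The pickVoice helper and the sample sentence are assumptions for illustration, not part of the API.

```javascript
// Choose a SpeechSynthesisVoice by BCP 47 language tag; null if none match.
// pickVoice is a hypothetical helper, not part of the Web Speech API.
function pickVoice(voices, lang) {
  return voices.find((voice) => voice.lang === lang) || null;
}

// Browser-only: speak a sentence with a chosen voice, pitch and rate.
if (typeof window !== 'undefined' && 'speechSynthesis' in window) {
  const utterance = new SpeechSynthesisUtterance('Hello from the Web Speech API.');
  const voice = pickVoice(window.speechSynthesis.getVoices(), 'en-US');
  if (voice) utterance.voice = voice;
  utterance.pitch = 1.0;
  utterance.rate = 1.0;
  window.speechSynthesis.speak(utterance);
}
```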

Examples

The Web Speech API repo on GitHub contains demos to illustrate speech recognition and synthesis.

Specifications

Specification    Status   Comment
Web Speech API   Draft    Initial definition

Browser compatibility

Feature         Chrome   Firefox (Gecko)   Internet Explorer   Opera        Safari (WebKit)
Basic support   33 [1]   49 (49) [2]       No support          No support   No support

Feature         Android   Chrome      Firefox Mobile (Gecko)   Firefox OS   IE Phone     Opera Mobile   Safari Mobile
Basic support   ?         (Yes) [1]   ?                        2.5          No support   No support     No support
  • [1] Speech recognition interfaces are currently prefixed in Chrome, so you'll need to prefix interface names appropriately, e.g. webkitSpeechRecognition. You'll also need to serve your code through a web server for recognition to work. Speech synthesis is fully supported without prefixes.
  • [2] Recognition can be enabled via the media.webspeech.recognition.enable flag in about:config; synthesis is switched on by default. Note that currently only the speech synthesis part is available in Firefox Desktop — the speech recognition part will be available soon, once the required internal permissions are sorted out.

Firefox OS permissions

To use speech recognition in an app, you need to specify the following permissions in your manifest:

"permissions": {
  "audio-capture" : {
    "description" : "Audio capture"
  },
  "speech-recognition" : {
    "description" : "Speech recognition"
  }
}

You also need a privileged app, so you need to include this as well:

  "type": "privileged"

Speech synthesis needs no permissions to be set.
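Putting the two requirements above together, a Firefox OS app manifest might look like the following sketch; the name and description values are placeholders, not taken from any real app.

```json
{
  "name": "Voice Demo",
  "description": "Hypothetical app using speech recognition",
  "type": "privileged",
  "permissions": {
    "audio-capture": { "description": "Audio capture" },
    "speech-recognition": { "description": "Speech recognition" }
  }
}
```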

See also
