{{APIRef("Web Speech API")}}{{SeeCompatTable}}

The SpeechRecognition interface of the Web Speech API is the controller interface for the recognition service; it also handles the {{domxref("SpeechRecognitionEvent")}} events sent by the recognition service.

Constructor

{{domxref("SpeechRecognition.SpeechRecognition()")}}
Creates a new SpeechRecognition object.

Properties

SpeechRecognition also inherits properties from its parent interface, {{domxref("EventTarget")}}.

{{domxref("SpeechRecognition.grammars")}}
Returns and sets a collection of {{domxref("SpeechGrammar")}} objects that represent the grammars that will be understood by the current SpeechRecognition.
{{domxref("SpeechRecognition.lang")}}
Returns and sets the language of the current SpeechRecognition. If not specified, this defaults to the HTML {{htmlattrxref("lang","html")}} attribute value, or the user agent's language setting if that isn't set either.
{{domxref("SpeechRecognition.continuous")}}
Controls whether continuous results are returned for each recognition, or only a single result. Defaults to single (false).
{{domxref("SpeechRecognition.interimResults")}}
Controls whether interim results should be returned (true) or not (false). Interim results are results that are not yet final (e.g. the {{domxref("SpeechRecognitionResult.isFinal")}} property is false).
{{domxref("SpeechRecognition.maxAlternatives")}}
Sets the maximum number of {{domxref("SpeechRecognitionAlternative")}}s provided per result. The default value is 1.
{{domxref("SpeechRecognition.serviceURI")}}
Specifies the location of the speech recognition service used by the current SpeechRecognition to handle the actual recognition. The default is the user agent's default speech service.
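
A minimal configuration sketch follows, assuming an unprefixed SpeechRecognition constructor is available (Chrome exposes it as webkitSpeechRecognition; see the compatibility notes below). The specific values are illustrative only.

var recognition = new SpeechRecognition();
recognition.lang = 'en-US';          // otherwise falls back to the page's lang attribute
recognition.continuous = true;       // keep delivering results until stopped
recognition.interimResults = true;   // also deliver results that are not yet final
recognition.maxAlternatives = 3;     // up to three alternatives per result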

Event handlers

{{domxref("SpeechRecognition.onaudiostart")}}
Fired when the user agent has started to capture audio.
{{domxref("SpeechRecognition.onaudioend")}}
Fired when the user agent has finished capturing audio.
{{domxref("SpeechRecognition.onend")}}
Fired when the speech recognition service has disconnected.
{{domxref("SpeechRecognition.onerror")}}
Fired when a speech recognition error occurs.
{{domxref("SpeechRecognition.onnomatch")}}
Fired when the speech recognition service returns a final result with no significant recognition. This may involve some degree of recognition, which doesn't meet or exceed the {{domxref("SpeechRecognitionAlternative.confidence","confidence")}} threshold.
{{domxref("SpeechRecognition.onresult")}}
Fired when the speech recognition service returns a result — a word or phrase has been positively recognized and this has been communicated back to the app.
{{domxref("SpeechRecognition.onsoundstart")}}
Fired when any sound — recognisable speech or not — has been detected.
{{domxref("SpeechRecognition.onsoundend")}}
Fired when any sound — recognisable speech or not — has stopped being detected.
{{domxref("SpeechRecognition.onspeechstart")}}
Fired when sound that is recognised by the speech recognition service as speech has been detected.
{{domxref("SpeechRecognition.onspeechend")}}
Fired when speech recognised by the speech recognition service has stopped being detected.
{{domxref("SpeechRecognition.onstart")}}
Fired when the speech recognition service has begun listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition.
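
As a rough sketch of how these handlers fit together (reusing the recognition instance from the configuration sketch above), you might wire them up like this:

recognition.onstart = function() {
  console.log('Listening...');
};
recognition.onresult = function(event) {
  // event.results is a SpeechRecognitionResultList; take the first alternative of the first result
  console.log('Heard: ' + event.results[0][0].transcript);
};
recognition.onnomatch = function() {
  console.log('Speech was detected, but nothing was recognised with enough confidence.');
};
recognition.onerror = function(event) {
  console.log('Recognition error: ' + event.error);
};
recognition.onend = function() {
  console.log('Recognition service disconnected.');
};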

Methods

SpeechRecognition also inherits methods from its parent interface, {{domxref("EventTarget")}}.

{{domxref("SpeechRecognition.abort()")}}
Stops the speech recognition service from listening to incoming audio, and doesn't attempt to return a {{domxref("SpeechRecognitionResult")}}.
{{domxref("SpeechRecognition.start()")}}
Starts the speech recognition service listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition.
{{domxref("SpeechRecognition.stop()")}}
Stops the speech recognition service from listening to incoming audio, and attempts to return a {{domxref("SpeechRecognitionResult")}} using the audio captured so far.
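
A small sketch of the difference between stop() and abort(), again reusing the recognition instance from above; the five-second timeout is arbitrary:

recognition.start();                  // begin listening

setTimeout(function() {
  recognition.stop();                 // stop listening but still try to return a result
  // recognition.abort();             // or stop listening and discard any result
}, 5000);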

Examples

In our simple Speech color changer example, we create a new SpeechRecognition object instance using the {{domxref("SpeechRecognition.SpeechRecognition", "SpeechRecognition()")}} constructor, create a new {{domxref("SpeechGrammarList")}}, and set it to be the grammar that will be recognised by the SpeechRecognition instance using the {{domxref("SpeechRecognition.grammars")}} property.

After some other values have been defined, we set things up so that the recognition service starts when a click event occurs (see {{domxref("SpeechRecognition.start()")}}). When a result has been successfully recognised, the {{domxref("SpeechRecognition.onresult")}} handler fires; we extract the color that was spoken from the event object, and then set the background color of the {{htmlelement("html")}} element to that colour.

// JSGF grammar listing the colour keywords the app should recognise
var grammar = '#JSGF V1.0; grammar colors; public <color> = aqua | azure | beige | bisque | black | blue | brown | chocolate | coral | crimson | cyan | fuchsia | ghostwhite | gold | goldenrod | gray | green | indigo | ivory | khaki | lavender | lime | linen | magenta | maroon | moccasin | navy | olive | orange | orchid | peru | pink | plum | purple | red | salmon | sienna | silver | snow | tan | teal | thistle | tomato | turquoise | violet | white | yellow ;';
var recognition = new SpeechRecognition(); // prefixed as webkitSpeechRecognition in Chrome (see Browser compatibility)
var speechRecognitionList = new SpeechGrammarList();
speechRecognitionList.addFromString(grammar, 1); // weight 1 gives this grammar the highest importance
recognition.grammars = speechRecognitionList;
//recognition.continuous = false;
recognition.lang = 'en-US';
recognition.interimResults = false;
recognition.maxAlternatives = 1;

var diagnostic = document.querySelector('.output');
var bg = document.querySelector('html');

// Start listening when the page is clicked
document.body.onclick = function() {
  recognition.start();
  console.log('Ready to receive a color command.');
}

// When a result arrives, show the transcript and use it as the background colour
recognition.onresult = function(event) {
  var color = event.results[0][0].transcript;
  diagnostic.textContent = 'Result received: ' + color;
  bg.style.backgroundColor = color;
}

Specifications

Specification | Status | Comment
{{SpecName('Web Speech API', '#speechreco-section', 'SpeechRecognition')}} | {{Spec2('Web Speech API')}} |

Browser compatibility

{{CompatibilityTable}}
Feature | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit)
Basic support | {{CompatChrome(33)}}{{property_prefix("webkit")}} [1] | {{CompatNo}} [2] | {{CompatNo}} | {{CompatNo}} | {{CompatNo}}
continuous | {{CompatChrome(33)}} [1] | {{CompatNo}} | {{CompatNo}} | {{CompatNo}} | {{CompatNo}}

Feature | Android | Chrome | Firefox Mobile (Gecko) | Firefox OS | IE Phone | Opera Mobile | Safari Mobile
Basic support | {{CompatUnknown}} | {{CompatVersionUnknown}} [1] | {{CompatGeckoMobile(44)}} | 2.5 | {{CompatNo}} | {{CompatNo}} | {{CompatNo}}
continuous | {{CompatUnknown}} | {{CompatVersionUnknown}} [1] | {{CompatUnknown}} | {{CompatNo}} | {{CompatNo}} | {{CompatNo}} | {{CompatNo}}
  • [1] Speech recognition interfaces are currently prefixed in Chrome, so you'll need to prefix interface names appropriately, e.g. webkitSpeechRecognition. You'll also need to serve your code through a web server for recognition to work.
  • [2] Can be enabled via the media.webspeech.recognition.enable flag in about:config on mobile. Not implemented at all on Desktop Firefox — see {{bug(1248897)}}.
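
Because of the prefixing described in [1], a common pattern (shown here only as an illustrative sketch) is to fall back to the prefixed constructors when the unprefixed ones are missing:

var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
var SpeechGrammarList = window.SpeechGrammarList || window.webkitSpeechGrammarList;

var recognition = new SpeechRecognition(); // works in Chrome via the webkit-prefixed constructor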

Firefox OS permissions

To use speech recognition in an app, you need to request the following permissions in your app's manifest:

"permissions": {
  "audio-capture" : {
    "description" : "Audio capture"
  },
  "speech-recognition" : {
    "description" : "Speech recognition"
  }
}

Your app also needs to be a privileged app, so you need to include this as well:

  "type": "privileged"

See also
