Web Speech API

Experimental: This is an experimental technology.
Check the Browser compatibility table carefully before using this in production.

The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (text-to-speech) and SpeechRecognition (asynchronous speech recognition).

Web Speech Concepts and Usage

The Web Speech API enables web apps to handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using the JSpeech Grammar Format (JSGF).
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.

For more details on using these features, see Using the Web Speech API.
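The two halves of the API can be sketched together. The snippet below is a minimal, hedged example: it feature-detects both interfaces (including the `webkit`-prefixed recognition constructor some browsers use) and degrades gracefully in environments where the API is unavailable. The `speechSupport` helper is illustrative, not part of the API.

```javascript
// Feature detection for the two halves of the Web Speech API.
// In non-browser environments these globals are undefined, so the
// sketch degrades gracefully instead of throwing.
const Recognition =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;
const synth = globalThis.speechSynthesis;

// Illustrative helper (not part of the API): report which halves exist.
function speechSupport() {
  return {
    recognition: typeof Recognition !== "undefined",
    synthesis: typeof synth !== "undefined",
  };
}

const support = speechSupport();

if (support.synthesis) {
  // Text-to-speech: wrap the text in an utterance and hand it to the controller.
  synth.speak(new SpeechSynthesisUtterance("Hello from the Web Speech API"));
}

if (support.recognition) {
  // Speech-to-text: create a recognizer and listen for a single result.
  const recognizer = new Recognition();
  recognizer.lang = "en-US";
  recognizer.onresult = (event) =>
    console.log(event.results[0][0].transcript);
  recognizer.start();
}
```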

Web Speech API Interfaces

Speech recognition

SpeechRecognition
The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.
SpeechRecognitionAlternative
Represents a single word that has been recognized by the speech recognition service.
SpeechRecognitionError
Represents error messages from the recognition service.
SpeechRecognitionEvent
The event object for the result and nomatch events, containing all the data associated with an interim or final speech recognition result.
SpeechGrammar
The words or patterns of words that we want the recognition service to recognize.
SpeechGrammarList
Represents a list of SpeechGrammar objects.
SpeechRecognitionResult
Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.
SpeechRecognitionResultList
Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.
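The recognition interfaces above fit together as follows. This is a sketch under assumptions: the grammar name "colors" and its word list are illustrative, and the `buildJSGF` helper is not part of the API, just a convenience for producing a JSGF grammar string.

```javascript
// Illustrative helper (not part of the API): build a JSGF grammar string
// with one public rule listing the alternatives the service should favour.
function buildJSGF(name, words) {
  return `#JSGF V1.0; grammar ${name}; public <${name}> = ${words.join(" | ")};`;
}

const grammar = buildJSGF("colors", ["red", "green", "blue"]);

// Some browsers expose these constructors with a webkit prefix.
const Recognition =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;
const GrammarList =
  globalThis.SpeechGrammarList || globalThis.webkitSpeechGrammarList;

if (Recognition && GrammarList) {
  const recognizer = new Recognition();

  const grammars = new GrammarList();
  grammars.addFromString(grammar, 1); // weight 1 = highest priority
  recognizer.grammars = grammars;

  recognizer.continuous = false;     // stop after one result
  recognizer.interimResults = false; // only deliver final results

  recognizer.onresult = (event) => {
    // Each SpeechRecognitionResult holds one or more
    // SpeechRecognitionAlternative objects, ranked by confidence.
    const best = event.results[0][0];
    console.log(best.transcript, best.confidence);
  };
  recognizer.onerror = (event) => console.error(event.error);
  recognizer.start();
}
```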

Speech synthesis

SpeechSynthesis
The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.
SpeechSynthesisErrorEvent
Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.
SpeechSynthesisEvent
Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.
SpeechSynthesisUtterance
Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g. language, pitch, and volume).
SpeechSynthesisVoice
Represents a voice that the system supports. Every SpeechSynthesisVoice has its own relative speech service, including information about language, name, and URI.
Window.speechSynthesis
Specced out as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.
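The synthesis interfaces above can be combined in a short sketch. The preference for an "en-US" voice is an assumption for illustration, and the `configureUtterance` helper and its defaults are not part of the API; they simply gather the utterance settings in one place.

```javascript
// Illustrative helper (not part of the API): collect utterance settings in a
// plain object so the configuration can be inspected even outside a browser.
function configureUtterance(
  text,
  { lang = "en-US", pitch = 1, rate = 1, volume = 1 } = {}
) {
  return { text, lang, pitch, rate, volume };
}

const settings = configureUtterance("Welcome!", { pitch: 1.2 });

const synth = globalThis.speechSynthesis;
if (synth && globalThis.SpeechSynthesisUtterance) {
  const utterance = new SpeechSynthesisUtterance(settings.text);
  utterance.lang = settings.lang;
  utterance.pitch = settings.pitch;
  utterance.rate = settings.rate;
  utterance.volume = settings.volume;

  // getVoices() may return an empty list until the voiceschanged event fires.
  const voice = synth.getVoices().find((v) => v.lang === settings.lang);
  if (voice) utterance.voice = voice;

  utterance.onend = () => console.log("finished speaking");
  synth.speak(utterance);
}
```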


The Web Speech API repo on GitHub contains demos to illustrate speech recognition and synthesis.


Specifications

Specification   | Status | Comment
Web Speech API  | Draft  | Initial definition

Browser compatibility

See also

