
This is an experimental technology
Check the Browser compatibility table carefully before using this in production.

The SpeechSynthesisUtterance interface of the Web Speech API represents a speech request. It contains the content the speech service should read and information about how to read it (e.g. language, pitch and volume).
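
For example, a minimal sketch (assuming the browser supports speech synthesis) creates an utterance, adjusts how it should be read, and hands it to the speech service:

var utterance = new SpeechSynthesisUtterance('Hello, world');
utterance.lang = 'en-US'; // language the speech service should use
utterance.pitch = 1.5;    // 0 to 2, default 1
utterance.volume = 0.8;   // 0 to 1, default 1
window.speechSynthesis.speak(utterance);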

Constructor

SpeechSynthesisUtterance.SpeechSynthesisUtterance()
Returns a new SpeechSynthesisUtterance object instance.
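
The text to be spoken can be passed to the constructor, or set afterwards via the text property; the two forms below are equivalent:

var utter1 = new SpeechSynthesisUtterance('Hello');

var utter2 = new SpeechSynthesisUtterance();
utter2.text = 'Hello';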

Properties

SpeechSynthesisUtterance also inherits properties from its parent interface, EventTarget.

SpeechSynthesisUtterance.lang
Gets and sets the language of the utterance.
SpeechSynthesisUtterance.pitch
Gets and sets the pitch at which the utterance will be spoken.
SpeechSynthesisUtterance.rate
Gets and sets the speed at which the utterance will be spoken.
SpeechSynthesisUtterance.text
Gets and sets the text that will be synthesised when the utterance is spoken.
SpeechSynthesisUtterance.voice
Gets and sets the voice that will be used to speak the utterance.
SpeechSynthesisUtterance.volume
Gets and sets the volume that the utterance will be spoken at.
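
As a rough illustration, all of these properties can be set on an utterance before it is spoken; the values below are arbitrary, and the voice is simply whichever one the browser lists first:

var utter = new SpeechSynthesisUtterance('The quick brown fox');
utter.lang = 'en-GB';   // a BCP 47 language tag
utter.rate = 1.2;       // 0.1 to 10, default 1
utter.pitch = 0.8;      // 0 to 2, default 1
utter.volume = 0.5;     // 0 to 1, default 1

// getVoices() can return an empty list before the voices have loaded
var voices = window.speechSynthesis.getVoices();
if (voices.length > 0) {
  utter.voice = voices[0];
}

window.speechSynthesis.speak(utter);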

Event handlers

SpeechSynthesisUtterance.onboundary
Fired when the spoken utterance reaches a word or sentence boundary.
SpeechSynthesisUtterance.onend
Fired when the utterance has finished being spoken.
SpeechSynthesisUtterance.onerror
Fired when an error occurs that prevents the utterance from being successfully spoken.
SpeechSynthesisUtterance.onmark
Fired when the spoken utterance reaches a named SSML "mark" tag.
SpeechSynthesisUtterance.onpause
Fired when the utterance is paused part way through.
SpeechSynthesisUtterance.onresume
Fired when a paused utterance is resumed.
SpeechSynthesisUtterance.onstart
Fired when the utterance has begun to be spoken.
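
For instance, handlers can be attached to an utterance before it is spoken; the log messages here are purely illustrative:

var utter = new SpeechSynthesisUtterance('Listen carefully');

utter.onstart = function(event) {
  console.log('Started speaking');
};

utter.onboundary = function(event) {
  // event.charIndex is the character position of the boundary in the text
  console.log('Reached a boundary at character ' + event.charIndex);
};

utter.onerror = function(event) {
  console.log('Speech could not be completed');
};

utter.onend = function(event) {
  // event.elapsedTime reports how long the utterance took to speak
  console.log('Finished speaking');
};

window.speechSynthesis.speak(utter);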

Examples

In our basic Speech synthesiser demo, we first grab a reference to the SpeechSynthesis controller using window.speechSynthesis. After defining some necessary variables, we retrieve a list of the voices available using SpeechSynthesis.getVoices() and populate a select menu with them so the user can choose what voice they want.

Inside the inputForm.onsubmit handler, we stop the form from submitting with preventDefault(), use the constructor to create a new utterance instance containing the text from the text <input>, set the utterance's voice to the voice selected in the <select> element, and start the utterance speaking via the SpeechSynthesis.speak() method.

var synth = window.speechSynthesis;

var inputForm = document.querySelector('form');
var inputTxt = document.querySelector('input');
var voiceSelect = document.querySelector('select');

var voices = synth.getVoices();

// Populate the voice <select> with the available voices
for(var i = 0; i < voices.length; i++) {
  var option = document.createElement('option');
  option.textContent = voices[i].name + ' (' + voices[i].lang + ')';
  option.setAttribute('data-lang', voices[i].lang);
  option.setAttribute('data-name', voices[i].name);
  voiceSelect.appendChild(option);
}

inputForm.onsubmit = function(event) {
  event.preventDefault();

  var utterThis = new SpeechSynthesisUtterance(inputTxt.value);
  var selectedOption = voiceSelect.selectedOptions[0].getAttribute('data-name');
  // Find the full voice object matching the selected option's name
  for(var i = 0; i < voices.length; i++) {
    if(voices[i].name === selectedOption) {
      utterThis.voice = voices[i];
    }
  }
  synth.speak(utterThis);
  inputTxt.blur();
};
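
Note that in some browsers the voice list is populated asynchronously, so SpeechSynthesis.getVoices() can return an empty array the first time it is called. A common workaround, sketched here, is to rebuild the <select> contents from a SpeechSynthesis.onvoiceschanged handler:

synth.onvoiceschanged = function() {
  voices = synth.getVoices();
  voiceSelect.innerHTML = ''; // clear any existing options before repopulating
  for(var i = 0; i < voices.length; i++) {
    var option = document.createElement('option');
    option.textContent = voices[i].name + ' (' + voices[i].lang + ')';
    option.setAttribute('data-lang', voices[i].lang);
    option.setAttribute('data-name', voices[i].name);
    voiceSelect.appendChild(option);
  }
};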

Specifications

Specification: Web Speech API, the definition of 'SpeechSynthesisUtterance' in that specification.
Status: Draft

Browser compatibility

All of the listed features (basic support, the SpeechSynthesisUtterance() constructor, lang, onboundary, onend, onerror, onmark, onpause, onresume, onstart, pitch, rate, text, voice and volume) have the same support, summarised per browser below.

Desktop:

Chrome: 33
Edge: Yes
Firefox: 49
Internet Explorer: No
Opera: 21
Safari: 7

Mobile:

Android webview: 33
Chrome for Android: 33
Edge mobile: Yes
Firefox for Android: 62 (61 to 62 behind a preference, see note 1)
Opera Android: No
iOS Safari: 7.1
Samsung Internet: ?

1. From version 61 until version 62 (exclusive): this feature is behind the media.webspeech.synth.enabled preference (needs to be set to true). To change preferences in Firefox, visit about:config.

