From 59f490e22e4fbb3f61a10be20d26082bb3e6b9d9 Mon Sep 17 00:00:00 2001
From: Linda Zanchi
Date: Thu, 14 Feb 2019 16:03:39 -0500
Subject: [PATCH] edit for punctuation and capitalization (#30161)

Removed multiple unnecessary commas, added one necessary comma, added one capital in a title.
---
 guide/english/voice/index.md | 48 +++++++++++++++---------------------
 1 file changed, 20 insertions(+), 28 deletions(-)

diff --git a/guide/english/voice/index.md b/guide/english/voice/index.md
index d92357938d..8a4c3eb747 100644
--- a/guide/english/voice/index.md
+++ b/guide/english/voice/index.md
@@ -2,24 +2,23 @@
 title: Voice
 ---
-## Voice
-
+# Voice
 Speech recognition allows users affected by accessibility difficulties (such as permanent visual impairment or temporary impairment while driving) the ability to navigate content on a website or input text data (such as a form). Speech synthesis provides websites the ability to provide information to users by reading text.
-### Javascript Web Speech API
+## Javascript Web Speech API
 The [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API) enables you to incorporate voice data into web apps using both speech recognition and speech synthesis.
-#### How the Web Speech API works
+### How the Web Speech API Works
-The Web Speech API uses the device's native microphone system. When an utterance is recognized from a pre-defined grammar (see below), it is returned as a result (or list of results) as a text string, and callback functions can be provided to perform further actions.
+The Web Speech API uses the device's native microphone system. When an utterance is recognized from a pre-defined grammar (see below) it is returned as a result (or list of results) as a text string and callback functions can be provided to perform further actions.
-#### How to use the Speech Recognition API
+### How to use the Speech Recognition API
-Here is a simple example of using the Speech Recognition API. Note that the API is initated with the `new SpeechRecognition()` constructor, and starts when `recognition.start();` is called. It creates a transcript from what is received and then that is appended to the `<p>` element. [Click here for a working demo of this code](https://codepen.io/ashwoodall/pen/MPeyRm).
+Here is a simple example of using the Speech Recognition API. Note that the API is initiated with the `new SpeechRecognition()` constructor and starts when `recognition.start();` is called. It creates a transcript from what is received and then that is appended to the `<p>` element. [Click here for a working demo of this code](https://codepen.io/ashwoodall/pen/MPeyRm).
 This is the HTML that the transcript is appended to:
@@ -57,12 +56,11 @@ recognition.addEventListener('end', recognition.start);
 recognition.start();
 ```
-### Alexa
+## Alexa
+Alexa is Amazon’s cloud-based voice service available on tens of millions of devices from Amazon and third-party device manufacturers. With Alexa you can build natural voice experiences that offer customers a more intuitive way to interact with the technology they use every day.
+Alexa is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, sports, and other real-time information such as news.
-Alexa is Amazon’s cloud-based voice service available on tens of millions of devices from Amazon and third-party device manufacturers. With Alexa, you can build natural voice experiences that offer customers a more intuitive way to interact with the technology they use every day.
-It is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, sports, and other real-time information, such as news.
-
-# Amazon Echo Device Range
+### Amazon Echo Device Range
 - Amazon Echo
 - Amazon Echo Plus
 - Amazon Echo Dot
@@ -70,35 +68,29 @@ It is capable of voice interaction, music playback, making to-do lists, setting
 - Amazon Echo Show
 - Amazon Echo Spot
-# Far Field Microphones
+## Far Field Microphones
 Speech recognition systems often use multiple microphones to reduce the impact of reverberation and noise.
-The Echo mics are arranged in a hexagonal layout, with one microphone at each vertex and one in the center. The delay between each microphone receiving the signal enables the device to identify the source of the voice and cancel out noise coming from other directions. This is a phenomenon known as beamforming.
+The Echo mics are arranged in a hexagonal layout with one microphone at each vertex and one in the center. The delay between each microphone receiving the signal enables the device to identify the source of the voice and cancel out noise coming from other directions. This is a phenomenon known as beamforming.
-While state-of-the-art speech recognition systems perform reasonably well in close-talking microphone conditions, performance degrades in conditions where the microphone is far from the user.
+While state-of-the-art speech recognition systems perform reasonably well in close-talking microphone conditions performance degrades in conditions where the microphone is far from the user.
 The audio captured by the Echo will be influenced by:
 1) the speaker’s voice against the wall of the room,
 2) the background noise from outside,
-3) the acoustic echo coming from the device’s loudspeaker
+3) the acoustic echo coming from the device’s loudspeaker,
 4) the output audio against the wall of the room.
-# Software
-The software components within the platform include both Natural Language Understanding (NLU) as well as Automated Speech Recognition (ASR). These software components can be leveraged by custom written "skills" by independent software developers that are then certified to a set of standards by Amazon. There are already more than 20k of these custom skills available through their app store.
+## Software
+The software components within the platform include both Natural Language Understanding (NLU) and Automated Speech Recognition (ASR). These components can be leveraged by custom-written "skills" from independent software developers, which are then certified to a set of standards by Amazon. There are already more than 20k of these custom skills available through their app store.
-### IBM Watson Speech-to-Text API
-
-The IBM Watson Speech-to-Text API uses machine learning to accurately predict speech in real time. Currently seven different languages are supported, as well as live voice and pre-recorded audio. The API can be used for free, or paid versions are available for larger scale apps.
-### Siri
+## IBM Watson Speech-to-Text API
+The IBM Watson Speech-to-Text API uses machine learning to accurately predict speech in real time. Currently seven different languages are supported as well as live voice and pre-recorded audio. The API can be used for free; paid versions are also available for larger scale apps.
+## Siri
 Apple's iOS 12 update introduced [Siri](https://en.wikipedia.org/wiki/Siri) Shortcuts, which supports the use of third-party applications through Apple's digital voice assistant. Siri Shortcuts allows developers to add shortcuts and personalized phrases to Siri through their applications by, for example, letting the user record a voice phrase for a particular action and adding that phrase to Siri.
-#### More Information
-
+## Additional Resources
 - [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API)
 - [Alexa API](https://developer.amazon.com/docs/alexa-voice-service/api-overview.html)
 - [IBM Watson API](https://www.ibm.com/watson/services/speech-to-text/)
 - [Sirikit](https://developer.apple.com/documentation/sirikit)
-
-
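A note on the guide's Speech Recognition section in the patch above: the transcript-building step it describes (take the best alternative of each recognition result and join them into one string) can be isolated as a plain function. This is a minimal sketch; the function name and the mock event are illustrative, imitating the shape of the browser's `SpeechRecognitionEvent` only so the logic can run outside a browser.

```javascript
// Build a transcript string from a SpeechRecognition "result" event:
// take the best (first) alternative of each result and join them.
function buildTranscript(event) {
  return Array.from(event.results)
    .map(result => result[0].transcript)
    .join('');
}

// Hypothetical mock event; in the browser this would be a real
// SpeechRecognitionEvent whose `results` is a SpeechRecognitionResultList.
const mockEvent = {
  results: [
    [{ transcript: 'hello ' }],
    [{ transcript: 'world' }]
  ]
};

console.log(buildTranscript(mockEvent)); // prints "hello world"
```

In a browser this would plug into a handler roughly as `recognition.addEventListener('result', e => { output.textContent = buildTranscript(e); })`, where `output` stands for whatever element the page appends the transcript to.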
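The beamforming idea in the guide's Far Field Microphones section can be made concrete. For a distant (far-field) source, the extra distance the wavefront travels to each microphone is the projection of that mic's position onto the arrival direction, and dividing by the speed of sound gives the per-mic delay; aligning the signals by these delays and summing reinforces the chosen direction while attenuating others. The hexagon radius and coordinates below are illustrative, not the Echo's actual geometry.

```javascript
// Far-field delay-and-sum beamforming sketch (2-D, plane-wave model).
const SPEED_OF_SOUND = 343; // m/s, approximate value at room temperature

// Six mics on a hexagon of the given radius plus one at the center
// (illustrative layout only).
function hexArray(radius) {
  const mics = [{ x: 0, y: 0 }];
  for (let i = 0; i < 6; i++) {
    const a = (Math.PI / 3) * i;
    mics.push({ x: radius * Math.cos(a), y: radius * Math.sin(a) });
  }
  return mics;
}

// Per-mic relative delays (seconds) that align a plane wave arriving
// from direction theta (radians): project each mic position onto the
// arrival direction and divide by the speed of sound.
function steeringDelays(mics, theta) {
  const dir = { x: Math.cos(theta), y: Math.sin(theta) };
  return mics.map(m => (m.x * dir.x + m.y * dir.y) / SPEED_OF_SOUND);
}

// For a 4 cm radius and a source along +x, the mic on the far side of
// the hexagon hears the wave roughly 0.23 ms after the near side.
const delays = steeringDelays(hexArray(0.04), 0);
console.log(delays.map(d => (d * 1000).toFixed(3))); // per-mic delays in ms
```

Summing the mic signals after applying these delays boosts sound from direction `theta` and partially cancels noise from elsewhere, which is the effect the section describes.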