Text-to-Speech (TTS) Generation System
A Text-to-Speech (TTS) Generation System is a speech generation system that implements a text-to-speech algorithm to solve a text-to-speech task.
- AKA: TTS Engine, Text-to-Speech (TTS) Conversion System.
- Context:
- It can (often) be a component of a Conversational Speech System (which may include an NLG system).
- It can (often) be composed of a TTS Text Normalizer, a Grapheme-to-Phoneme Converter, a Waveform Generator, and a Waveform Producing Device (see the pipeline sketch after this list).
- It can range from being an English TTS System to being a Spanish TTS System, a German TTS System, a Japanese TTS System, a Mandarin TTS System, ...
- It can range from being a Concatenative TTS System to being a ...
- ...
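The component composition noted above can be made concrete with a minimal sketch. Every function name and body below is a hypothetical placeholder chosen only to mirror the listed components, not code from any particular TTS system.

```python
# Hypothetical sketch of the pipeline composition described above; every
# function here is an illustrative stub, not drawn from a specific system.

def normalize_text(text: str) -> str:
    """TTS Text Normalizer: expand abbreviations, digits, dates, etc."""
    return text.replace("Dr.", "Doctor")  # toy normalization rule

def graphemes_to_phonemes(text: str) -> list[str]:
    """Grapheme-to-Phoneme Converter: map written words to phoneme strings."""
    return ["/" + word + "/" for word in text.lower().split()]  # placeholder

def generate_waveform(phonemes: list[str]) -> bytes:
    """Waveform Generator: synthesize PCM audio from the phoneme sequence."""
    return b"\x00\x00" * 16000  # placeholder: one second of silence at 16 kHz

def synthesize(text: str) -> bytes:
    """Full pipeline; the result goes to a Waveform Producing Device."""
    return generate_waveform(graphemes_to_phonemes(normalize_text(text)))

audio = synthesize("Dr. Smith arrives at 10:30.")
```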
- Example(s):
- AWS Polly TTS System (see the boto3 sketch after this list).
- Nuance Vocalizer TTS System.
- GCP's Text-to-Speech System.
- Text to Voice (Firefox add-on).
- a Neural TTS System (that implements a neural TTS algorithm).
- …
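As a concrete illustration of invoking a hosted TTS engine such as the AWS Polly entry above, the sketch below uses the boto3 client; the region, voice id, and output filename are arbitrary example values, and error handling is omitted.

```python
# Minimal sketch of calling AWS Polly via boto3; region, voice id, and
# output path are example values.
import boto3

polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text="Hello from a text-to-speech system.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in English voices
)
# The synthesized audio arrives as a streaming body in the response.
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```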
- Counter-Example(s):
- a Speech-to-Text (STT) System, such as an Automatic Speech Recognition (ASR) System, which maps speech audio to text rather than text to speech.
- See: Video Recognition System, Dialog System.
References
2018
- (Google Cloud, 2018) ⇒ https://cloud.google.com/text-to-speech/docs/
- QUOTE: Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 30 voices, available in multiple languages and variants. It applies DeepMind’s groundbreaking research in WaveNet and Google’s powerful neural networks to deliver high fidelity audio. With this easy-to-use API, you can create lifelike interactions with your users, across many applications and devices.
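A minimal sketch of calling this service with the google-cloud-texttospeech Python client (its v2-style interface) might look as follows; the voice name and output filename are arbitrary example values, not prescribed by the documentation quoted above.

```python
# Minimal sketch using the google-cloud-texttospeech Python client
# (v2-style API); voice selection and filename are example values.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Natural-sounding speech."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # one of the WaveNet voices the quote mentions
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)
# The response carries the synthesized audio bytes directly.
with open("output.mp3", "wb") as f:
    f.write(response.audio_content)
```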
2017a
- (Arik et al., 2017) ⇒ Sercan O. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, and Mohammad Shoeybi. (2017). “Deep Voice: Real-time Neural Text-to-speech.” arXiv preprint arXiv:1702.07825
- ABSTRACT: We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations.
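The inference dataflow through the building blocks the abstract enumerates can be summarized as a hypothetical skeleton; each function below stands in for a trained neural network, and the segmentation model is omitted because the paper uses it at training time to locate phoneme boundaries, not at inference.

```python
# Hypothetical skeleton of the Deep Voice inference dataflow described in
# the abstract; each stage stubs out a trained neural network.

def grapheme_to_phoneme(text: str) -> list[str]:
    """G2P model: written text -> phoneme sequence."""
    ...

def predict_durations(phonemes: list[str]) -> list[float]:
    """Duration model: per-phoneme durations, in seconds."""
    ...

def predict_f0(phonemes: list[str], durations: list[float]) -> list[float]:
    """F0 model: fundamental-frequency contour for the utterance."""
    ...

def synthesize_audio(phonemes, durations, f0) -> bytes:
    """Audio synthesis model: a WaveNet variant conditioned on the above."""
    ...

def deep_voice_tts(text: str) -> bytes:
    phonemes = grapheme_to_phoneme(text)
    durations = predict_durations(phonemes)
    f0 = predict_f0(phonemes, durations)
    return synthesize_audio(phonemes, durations, f0)
```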
2017b
- (Google, 2017) ⇒ https://google.github.io/tacotron/publications/tacotron2/ (audio samples page for the Tacotron 2 neural TTS system)
2015
- (Rao et al., 2015) ⇒ Kanishka Rao, Fuchun Peng, Haşim Sak, and Françoise Beaufays. (2015). “Grapheme-to-phoneme Conversion Using Long Short-term Memory Recurrent Neural Networks.” In: Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. doi:10.1109/ICASSP.2015.7178767
- QUOTE: Grapheme-to-phoneme (G2P) models are key components in speech recognition and text-to-speech systems as they describe how words are pronounced.
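In the spirit of the LSTM-based approach Rao et al. describe, the sketch below shows a minimal encoder-decoder G2P model in PyTorch; the vocabulary sizes, hidden dimension, and non-autoregressive decoding are illustrative simplifications, not the paper's architecture.

```python
# Minimal sketch of an LSTM-based G2P model in the spirit of Rao et al.;
# sizes and decoding strategy are illustrative simplifications.
import torch
import torch.nn as nn

class G2PModel(nn.Module):
    def __init__(self, n_graphemes=30, n_phonemes=45, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_graphemes, hidden)
        # Encoder reads the grapheme (letter) sequence ...
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        # ... and the decoder emits one phoneme prediction per step.
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_phonemes)

    def forward(self, graphemes: torch.Tensor) -> torch.Tensor:
        x = self.embed(graphemes)                  # (batch, T, hidden)
        enc_out, state = self.encoder(x)
        dec_out, _ = self.decoder(enc_out, state)  # simplified: no autoregression
        return self.out(dec_out)                   # phoneme logits per position

# Toy usage: six grapheme ids standing in for a six-letter word.
logits = G2PModel()(torch.randint(0, 30, (1, 6)))
```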