Supplemental audio files (10 files)
JSLHR-S-18-0313Niu_SuppS1.wav (982.71 kB)
JSLHR-S-18-0313Niu_SuppS2.wav (380.89 kB)
JSLHR-S-18-0313Niu_SuppS3.wav (660.51 kB)
JSLHR-S-18-0313Niu_SuppS4.wav (761.44 kB)
JSLHR-S-18-0313Niu_SuppS5.wav (293.11 kB)
JSLHR-S-18-0313Niu_SuppS6.wav (1006.97 kB)
JSLHR-S-18-0313Niu_SuppS7.wav (1.28 MB)
JSLHR-S-18-0313Niu_SuppS8.wav (770.31 kB)
JSLHR-S-18-0313Niu_SuppS9.wav (697.97 kB)
JSLHR-S-18-0313Niu_SuppS10.wav (1.45 MB)

Mandarin EL SR based on WaveNet-CTC (Qian et al., 2019)

Media posted on 2019-06-14, 22:37. Authored by Zhaopeng Qian, Li Wang, Shaochuan Zhang, Chan Liu, and Haijun Niu.
Purpose: The application of Mandarin Chinese electrolaryngeal (EL) speech for laryngectomees has been limited by drawbacks such as a single fundamental frequency, mechanical-sounding voice quality, and large radiated noise. To improve the intelligibility of Mandarin EL speech, a new approach using an automatic speech recognition (ASR) system was proposed that, when combined with text-to-speech synthesis, can convert EL speech into healthy speech.
Method: An ASR system was designed to recognize EL speech based on a deep learning model that combines WaveNet with connectionist temporal classification (WaveNet-CTC); a minimal illustrative sketch of such a model is given after the abstract. The system consists of three main parts: an acoustic model, a language model, and a decoding model. Acoustic features are extracted during speech preprocessing, and 3,230 utterances of EL speech mixed with 10,000 utterances of healthy speech are used to train the ASR system. A comparative experiment was designed to evaluate the performance of the proposed method.
Results: The results show that the proposed ASR system has higher stability and generalizability than the traditional methods, performing better on Chinese characters, Chinese words, short sentences, and long sentences. Phoneme confusion occurs more readily for stops and affricates in EL speech than in healthy speech. Nevertheless, the accuracy of the ASR system reached as high as 83.24% when 3,230 utterances of EL speech were used for training.
Conclusions: This study indicates that EL speech can be recognized effectively by an ASR system based on WaveNet-CTC. The proposed method shows higher generalization performance and better stability than the traditional methods. The higher accuracy achievable with the WaveNet-CTC-based ASR system suggests that EL speech can be converted into healthy speech.
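The abstract does not give implementation details for the WaveNet-CTC acoustic model, so the following is a minimal, hypothetical PyTorch sketch of a WaveNet-style stack of dilated, gated convolutions with a CTC output layer, offered only as a reading aid. All hyperparameters (feature dimension, channel widths, number of layers, token inventory size) are illustrative assumptions and are not taken from the paper.

# Minimal sketch of a WaveNet-style acoustic model trained with CTC loss.
# Hyperparameters below are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class WaveNetCTC(nn.Module):
    def __init__(self, n_feats=40, n_channels=128, n_layers=6, n_tokens=1000):
        super().__init__()
        self.input_proj = nn.Conv1d(n_feats, n_channels, kernel_size=1)
        # Stack of dilated convolutions with gated activations and residual connections.
        self.filters = nn.ModuleList()
        self.gates = nn.ModuleList()
        self.residuals = nn.ModuleList()
        for i in range(n_layers):
            d = 2 ** i
            self.filters.append(nn.Conv1d(n_channels, n_channels, 3, padding=d, dilation=d))
            self.gates.append(nn.Conv1d(n_channels, n_channels, 3, padding=d, dilation=d))
            self.residuals.append(nn.Conv1d(n_channels, n_channels, 1))
        # Projection to per-frame token posteriors (+1 class for the CTC blank symbol).
        self.output_proj = nn.Conv1d(n_channels, n_tokens + 1, kernel_size=1)

    def forward(self, feats):                            # feats: (batch, n_feats, time)
        x = self.input_proj(feats)
        for f, g, r in zip(self.filters, self.gates, self.residuals):
            z = torch.tanh(f(x)) * torch.sigmoid(g(x))   # gated activation unit
            x = x + r(z)                                 # residual connection
        logits = self.output_proj(x)                     # (batch, n_tokens + 1, time)
        return logits.permute(2, 0, 1).log_softmax(-1)   # (time, batch, classes) for CTC

# Training step with the CTC objective (dummy shapes for illustration).
model = WaveNetCTC()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
feats = torch.randn(4, 40, 200)                  # 4 utterances, 40-dim features, 200 frames
targets = torch.randint(1, 1001, (4, 20))        # token label sequences (blank index 0 excluded)
log_probs = model(feats)
loss = ctc(log_probs, targets,
           torch.full((4,), 200),                # input (frame) lengths
           torch.full((4,), 20))                 # target (label) lengths
loss.backward()

In practice, the frame-level posteriors from such an acoustic model would be combined with a language model during decoding to produce the final character or word sequence, as described in the Method section above.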

Supplemental Materials S1–S10: 10 .wav files of electrolaryngeal (EL) speech sentences.

Qian, Z., Wang, L., Zhang, S., Liu, C., & Niu, H. (2019). Mandarin electrolaryngeal speech recognition based on WaveNet-CTC. Journal of Speech, Language, and Hearing Research, 62, 2203–2212. https://doi.org/10.1044/2019_JSLHR-S-18-0313

Funding

This study was supported by the Open Project Program of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (No. VRLAB2018B06).
