A state-space model with neural-network prediction for recovering vocal tract resonances in fluent speech from Mel-cepstral coefficients
Abstract
In this paper, we present a state-space formulation of a neural-network-based hidden dynamic model of speech whose parameters are trained using an approximate EM algorithm. This efficient and effective training makes use of the output of an off-the-shelf formant tracker (for the vowel segments of the speech signal), in addition to the Mel-cepstral observations, to simplify the complex sufficient statistics that would be required in the exact EM algorithm. The trained model, consisting of a state equation for the target-directed vocal tract resonance (VTR) dynamics across all classes of speech sounds (including consonant closure and constriction) and an observation equation mapping the VTRs to the Mel-cepstral acoustic measurements, is then used to recover the unobserved VTRs with an extended Kalman filter. The results demonstrate accurate estimation of the VTRs, especially during rapid consonant–vowel or vowel–consonant transitions and during consonant closure, when the acoustic measurements alone provide weak or no information from which to infer the VTR values. The practical significance of correctly identifying the VTRs during consonantal closure or constriction is that they provide target frequency values for the VTR or formant transitions from adjacent sounds. Without such target values, the VTR transitions from vowel to consonant or from consonant to vowel are often very difficult to extract accurately with previous formant-tracking techniques. With the technique reported in this paper, the consonantal VTRs and the related transitions are identified more reliably from the speech signal.
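To illustrate the inference step summarized above (target-directed state dynamics plus a nonlinear VTR-to-cepstrum observation mapping, combined in an extended Kalman filter), the sketch below shows one minimal way such a filter could be written. It is not the paper's implementation: all names (`ekf_vtr_track`, `Phi`, `target`), the tiny stand-in network in place of the trained neural-network mapping, the finite-difference Jacobian, and every numeric value are assumptions made for illustration only.

```python
import numpy as np

# Hedged sketch: extended Kalman filtering of vocal tract resonances (VTRs)
# from cepstral observations. The state equation pulls the VTR vector toward
# a phone-dependent target ("target-directed" dynamics); the observation
# equation is a nonlinear mapping h(x) from VTRs to Mel-cepstra, represented
# here by a tiny stand-in network. All settings are illustrative only.

def make_toy_cepstral_map(n_vtr=4, n_cep=12, hidden=16, seed=0):
    """A stand-in for the trained VTR-to-cepstrum neural network."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.3, size=(hidden, n_vtr))
    W2 = rng.normal(scale=0.3, size=(n_cep, hidden))
    return lambda x: W2 @ np.tanh(W1 @ x)

def numerical_jacobian(h, x, eps=1e-4):
    """Finite-difference Jacobian of h at x, used to linearize h in the EKF."""
    y0 = h(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (h(x + dx) - y0) / eps
    return J

def ekf_vtr_track(y_seq, Phi, target, Q, R, h, x0, P0):
    """Recover a VTR trajectory from a sequence of cepstral frames y_seq."""
    x, P, I = x0.copy(), P0.copy(), np.eye(x0.size)
    track = []
    for y in y_seq:
        # Predict: target-directed dynamics x' = Phi x + (I - Phi) target
        x_pred = Phi @ x + (I - Phi) @ target
        P_pred = Phi @ P @ Phi.T + Q
        # Update: linearize the cepstral mapping around the prediction
        H = numerical_jacobian(h, x_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (y - h(x_pred))
        P = (I - K @ H) @ P_pred
        track.append(x.copy())
    return np.array(track)

# Toy usage: track 4 normalized VTR dimensions over 50 cepstral frames.
h = make_toy_cepstral_map()
Phi = 0.9 * np.eye(4)                    # per-frame pull toward the target
target = np.array([0.5, 1.5, 2.5, 3.5])  # hypothetical normalized VTR targets
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(12)
y_seq = [h(target) + 0.05 * np.random.randn(12) for _ in range(50)]
vtr_path = ekf_vtr_track(y_seq, Phi, target, Q, R, h,
                         x0=np.zeros(4), P0=np.eye(4))
```

The target-directed form of the prediction step is what lets the filter keep producing plausible VTR values during closures, when the observation update contributes little information: the state estimate simply relaxes toward the phone-dependent target.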
