Pitch-Scaled Spectrum Based Excitation Model for HMM-based Speech Synthesis
  • Authors: Zhengqi Wen (1)
    Jianhua Tao (1)
    Shifeng Pan (1)
    Yang Wang (1)
  • Keywords: Speech synthesis; HMM-based speech synthesis; Parametric representation of speech; Excitation model; Pitch-scaled spectrum
  • Journal: Journal of Signal Processing Systems
  • Publication date: March 2014
  • Volume: 74
  • Issue: 3
  • Pages: 423-435
  • Full-text size: 598 KB
  • Affiliations:
    1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  • ISSN:1939-8115
Abstract
The speech generated by hidden Markov model (HMM)-based speech synthesis systems (HTS) suffers from a 'buzzing' sound caused by an over-simplified vocoding technique. This paper proposes a new excitation model that uses a pitch-scaled spectrum for the parametric representation of speech in HTS. A residual signal produced by inverse filtering retains the detailed harmonic structure of speech that is not captured by the linear prediction (LP) spectrum. By using pitch-scaled spectra, we can compensate the LP spectrum with the detailed harmonic structure of the residual signal. This spectrum can be compressed into a periodic excitation parameter so that it can be used to train HTS. We define an aperiodic measure as the harmonics-to-noise ratio, and calculate a voicing cut-off frequency by fitting the aperiodic measure to a sigmoid function. We combine the LP coefficients, pitch-scaled spectrum, and sigmoid function to create a new parametric representation of speech. Listening tests were carried out to evaluate the effectiveness of the proposed technique. The vocoder received a mean opinion score of 4.0 in analysis-synthesis experiments before dimensionality reduction. By integrating this vocoder into HTS, we improved the quality of the synthesized speech over the pulse-train excitation model, and achieved an even better result than STRAIGHT-HTS.
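The analysis chain the abstract describes (LP inverse filtering of a voiced frame, then a DFT taken over an integer number of pitch periods so harmonics land exactly on known bins) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the filter coefficients, pitch values, and function names below are invented for the demo, and the pitch period is assumed known exactly.

```python
import numpy as np

def levinson(frame, order):
    """Estimate LP coefficients A(z) = 1 + a1*z^-1 + ... by the
    autocorrelation method (Levinson-Durbin recursion)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= 1.0 - k * k
    return a

def inverse_filter(x, a):
    """Residual e[t] = sum_k a[k] * x[t-k], i.e. FIR filtering by A(z)."""
    return np.convolve(x, a)[:len(x)]

def pitch_scaled_spectrum(x, t0, periods=4):
    """Magnitude DFT over exactly `periods` pitch periods: for a signal
    periodic with period t0, energy falls only on bins that are
    multiples of `periods`."""
    return np.abs(np.fft.rfft(x[:periods * t0]))

# --- demo on a synthetic voiced frame (values chosen for illustration) ---
fs, f0 = 8000, 100
t0 = fs // f0                       # 80-sample pitch period
exc = np.zeros(3200)
exc[::t0] = 1.0                     # impulse-train excitation
# simple 2-pole "vocal tract": x[t] = e[t] + 1.3*x[t-1] - 0.8*x[t-2]
x = np.zeros_like(exc)
for t in range(len(x)):
    x[t] = exc[t]
    if t >= 1:
        x[t] += 1.3 * x[t - 1]
    if t >= 2:
        x[t] -= 0.8 * x[t - 2]

a = levinson(x[t0:], order=2)       # LP analysis past the onset transient
res = inverse_filter(x, a)          # residual ~ the impulse train
spec = pitch_scaled_spectrum(res[t0:], t0, periods=4)
harmonic = spec[::4]                # harmonics sit on bins 0, 4, 8, ...
ratio = np.sum(harmonic**2) / np.sum(spec**2)
```

Because the window spans exactly four pitch periods, `ratio` is close to 1 for a fully voiced frame; in the paper's terms, the energy off the harmonic bins is the noise part, which is what a harmonics-to-noise-style aperiodic measure compares against the harmonic part.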
