Research on Microphone Array Speech Enhancement Methods in Complex Environments
Abstract
Speech enhancement is one of the principal research topics in signal processing and has wide application in modern communications, multimedia technology, human-computer interaction, and intelligent systems. Its main goal is to extract the speech information from a noisy speech signal and thereby obtain a high-quality speech signal. However, the presence of diverse noise sources and room reverberation degrades the quality of the signals received by the microphones, which impairs not only speech intelligibility but also the overall performance of speech processing systems. Effective noise suppression is therefore required to improve speech quality.
     Single-microphone speech enhancement methods usually suppress noise well, but their performance degrades sharply in complex acoustic environments. A microphone array fuses the spatial and temporal information of the speech signal and offers high spatial resolution and strong interference rejection, which makes it an important means of capturing the talker's speech and improving speech quality in intelligent communication systems such as video conferencing. In recent years, microphone-array-based speech enhancement has become a research focus of speech enhancement technology.
     Taking array processing and speech processing as the main signal processing tools and the video conferencing system as the application background, this thesis studies microphone array speech enhancement methods in depth.
     The main innovative results of this thesis are as follows:
     (1) A microphone array speech enhancement method that combines adaptive beamforming with post-filtering beamforming. Adaptive beamforming is generally suited to strongly coherent noise fields, whereas post-filtering beamforming is suited to incoherent noise fields. This thesis combines the two and presents a new beamforming speech enhancement method that cancels noise well in both coherent and incoherent noise fields and is therefore robust to the noise field.
     (2) Time delay estimation in reverberant environments. Beamforming-based speech enhancement requires delay compensation of the signals received by the microphones, yet most existing time delay estimation algorithms do not consider reverberation. This thesis therefore presents a time delay estimation method based on speech onset signals and generalized cross-correlation weighting. The method first uses an echo-avoidance (EA) reverberation model to extract the speech onset signals, then estimates and smooths the signal power spectrum from these onset signals, and finally estimates the delay with generalized cross-correlation weighting. The method estimates delays effectively in reverberant environments, and experimental results verify its effectiveness.
     (3) Cepstral-domain speech dereverberation. This thesis presents a microphone array speech enhancement method based on cepstral techniques. Exploiting the insensitivity of the human ear to the phase of speech, the method extracts the phase information from the noisy speech by an approximation, which reduces the computational load. Simulation results demonstrate its effectiveness.
     (4) Subspace methods for microphone array speech enhancement. Starting from the generalized singular value decomposition (GSVD) based microphone array speech enhancement method, and with the aim of reducing computation, this thesis proposes an improved GSVD-based microphone array method. It is a suboptimal filtering approach that requires no speech endpoint detection when the interfering noise is white, so its computational complexity is greatly reduced. The GSVD-based method is also applied to single-microphone speech enhancement and likewise achieves good results. Simulations show that the method suppresses white noise effectively, raises the signal-to-noise ratio markedly, and improves speech quality.
     (5) Microphone array speech enhancement based on a speech production model. This thesis extends the single-microphone time-varying AR model speech enhancement method to a microphone array and, by exploiting the spatial characteristics of the array, presents a speech enhancement method based on a speech production model. The method lends itself to parallel processing and achieves speech enhancement with fewer data samples and a lower AR model order. Simulation experiments verify its effectiveness.
Speech enhancement is one of the key technologies in fields such as the information highway, multimedia, office automation, modern communication, and intelligent systems. The main aim of speech enhancement is to extract the speech information from noisy speech signals so as to obtain high-quality speech. However, owing to the diversity of noise sources and to room reverberation, the quality of the speech received by the microphones is poor, which degrades both speech intelligibility and the performance of speech processing systems. Effective noise suppression is therefore necessary to improve the quality of the speech signals.
     Generally, single-microphone speech enhancement offers good noise suppression, but its performance declines rapidly in complex acoustic environments. Microphone array techniques combine the spatial and temporal information of the speech signals and provide flexible beam steering, higher spatial resolution, higher signal gain, and better interference rejection. Microphone arrays have therefore become an important means of capturing the talker's speech and improving speech quality in intelligent communication systems such as video conferencing. In recent years, microphone-array-based speech enhancement has become a research hotspot of speech processing.
     Taking microphone array processing and speech processing as the main signal processing tools and the video conference system as the application background, this thesis studies several microphone array speech enhancement methods. In view of the delay compensation these methods require, it also investigates time delay estimation in reverberant environments.
     The main research results of this thesis are as follows:
     (1) Research on a microphone array speech enhancement method combining adaptive beamforming and post-filtering beamforming. Considering the complementary advantages of adaptive beamforming and post-filter beamforming in different noise fields, this thesis combines the two methods into a new beamforming speech enhancement method. The proposed method cancels noise well in both coherent and incoherent noise fields, and is therefore robust to the type of noise field.
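     To make the post-filtering stage of such a scheme concrete, the sketch below applies a Zelinski-style Wiener post-filter to the output of a simple delay-and-sum beamformer in the STFT domain. It is a minimal illustration of the general technique, not the combined method of this thesis; the function name, the per-frame spectral estimates, and the assumptions of time-aligned channels and spatially uncorrelated noise are illustrative choices.

```python
import numpy as np

def wiener_postfilter_beamformer(X, eps=1e-10):
    """Toy post-filtered beamformer for one STFT frame.

    X : complex spectra of shape (n_mics, n_bins), channels already
        time-aligned toward the desired talker (n_mics >= 2).
    Returns the post-filtered beamformer output spectrum (n_bins,).
    """
    n_mics = X.shape[0]
    # Fixed (delay-and-sum) beamformer: average the aligned channels.
    Y = X.mean(axis=0)

    # Zelinski-style post-filter: real parts of pairwise cross-spectra
    # estimate the coherent speech power; auto-spectra estimate the
    # speech-plus-noise power, assuming spatially uncorrelated noise.
    cross = np.zeros(X.shape[1])
    n_pairs = 0
    for i in range(n_mics):
        for j in range(i + 1, n_mics):
            cross += np.real(X[i] * np.conj(X[j]))
            n_pairs += 1
    cross /= n_pairs
    auto = np.mean(np.abs(X) ** 2, axis=0)

    gain = np.clip(cross / (auto + eps), 0.0, 1.0)  # Wiener-type gain in [0, 1]
    return gain * Y
```

     In practice the auto- and cross-spectra are smoothed over frames, and the fixed beamformer is replaced or complemented by an adaptive stage; the single-frame version above only shows the structure of the post-filter.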
     (2) Research on time delay estimation methods in reverberant environments. Beamforming speech enhancement methods normally require the signals of the different channels to be compensated for their relative time delays, yet most existing time delay estimation algorithms do not take reverberation into account. This thesis therefore proposes a time delay estimation method based on speech onset signals and generalized cross-correlation weighting. The method first uses the echo-avoidance (EA) reverberation model to extract speech onset signals, then estimates and smooths the power spectrum from these onset signals, and finally estimates the time delay with a generalized cross-correlation weighting. The method estimates the time delay accurately under reverberation, and the experimental results show its validity.
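     The generalized cross-correlation step can be illustrated with the widely used PHAT weighting. The sketch below is a generic two-channel estimator; it does not include the EA-model onset extraction or the spectral smoothing described above, and the function name and parameters are illustrative.

```python
import numpy as np

def gcc_phat_delay(x1, x2, fs, max_tau=None):
    """Estimate the time difference of arrival between x1 and x2.

    Returns the estimated arrival-time difference t1 - t2 in seconds
    (negative when x1 arrives first). PHAT weighting whitens the
    cross-spectrum, which is known to help under reverberation.
    """
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12          # PHAT: keep only the phase
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```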
     (3) Research on cepstrum-based dereverberation methods. Speech dereverberation is also an important part of speech enhancement. This thesis proposes a cepstrum-based microphone array speech enhancement method. Because the human ear is relatively insensitive to the phase of speech, the method recovers the phase information from the noisy speech signals by an approximation. Compared with traditional cepstral speech enhancement methods, it has lower computational complexity and can be used in real video conference systems, where reverberation must be considered. Simulations show the validity of this method.
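     As a rough illustration of cepstral-domain processing with the noisy phase reused for reconstruction, the per-frame routine below smooths the log-magnitude spectrum by low-time liftering and resynthesizes the frame with the original phase. It only shows the mechanics of cepstral analysis and phase reuse, not the dereverberation algorithm of this thesis; the lifter length n_keep and the function name are illustrative.

```python
import numpy as np

def cepstral_lifter_frame(frame, n_keep=40):
    """Cepstrally smooth one windowed frame, reusing the noisy phase."""
    n = len(frame)
    spec = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spec) + 1e-12)
    cep = np.fft.irfft(log_mag, n=n)        # real cepstrum of the frame
    lifter = np.zeros(n)
    lifter[:n_keep] = 1.0                    # low-quefrency part ...
    lifter[-(n_keep - 1):] = 1.0             # ... kept symmetrically
    smoothed_log_mag = np.fft.rfft(cep * lifter).real
    enhanced = np.exp(smoothed_log_mag) * np.exp(1j * np.angle(spec))
    return np.fft.irfft(enhanced, n=n)
```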
     (4) Research on subspace methods for microphone array speech enhancement. In order to reduce the computational load, this thesis proposes an improved GSVD-based microphone array speech enhancement method. The method is a suboptimal filtering approach, and it requires no speech endpoint detection when the noise is white. Moreover, the thesis applies the GSVD-based microphone array method to single-microphone speech enhancement and obtains good enhancement results. Simulations show that the method suppresses the noise effectively and improves the signal-to-noise ratio.
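     A minimal single-channel sketch of the subspace idea under white noise is given below: the noisy frame is arranged into a Hankel data matrix, the SVD is truncated to a presumed signal-subspace rank, and the frame is reconstructed by averaging the overlapping estimates. This is far simpler than the GSVD-based multichannel filtering described above and only illustrates the white-noise subspace truncation; the order and rank parameters are illustrative.

```python
import numpy as np

def subspace_enhance_frame(frame, order=20, rank=8):
    """Toy SVD-truncation enhancement of one frame (white-noise case)."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    # Hankel-like data matrix: each row is a length-`order` snapshot.
    H = np.lib.stride_tricks.sliding_window_view(frame, order)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0                           # discard the noise subspace
    H_hat = (U * s) @ Vt
    # Average the overlapping snapshots back into a 1-D signal.
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(H_hat.shape[0]):
        out[i:i + order] += H_hat[i]
        counts[i:i + order] += 1
    return out / counts
```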
     (5) Research on microphone array speech enhancement methods based on speech production models. This thesis applies the single-microphone time-varying AR model speech enhancement method to a microphone array and, combining it with the spatial characteristics of the array, proposes a microphone array speech enhancement method based on speech production models. The method can be parallelized, uses fewer data points and lower AR model orders, and can realize speech enhancement in real time. Simulation experiments show the validity of the method.
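     The speech production model idea can be illustrated with a time-invariant, single-channel Kalman filter driven by a fixed AR(p) model of the speech. This is a simplified stand-in for the time-varying, multichannel method above: the AR coefficients and noise variances are assumed known here, whereas in practice they must be estimated and updated from the data.

```python
import numpy as np

def kalman_ar_enhance(y, a, q, r):
    """Kalman-filter enhancement with a known AR(p) speech model.

    y : noisy samples; a : AR coefficients [a1, ..., ap] with
    s[t] = a1*s[t-1] + ... + ap*s[t-p] + w[t]; q : variance of w;
    r : variance of the additive observation noise.
    """
    a = np.asarray(a, dtype=float)
    p = len(a)
    F = np.zeros((p, p))                     # companion-form transition
    F[0, :] = a
    F[1:, :-1] = np.eye(p - 1)
    Hm = np.zeros((1, p)); Hm[0, 0] = 1.0    # observe the current sample
    Q = np.zeros((p, p)); Q[0, 0] = q
    x, P = np.zeros((p, 1)), np.eye(p)
    out = np.zeros(len(y))
    for t, yt in enumerate(y):
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = Hm @ P @ Hm.T + r                # update
        K = P @ Hm.T / S
        x = x + K * (yt - Hm @ x)
        P = P - K @ Hm @ P
        out[t] = x[0, 0]                     # filtered clean-speech estimate
    return out
```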