Research on Several Key Problems in Blind Signal Separation
Abstract
Blind signal separation (BSS) has extremely wide applications in many scientific fields, especially speech separation and recognition, biomedical signal processing (e.g., EEG and ECG), and wireless communication systems. Indeed, owing to its important theoretical value and broad application prospects, BSS has become one of the hottest new techniques in signal processing. Through the sustained efforts of many researchers, BSS has been studied and developed in depth in many respects, and a large number of excellent separation algorithms have emerged. Nevertheless, several key theoretical and practical problems remain open. This doctoral dissertation addresses these problems and makes the following original contributions:
     1. The well-known minimum-range geometric BSS method is improved and generalized. Geometric algorithms provide a visual interpretation of blind separation and of the separation process. Among them, the minimum-range method has a relatively rigorous theoretical foundation and places no restriction on the number of observations, but its optimization algorithm suffers from limited reliability and efficiency. By exploiting some favorable properties of the convex hull, the improved algorithm proposed here is both more efficient and more reliable. In addition, the minimum-range criterion is extended to a maximum-range criterion, which broadens the range of applications of this class of geometric algorithms.
     2. The condition number of diagonalizers is studied in depth for the first time. Joint diagonalization is one of the most important tools for BSS, yet existing algorithms cannot fundamentally rule out ill-conditioned solutions. We analyze the root cause of ill-conditioned solutions and, for the first time, model joint diagonalization as a bi-objective optimization problem. On this basis, we design a joint diagonalization algorithm with well-conditioned solutions: the diagonalizer it returns minimizes the diagonalization error while keeping its condition number as small as possible, thereby completely avoiding trivial, unbalanced, and degenerate solutions. In fact, the algorithm yields Pareto-optimal solutions of the proposed model. Moreover, it imposes almost no requirements on the set of matrices to be diagonalized. Convergence and identifiability are briefly discussed. Finally, we develop a joint diagonalization algorithm applicable to online BSS, changing the status quo in which joint diagonalization suited only batch BSS algorithms; this algorithm can also serve as a stand-alone joint diagonalization method, albeit with mild restrictions on the matrix set.
     3. A complete separability theory is established for the temporal predictability / variance ratio method. Our results show that sources are separable in the variance-ratio sense if and only if they have distinct temporal structures (i.e., distinct lagged autocorrelations); thus the validity and scope of the variance ratio method are clarified. The theory also reveals the intrinsic connections between the variance ratio method, slow feature extraction, and second-order-statistics methods. On the algorithmic side, we introduce joint diagonalization into the variance ratio method and provide a performance index that does not depend on the source signals. Numerical simulations confirm the theoretical results. With these efforts, the usability and reliability of the variance ratio method are considerably enhanced.
     4. An algorithm is proposed for estimating the mixing matrix when the number of sources is unknown. Through a nonlinear projection, the maxima of the objective function correspond exactly to columns of the mixing matrix, so the columns are estimated one by one by locating a maximum and then masking that column; the algorithm therefore does not need to know the number of sources (i.e., the number of columns of the mixing matrix). Because the masking operation produces many useless local maxima, particle swarm optimization is introduced to optimize the objective function. Comparative simulations demonstrate the effectiveness of the algorithm, especially when the number of sources is unknown and the sources are not particularly sparse.
     5. A nonnegative matrix factorization (NMF) algorithm with a minimum-volume constraint is proposed. NMF is one of the most powerful tools for separating dependent sources, and the proposed algorithm achieves good results even under weak sparseness conditions. We theoretically analyze the intrinsic relations among NMF, sparseness, and minimum volume, justifying the soundness and correctness of the model, and propose two algorithms to optimize the objective function: the first is very efficient on relatively small data sets, while the second is better suited to large-scale problems. Simulations show that our algorithms can separate some highly correlated sources and can significantly improve NMF's ability to learn parts-based representations, so they have very broad applicability.
     In summary, this dissertation studies the theory and algorithms of the linear instantaneous BSS model; in particular, the variance ratio separation method and the widely used joint diagonalization problem are investigated in depth.
Blind source separation (BSS) has found wide applications in many scientific areas, especially speech source separation and recognition, biological signal processing, and wireless communication. In fact, due to its high theoretical value and promising practical applications, BSS has become one of the hottest topics in signal processing, and many substantial contributions have been made. However, some important theoretical and practical issues remain unsolved; they are the focus of this dissertation. The main contributions of this dissertation include:
     1. an improvement to the well-known minimum-range (MR) geometric BSS method (GBSS). GBSS provides a visual separation procedure and a geometric interpretation of separation. The MR method is theoretically reliable and places no restriction on the number of observations; however, its optimization algorithm has drawbacks in reliability and efficiency. By virtue of some favorable properties of the convex hull, an improved MR algorithm (iMR) is proposed in this thesis, which is shown to be more reliable and efficient. Moreover, a maximum-range criterion is also derived to deal with sparse signals, extending the range of applications of the MR method.
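To make the MR contrast concrete, the following is a minimal NumPy sketch, not the thesis's convex-hull algorithm: two bounded (uniform) sources are mixed by an arbitrary illustrative matrix, the observations are whitened, and a coarse grid search over the unit circle stands in for the actual optimizer. Minimizing the output range max(y) - min(y) of y = wᵀz should recover one source direction.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(2, 5000))   # bounded (uniform) sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # hypothetical mixing matrix
X = A @ S                                     # observed linear mixtures

# Whiten so that candidate extraction vectors can be restricted to the unit circle.
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# MR contrast: output range of y = w^T z, minimized over unit vectors w.
thetas = np.linspace(0.0, np.pi, 720, endpoint=False)
ranges = [np.ptp(np.cos(t) * Z[0] + np.sin(t) * Z[1]) for t in thetas]
t_best = thetas[int(np.argmin(ranges))]
y = np.cos(t_best) * Z[0] + np.sin(t_best) * Z[1]

# The minimizer should align with one source up to scale and sign.
best_corr = max(abs(np.corrcoef(y, s)[0, 1]) for s in S)
print(f"correlation with closest source: {best_corr:.4f}")
```

For sparse sources the same scan with `argmax` in place of `argmin` illustrates the maximum-range criterion mentioned above.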
     2. a first in-depth study of the condition number of diagonalizers. Joint diagonalization is one of the most powerful tools for solving BSS problems, yet existing algorithms cannot strictly avoid ill-conditioned solutions. To overcome this drawback, the approximate joint diagonalization problem is recast, for the first time, as a multi-objective optimization problem, and a new algorithm for nonorthogonal joint diagonalization is developed. The new algorithm yields diagonalizers whose condition numbers are as small as possible while the diagonalization error is minimized; degenerate, trivial, and unbalanced solutions are thus strictly avoided. Besides, the new algorithm imposes few restrictions on the target set of matrices to be diagonalized, which makes it widely applicable. Preliminary convergence results are presented, and we also show that, for exactly jointly diagonalizable sets, no local minima exist and the solutions are essentially unique under mild conditions. The practical value of the algorithm is demonstrated on BSS problems, especially those involving ill-conditioned mixing matrices. Finally, the guided joint diagonalization algorithm (GDiag) is proposed, the first joint diagonalization algorithm applicable to the online BSS scenario. GDiag is also a regular joint diagonalization algorithm in its own right, although it places some mild restrictions on the set of matrices to be diagonalized.
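The two competing objectives in the bi-objective model, off-diagonal error of WCₖWᵀ and the condition number of the diagonalizer W, can be written out in a few lines of NumPy. The sketch below uses a synthetic, exactly jointly diagonalizable set (A and the diagonal factors are illustrative), so the ideal diagonalizer W = A⁻¹ attains near-zero error while its condition number is what the proposed algorithm would additionally keep small:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))                       # unknown mixing matrix
Cs = [A @ np.diag(rng.uniform(0.5, 2.0, 4)) @ A.T     # exactly diagonalizable set
      for _ in range(6)]

def off_error(W, Cs):
    """Sum of squared off-diagonal entries of W C_k W^T over the set."""
    total = 0.0
    for C in Cs:
        M = W @ C @ W.T
        total += np.sum(M ** 2) - np.sum(np.diag(M) ** 2)
    return total

W = np.linalg.inv(A)            # the ideal diagonalizer for this synthetic set
err = off_error(W, Cs)
kappa = np.linalg.cond(W)       # kept small alongside the error in the bi-objective model
print(f"off-diagonal error: {err:.2e}, condition number: {kappa:.2f}")
```

A trivial solution W = 0 would also make the error vanish in the approximate setting, which is exactly why the condition-number objective is needed.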
     3. an in-depth separability analysis of the covariance rate method. Our results show that the sources are separable via the covariance rate method if and only if they have different temporal structures (i.e., autocorrelations); consequently, the applicability and limitations of the method are clarified. The relations between the covariance rate method, slow feature analysis, and second-order-statistics methods are also revealed. In addition, joint approximate diagonalization algorithms are introduced in place of generalized eigendecomposition to improve the robustness of the method, and a new criterion is presented to evaluate the separation results. Numerical simulations demonstrate the validity of the theoretical results. With these efforts, the reliability and validity of the covariance rate method are considerably enhanced.
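The baseline generalized-eigendecomposition form of the variance-ratio idea can be sketched as follows. Two sources with clearly different temporal structure are mixed; a "slow" covariance (of smoothed signals) is played against a "fast" covariance (of first differences), and the generalized eigenvectors of the pencil recover the sources. The signals, mixing matrix, and smoothing span are illustrative choices, not the thesis's setup:

```python
import numpy as np

t = np.arange(20000)
S = np.vstack([np.sin(2 * np.pi * 0.01 * t),            # slowly varying source
               np.sign(np.sin(2 * np.pi * 0.07 * t))])  # fast square wave
A = np.array([[1.0, 0.5], [0.7, 1.0]])
X = A @ S

def smooth(x, span=20):
    k = np.ones(span) / span
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, x)

C_slow = np.cov(smooth(X))            # "long-term" (predictable) variance
C_fast = np.cov(np.diff(X, axis=1))   # "short-term" variance
# Generalized eigenvectors of the pencil (C_slow, C_fast) maximize the ratio.
_, V = np.linalg.eig(np.linalg.solve(C_fast, C_slow))
Y = np.real(V).T @ X

# Each recovered row should match one source up to scale, sign, and order.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.round(3))
```

The separability result above says exactly when this works: if the two sources had identical lagged autocorrelations, the pencil would have repeated generalized eigenvalues and the eigenvectors would no longer single out the sources.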
     4. a new mixing matrix estimation technique for an unknown number of sources. Although many approaches have been proposed to estimate the mixing matrix, they typically require either the exact value or an upper bound of the source number (i.e., the number of columns of the mixing matrix) a priori. In this thesis, a new method called NPCM (nonlinear projection and column masking) is proposed to estimate the mixing matrix. In NPCM, the objective function is built on a nonlinear projection such that its maxima correspond exactly to the columns of the mixing matrix. The columns are then estimated and deflated sequentially by locating each maximum followed by a masking procedure; as a result, NPCM needs no prior information on the source number. Because the masking procedure may create many small, useless local maxima, particle swarm optimization (PSO) is introduced to optimize the objective function, and its feasibility and efficiency are discussed. Comparative experiments show that NPCM is competitive in mixing matrix estimation, especially when the number of sources is unknown and the sources are not very sparse.
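The projection-then-mask loop can be illustrated with a toy two-sensor, three-source example. Everything below is an assumption-laden stand-in: the Gaussian-kernel score is one plausible form of nonlinear projection, the column angles, sparsity level, sigma, and masking threshold are invented for the demo, and a grid search over the half-circle replaces PSO:

```python
import numpy as np

rng = np.random.default_rng(2)
n_src, n_samp = 3, 3000
# Sparse sources: each sample is nonzero with probability 0.1.
S = rng.standard_normal((n_src, n_samp)) * (rng.random((n_src, n_samp)) < 0.1)
# Unit-norm mixing columns at well-separated (illustrative) angles.
angles = np.deg2rad([20.0, 80.0, 140.0])
A = np.vstack([np.cos(angles), np.sin(angles)])
X = A @ S

def objective(w, X, sigma=0.05):
    """Nonlinear projection score: samples near the line spanned by w count most."""
    dist2 = np.sum((X - np.outer(w, w @ X)) ** 2, axis=0)
    return np.sum(np.exp(-dist2 / (2.0 * sigma ** 2)))

# Candidate unit vectors on a half-circle; this grid search stands in for PSO.
thetas = np.linspace(0.0, np.pi, 1800, endpoint=False)
W = np.vstack([np.cos(thetas), np.sin(thetas)])

est, Xm = [], X.copy()
for _ in range(n_src):
    scores = [objective(W[:, i], Xm) for i in range(W.shape[1])]
    w = W[:, int(np.argmax(scores))]
    est.append(w)
    # Column masking: drop samples near the line just found, then repeat.
    dist2 = np.sum((Xm - np.outer(w, w @ Xm)) ** 2, axis=0)
    Xm = Xm[:, dist2 >= 0.01]
est = np.stack(est, axis=1)

# Each true column should be matched, up to sign, by some estimated column.
match = np.max(np.abs(est.T @ A), axis=0)
print(match.round(3))
```

Note the loop itself never uses `n_src` to decide the objective; in the full method the deflation would continue until no significant maximum remains, which is what removes the need to know the source number.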
     5. a new nonnegative matrix factorization (NMF) method based on a minimum-volume constraint (MVC_NMF) to separate highly dependent sources. NMF is one of the most promising tools for separating statistically dependent sources, and the major advantage of our methods is that they give desirable results even under very weak sparseness. The close relations among NMF, sparse NMF, and MVC_NMF are theoretically analyzed, and the rationality of the new model is justified. Two algorithms are proposed to optimize the objective function: the first is quite efficient for relatively small-scale problems, while the second is better suited to larger ones. Simulations show that the new algorithms can not only separate some highly statistically dependent sources but also significantly improve NMF's ability to learn parts, and therefore have wide applications in related areas.
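The shape of a minimum-volume-constrained NMF objective can be sketched with projected gradient descent on synthetic data. This is not the thesis's two algorithms: the penalty weight, step size, and the choice of det(WᵀW) as a volume surrogate are illustrative assumptions; the point is only that the model trades reconstruction error against the volume spanned by W's columns:

```python
import numpy as np

rng = np.random.default_rng(3)
m, r, n = 8, 3, 200
X = rng.random((m, r)) @ rng.random((r, n))   # synthetic nonnegative data

lam, step = 1e-3, 1e-3                        # illustrative penalty weight / step size

def mvc_objective(W, H):
    fit = 0.5 * np.linalg.norm(X - W @ H) ** 2
    vol = np.linalg.det(W.T @ W)              # grows with the volume spanned by W's columns
    return fit + lam * vol

W, H = rng.random((m, r)), rng.random((r, n))
obj0 = mvc_objective(W, H)
for _ in range(200):
    # d det(W^T W)/dW = 2 det(W^T W) W (W^T W)^{-1}
    G = 2.0 * np.linalg.det(W.T @ W) * (W @ np.linalg.inv(W.T @ W))
    W = np.maximum(W - step * ((W @ H - X) @ H.T + lam * G), 1e-9)   # projected gradient
    H = np.maximum(H - step * (W.T @ (W @ H - X)), 1e-9)
obj1 = mvc_objective(W, H)
print(f"objective: {obj0:.2f} -> {obj1:.2f}")
```

Shrinking the volume of W pushes the columns of H toward the boundary of the feasible cone, which is why the penalty can substitute for sparseness when the sources are dependent and only weakly sparse.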
     In summary, this thesis focuses on theoretical analysis and algorithm development for the linear instantaneous mixing model of BSS. In particular, we carry out in-depth studies of the covariance rate method and of widely applicable joint diagonalization techniques.
引文
[1] Jutten C. and Herault J. Blind separation of sources,Part 1:An adaptive algorithm based on neuromimetic architecture[J]. Signal Processing, 1991, 24 (1): 1-10.
    [2] Hyvarinen A., Karhunen J., and Oja E. Independent Component Analysis[M]. New York: Wiley, 2001.
    [3] Cichocki A. and Amari S. Adaptive blind signal and image processing: learning algorithms and applications[M]. John Wiley & Sons, Ltd, 2002.
    [4] Haykin S. Unsupervised Adaptive Filtering , Vol I: Blind Source Separation[M]. New York: Wiley, 2000.
    [5] Common P. Independent component analysis, A new concept?[J]. Signal Processing, Elsevier, 1994, 36 (3): 287-314.
    [6] Bofill P. and Zibulevsky M. Underdetermined blind source separation using sparse representations[J]. Signal Processing, 2001, 81 (11): 2353-2362.
    [7] Zibulevsky M. and Pearlmutter B.A. Blind source separation by sparse decomposition in a signal dictionary[J]. Neural Computation, 2001, 13 (4): 863-882.
    [8] Yilmaz O. and Rickard S. Blind separation of speech mixtures via time-frequency masking[J]. IEEE Transactions on Signal Processing, 2004, 52 (7): 1830-1847.
    [9] Abdeldjalil A.E.B., Nguyen L.T., Karim A.M., Adel B., and Yves G. Underdetermined blind separation of nondisjoint sources in the time-frequency domain[J]. IEEE Transactions on Signal Processing, 2007, 55 (3): 897-907.
    [10] Saab R., Yilmaz O., McKeown M.J., and Abugharbieh R. Underdetermined anechoic blind source separation via l(q)-basis-pursuit with q < 1[J]. IEEE Transactions on Signal Processing, 2007, 55 (8): 4004-4017.
    [11] Cherry E.C. Some experiments on the recognition of speech, with one and two ears[J]. Journal of the Acoustical Society of America, 1953, 25: 975–979.
    [12] Haykin S. and Chen Z. The cocktail party problem[J]. Neural Computation, 2005, 17 (9): 1875-1902.
    [13] Bell A. and Sejnowski T. An information-maximization approach to blind separation and blind deconvolution[J]. Neural Computation, 1995, 7 (6): 1129-1159.
    [14] Amari S.I., Cichocki A., and Yang H., "A new learning algorithm for blind signal separation of sources," in Advances in neural information processing, vol. 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds. Cambridge, Massachusetts: MIT Press, 1996, 757-763.
    [15] Cardoso J.F. and Laheld B.H. Equivariant adaptive source separation[J]. IEEE Transactions on Signal Processing, 1996, 44 (12): 3017-3030.
    [16] Belouchrani A., AbedMeraim K., Cardoso J.F., and Moulines E. A blind source separation technique using second-order statistics[J]. IEEE Transactions on Signal Processing, 1997, 45 (2): 434-444.
    [17] Cardoso J.F. Infomax and maximum likelihood for blind source separation[J]. IEEE Signal Processing Letters, 1997, 4 (4): 112-114.
    [18] Hyvarinen A. and Oja E. A fast fixed-point algorithm for independent component analysis[J]. Neural Computation, 1997, 9 (7): 1483-1492.
    [19] Cardoso J.F. Blind signal separation: Statistical principles[J]. Proceedings of the IEEE, 1998, 86 (10): 2009-2025.
    [20] Hyvarinen A. Fast and robust fixed-point algorithms for independent component analysis[J]. IEEE Transactions on Neural Networks, 1999, 10 (3): 626-634.
    [21] Hyvarinen A., "The FastICA Matlab package. V2.5," 2005.
    [22] Hyvarinen A. The fixed-point algorithm and maximum likelihood estimation for independent component analysis[J]. Neural Processing Letters, 1999, 10 (1): 1-5.
    [23] Oja E. The nonlinear PCA learning rule in independent component analysis[J]. Neurocomputing, 1997, 17 (1): 25-45.
    [24] Zhu X.L., Zhang X.D., Ding Z.Z., and Jia Y. Adaptive nonlinear PCA algorithms for blind source separation without prewhitening[J]. IEEE Transactions on Circuits and Systems I-Regular Papers, 2006, 53 (3): 745-753.
    [25] Zhu X.L., Zhang X.D., and Su Y.T. A fast NPCA algorithm for online blind source separation[J]. Neurocomputing, 2006, 69 (7-9): 964-968.
    [26] Amari S. Natural gradient works efficiently in learning[J]. Neural Computation, 1998, 10 (2): 251-276.
    [27] Abrudan T.E., Eriksson J., and Koivunen V. Steepest descent algorithms for optimization under unitary matrix constraint[J]. IEEE Transactions on Signal Processing, 2008, 56 (3): 1134-1147.
    [28] S. Squartini, F. Piazza, and A. Shawker. New Riemannian metrics for improvement of convergence speed in ICA based learning algorithms[J]. ISCAS 2005. IEEE International Symposium on Circuits and Systems, 2005, 4: 3603-3606.
    [29] Squartini S., Arcangeli A., and Piazza F. Stability analysis of natural gradient learning rules in complete ICA: A unifying perspective[J]. IEEE Signal Processing Letters,2007, 14 (1): 54-57.
    [30] Liu Z.Y., Chiu K.C., and Xu L. One-bit-matching conjecture for independent component analysis[J]. Neural Computation, 2004, 16 (2): 383-399.
    [31] Ma J.W., Liu Z.Y., and Xu L. A further result on the ICA one-bit-matching conjecture[J]. Neural Computation, 2005, 17 (2): 331-334.
    [32] Xu L. One-bit-matching theorem for ICA, convex-concave programming on polyhedral set, and distribution approximation for combinatorics[J]. Neural Computation, 2007, 19 (2): 546-569.
    [33] Tong L., Liu R.W., Soon V.C., and Huang Y.F. Indeterminacy and identifiability of blind identification[J]. IEEE Transactions on Circuits and Systems, 1991, 38 (5): 499-509.
    [34] Stone J.V. Blind source separation using temporal predictability[J]. Neural Computation, 2001, 13 (7): 1559-1574.
    [35] Stone J.V. Blind deconvolution using temporal predictability[J]. Neurocomputing, 2002, 49 (1): 79-86.
    [36] Xie S.L., He Z.S., and Fu Y.L. A note on Stone's conjecture of blind signal separation[J]. Neural Computation, 2005, 17 (2): 321-330.
    [37] Cardoso J.F. and Souloumiac A. Jacobi Angles for Simultaneous Diagonalization[J]. Siam Journal on Matrix Analysis and Applications, 1996, 17 (1): 161-164.
    [38] Yeredor A. Blind source separation via the second characteristic function[J]. Signal Processing, 2000, 80 (5): 897-902.
    [39] Pham D.T. and Cardoso J.F. Blind separation of instantaneous mixtures of nonstationary sources[J]. IEEE Transactions on Signal Processing, 2001, 49 (9): 1837-1848.
    [40] Ziehe A., Laskov P., Nolte G., and Muller K.R. A fast algorithm for joint diagonalization with non-orthogonal transformations and its application to blind source separation[J]. Journal of Machine Learning Research, 2004, 5: 777-800.
    [41] Vollgraf R. and Obermayer K. Quadratic optimization for simultaneous matrix diagonalization[J]. IEEE Transactions on Signal Processing, 2006, 54 (9): 3270-3278.
    [42] De Lathauwer L. and Castaing J. Blind Identification of Underdetermined Mixtures by Simultaneous Matrix Diagonalization[J]. Ieee Transactions on Signal Processing, 2008, 56 (3): 1096 - 1105
    [43] Fevotte C., Godsill S.J., and Wolfe P.J. Bayesian approach for blind separation of underdetermined mixtures of sparse sources[J]. Independent Component Analysis andBlind Signal Separation, 2004, 3195: 398-405.
    [44] Lv Q. and Zhang X.D. A unified method for blind separation of sparse sources with unknown source number[J]. IEEE Signal Processing Letters, 2006, 13 (1): 49-51.
    [45] Frigui H. and Krishnapuram R. A robust algorithm for automatic extraction of an unknown number of clusters from noisy data[J]. Pattern Recognit. Lett., 1996, 17: 1223-1232.
    [46] O'Grady P.D. and Pearlmutter B.A., "Soft-LOST: EM on a mixture of oriented lines," in Fifth Int Conf on Independent Component Analysis, vol. 3195. Granada, Spain: Springer-Verlag, 2004, 430-436.
    [47] O'Grady P.D. and Pearlmutter B.A. The LOST Algorithm: Finding Lines and Separating Speech Mixtures[J]. EURASIP Journal on Advances in Signal Processing, 2008.
    [48] O'Grady P.D., Pearlmutter B.A., and Rickard S.T. Survey of sparse and non-sparse methods in source separation[J]. International Journal of Imaging Systems and Technology, 2005, 15 (1): 18-33.
    [49] Donoho D.L. and Elad M. Optimally sparse representation in general (nonorthogonal) dictionaries via l(1) minimization[J]. Proceedings of the National Academy of Sciences of the United States of America, 2003, 100 (5): 2197-2202.
    [50] Li Y.Q., Cichocki A., and Amari S. Analysis of sparse representation and blind source separation[J]. Neural Computation, 2004, 16 (6): 1193-1234.
    [51] Li Y.Q., Amari S.I., Cichocki A., and Guan C.T. Probability estimation for recoverability analysis of blind source separation based on sparse representation[J]. IEEE Transactions on Information Theory, 2006, 52 (7): 3139-3152.
    [52] Seung-Jean K., Koh K., Lustig M., Boyd S., and Gorinevsky D. An Interior-Point Method for Large-Scale l1-Regularized Least Squares[J]. IEEE Journal of Selected Topics in Signal Processing, 2007, 1 (4): 606-617.
    [53] Rao B.D., Engan K., Cotter S.R., Pahner J., and Kreutz-Delgado K. Subset selection in noise based on diversity measure minimization[J]. IEEE Transactions on Signal Processing, 2003, 51 (3): 760-770.
    [54] Gorodnitsky I.F. and Rao B.D. Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm[J]. IEEE Transactions on Signal Processing, 1997, 45 (3): 600-616.
    [55] Zdunek R. and Cichocki A. Improved M-FOCUSS Algorithm With OverlappingBlocks for Locally Smooth Sparse Signals[J]. IEEE Transactions on Signal Processing, 2008, 56 (10): 4752-4761.
    [56] He Z.S., Cichocki A., Zdunek R., and Xie S.L. Improved FOCUSS Method With Conjugate Gradient Iterations[J]. IEEE Transactions on Signal Processing, 2009, 57 (1): 399-404.
    [57] Mohimani H., Babaie-Zadeh M., and Jutten C. A Fast Approach for Overcomplete Sparse. Decomposition Based on Smoothed l(0) Norm[J]. IEEE Transactions on Signal Processing, 2009, 57 (1): 289-301.
    [58] Donoho D.L. Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52 (4): 1289-1306.
    [59] Tsaig Y. and Donoho D.L. Extensions of compressed sensing[J]. Signal Processing, 2006, 86 (3): 549-571.
    [60] Haupt J., Bajwa W.U., Rabbat M., and Nowak R. Compressed sensing for networked data[J]. IEEE Signal Processing Magazine, 2008, 25 (2): 92-101.
    [61] Cohen A., Dahmen W., and Devore R. Compressed sensing and best k-term approximation[J]. Journal of the American Mathematical Society, 2009, 22 (1): 211-231.
    [62] Jung H., Sung K., Nayak K.S., Kim E.Y., and Ye J.C. k-t FOCUSS: A General Compressed Sensing Framework for High Resolution Dynamic MRI[J]. Magnetic Resonance in Medicine, 2009, 61 (1): 103-116.
    [63] Mishali M. and Eldar Y.C. Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals[J]. IEEE Transactions on Signal Processing, 2009, 57 (3): 993-1009.
    [64] Murata N., Ikeda S., and Ziehe A. An approach to blind source separation based on temporal structure of speech signals[J]. Neurocomputing, 2001, 41: 1-24.
    [65] Parra L.C. and Alvino C.V. Geometric source separation: Merging convolutive source separation with geometric beamforming[J]. IEEE Transactions on Speech and Audio Processing, 2002, 10 (6): 352-362.
    [66] Sawada H., Mukai R., Araki S., and Makino S. A robust and precise method for solving the permutation problem of frequency-domain blind source separation[J]. IEEE Transactions on Speech and Audio Processing, 2004, 12 (5): 530-538.
    [67] Wang W.W., Chambers J.A., and Sanei S. A novel hybrid approach to the permutation problem of frequency domain blind source separation[J]. Independent Component Analysis and Blind Signal Separation, 2004, 3195: 532-539.
    [68] Ikram M.Z. and Morgan D.R. Permutation inconsistency in blind speech separation: Investigation and solutions[J]. IEEE Transactions on Speech and Audio Processing, 2005, 13 (1): 1-13.
    [69] Di Persia L., Milone D., and Yanagida M. Indeterminacy Free Frequency-Domain Blind Separation of Reverberant Audio Sources[J]. IEEE Transactions on Audio Speech and Language Processing, 2009, 17 (2): 299-311.
    [70] Vrins F., Lee J.A., and Verleysen M. A minimum-range approach to blind extraction of bounded sources[J]. IEEE Transactions on Neural Networks, 2007, 18 (3): 809-822.
    [71] Puntonet C.G. and Prieto A. Geometric approach for blind separation of signals[J]. Electronics Letters, 1997, 33 (10): 835-836.
    [72] Yamaguchi T., Hirokawa K., and Itoh K. Independent component analysis by transforming a scatter diagram of mixtures of signals[J]. Optics Communications, 2000, 173 (1-6): 107-114.
    [73] Mansour A., Ohnishi N., and Puntonet C.G. Blind multiuser separation of instantaneous mixture algorithm based on geometrical concepts[J]. Signal Processing, 2002, 82 (8): 1155-1175.
    [74] Theis F.J., Jung A., Puntonet C.G., and Lang E.W. Linear geometric ICA: Fundamentals and algorithms[J]. Neural Computation, 2003, 15 (2): 419-439.
    [75] Vrins F., Jutten C., and Verleysen M., "SWM: a class of convex contrasts for source separation," in Proc. IEEE Int. Conf. Acoust. Speech Signal Process(ICASSP), . Philadelphia, 2005, V.161–V.164.
    [76] Erdogan A.T. A simple geometric blind source separation method for bounded magnitude sources[J]. IEEE Transactions on Signal Processing, 2006, 54 (2): 438-449.
    [77] Erdogan A.T. Globally convergent deflationary instantaneous blind source separation algorithm for digital communication signals[J]. IEEE Transactions on Signal Processing, 2007, 55 (5): 2182-2192.
    [78]何昭水,谢胜利,章晋龙.基于QR分解的盲源分离几何算法[J].控制理论与应用, 2005(01).
    [79]章晋龙,何昭水,谢胜利等.多个源信号混叠的盲分离几何算法[J].计算机学报, 2005(09).
    [80] Pirzadeh H. Computational Geometry with the Rotating Calipers[D]. Montreal: 1999
    [81] Amari S., Cichocki A., and Yang H.H. A new learning algorithm for blind signal separation. presented at Advances in Neural Information Processing Systems, 1996.
    [82] Cichocki A., Amari S., Siwek K., and Tanaka T., "ICALAB toolboxes," March 28, 2007.
    [83] Moreau E. A generalization of joint-diagonalization criteria for source separation[J]. IEEE Transactions on Signal Processing, 2001, 49 (3): 530-541.
    [84] John M. and Rahbar K. joint diagonalization of correlation matrices by using newton methods with application to blind signal separation[J]. IEEE, Sensor Array and Multichannel Signal Processing Workshop Proceedings, 2002: 403-407.
    [85] Wang F.X., Liu Z.K., and Zhang J. Nonorthogonal joint diagonalization algorithm based on trigonometric parameterization[J]. IEEE Transactions on Signal Processing, 2007, 55 (11): 5299-5308.
    [86] Chabriel G., Barrere J., Thirion-Moreau N., and Moreau E. Algebraic Joint Zero-Diagonalization and Blind Sources Separation[J]. IEEE Trans Signal Processing, 2008, 56 (3): 980 - 989
    [87] Wang F.X., Liu Z.K., and Zhang J. A new joint diagonalization algorithm with application in blind source separation[J]. IEEE Signal Processing Letters, 2006, 13 (1): 41-44.
    [88] Chen B.N. and Petropulu A.P. Frequency domain blind MIMO system identification based on second- and higher order statistics[J]. IEEE Transactions on Signal Processing, 2001, 49 (8): 1677-1688.
    [89] Sheinvald J. On blind beamforming for multiple non-Gaussian signals and the constant-modulus algorithm[J]. IEEE Transactions on Signal Processing, 1998, 46 (7): 1878-1885.
    [90] Cardoso J.F. and Souloumiac A. Blind beamforming for non-Gaussian signals[J]. Radar and Signal Processing, IEE Proceedings F, 1993, 140 (6): 362-370.
    [91] Moreau E. and Comon P. Comments on blind beamforming for multiple non-gaussian signals and the constant-modulus algorithm[J]. IEEE Transactions on Signal Processing, 2000, 48 (11): 3248-3250.
    [92] Fadaili E., Moreau N.T., and Moreau E. Nonorthogonal joint diagonalization/zero diagonalization for source separation based on time-frequency distributions[J]. IEEE Transactions on Signal Processing, 2007, 55 (5): 1673-1687.
    [93] Degerine S. and Kane E. A Comparative Study of Approximate Joint Diagonalization Algorithms for Blind Source Separation in Presence of Additive Noise[J]. Signal Processing, IEEE Transactions on, 2007, 55 (6, Part 2): 3022-3031.
    [94] Cardoso J.F. On the performance of orthogonal source separation algorithms.presented at Signal Process. VII, Proc. Europ. Assoc. Signal Process, Edinburgh, U.K., 1994.
    [95] Yeredor A. Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation[J]. IEEE Transactions on Signal Processing, 2002, 50 (7): 1545-1553.
    [96] Li X.L. and Zhang X.D. Nonorthogonal Joint Diagonalization Free of Degenerate Solution[J]. IEEE Transactions on Signal Processing, 2007, 55 (5): 1803-1814.
    [97] Zhou G.X., Yang Z.Y., Wu Z.Z., and Zhang J.L. Non-orthogonal joint diagonalization with diagonal constraints[J]. Progress in Natural Science, 2008, 18 (6): 735-739.
    [98] Cichocki A., Amari S., Siwek K., and Tanaka T., ICALABA toolboxes. (March 28, 2007.). [OnLine]. Available: http://www.bsp.brain.riken.jp/ICALAB.
    [99] Ziehe A., Laskov P., Nolte G., and Muller K.R., "The FFDiag Matlab package," 2004.
    [100] Tiknonov A.N. and Arsenin V.Y. Solutions of Ill-posed Problems[M]. Washington, DC,: Wiley, 1977.
    [101] Luenberger D.G. Linear and Nonlinear Programming [M]. 2nd ed: Kluwer Academic Pub, 2003: 491.
    [102] Stadler W. A survey of multicriteria optimization or the vector maximization problem: Part I: 1776-1960[J]. Journal of Optimization Theory and Applications, 1979, 29 (1): 1-52.
    [103] Stadler W., A Comprehensive Bibliography on Multicriteria Decision Making and Related Areas[R]. University of California-Berkeley: University of California-Berkeley, 1981.
    [104] Girolami M. Self-Organizing Neural Networks: Independent Component Analysis and Blind Source Separation[M]. London, U.K.: Springer-Verlag, 1999.
    [105] Pease M.C. Methods of Matrix Algebra[M]. New York: Academic, 1965.
    [106] Stone J.V. Independent component analysis: an introduction[J]. Trends in Cognitive Sciences, 2002, 6 (2): 59-64.
    [107] Molgedey L. and Schuster H.G. Separation of a mixture of independent signals using time-delayed correlations[J]. Physical Review Letters, 1994, 72 (23): 3634-3637.
    [108] Ziehe A. and Muller K.R. TDSEP--an efficient algorithm for blind separation using time structure. presented at the 8th International Conference on Artificial Neural Networks (ICANN’98), Sk?vde, Sweden, 1998.
    [109] Nuzillard D. and Nuzillard J.M. Second-order blind source separation in the Fourierspace of data[J]. Signal Processing, 2003, 83 (3): 627-631.
    [110] Blaschke T., Berkes P., and Wiskott L. What is the relation between slow feature analysis and independent component analysis?[J]. Neural Computation, 2006, 18 (10): 2495-2508.
    [111] Hundley D.R., Kirby M.J., and Anderle M. Blind source separation using the maximum signal fraction approach[J]. Signal Processing, 2002, 82 (10): 1505-1508.
    [112] Stone J.V., Porrill J., Porter N.R., and Wilkinson I.D. Spatiotemporal independent component analysis of event-related fMRI data using skewed probability density functions[J]. Neuroimage, 2002, 15 (2): 407-421.
    [113] Iriarte J., Urrestarazu E., Valencia M., Alegre M., Malanda A., Viteri C., and Artieda J. Independent component analysis as a tool to eliminate artifacts in EEG: A quantitative study[J]. Journal of Clinical Neurophysiology, 2003, 20 (4): 249-257.
    [114] Urrestarazu E., Iriarte J., Artieda J., Alegre M., Valencia M., and Viteri C. Independent component analysis separate spikes of different origin in the EEG. Montreal, CANADA, 2004.
    [115] Cichocki A. Blind source separation: New tools for extraction of source signals and denoising. Orlando, FL, 2005.
    [116] Szupiluk R., Wojewnik P., Zabkowski T., and Ieee. Blind signal separation methods for integration of neural networks results. Florence, ITALY, 2006.
    [117] Ye M. and Li X. An efficient measure of signal temporal predictability for blind source separation[J]. Neural Processing Letters, 2007, 26 (1): 57-68.
    [118] Liu H.L. and Cheung Y.M. On blind source separation using generalized eigenvalues with a new metric[J]. Neurocomputing, 2008, 71 (4-6): 973-982.
    [119] Wiskott L. and Sejnowski T.J. Slow feature analysis: Unsupervised learning of invariances[J]. Neural Computation, 2002, 14 (4): 715-770.
    [120] Blaschke T. and Wiskott L. Independent slow feature analysis and nonlinear blind source separation[J]. Independent Component Analysis and Blind Signal Separation, 2004, 3195: 742-749.
    [121] Georgiev F., Theis F., and Cichocki A. Blind source separation and sparse component analysis of overcomplete mixtures. presented at IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04). 2004.
    [122] Georgiev P. and Cichoki A. Sparse component analysis of overcomplete mixtures by improved basis pursuit method. presented at Circuits and Systems, 2004. ISCAS '04., 2004.
    [123] Zhong M.J., Tang H.W., Chen H.J., and Tang Y.Y. An EM algorithm for learning sparse and overcomplete representations[J]. Neurocomputing, 2004, 57: 469-476.
    [124] Georgiev P., Theis F., and Cichocki A. Sparse component analysis and blind source separation of underdetermined mixtures[J]. IEEE Transactions on Neural Networks, 2005, 16 (4): 992-996.
    [125] Shi Z.W., Tang H.W., and Tang Y.Y. Blind source separation of more sources than mixtures using sparse mixture models[J]. Pattern Recognition Letters, 2005, 26 (16): 2491-2499.
    [126] Desobry F. and Fevotte C. Kernel PCA based estimation of the mixing matrix in linear instantaneous mixtures of sparse sources. presented at ICASSP 2006 Proceedings, IEEE International Conference on Acoustics, Speech and Signal Processing, 2006.
    [127] Fevotte C. and Godsill S.J. A Bayesian approach for blind separation of sparse sources[J]. IEEE Transactions on Audio Speech and Language Processing, 2006, 14 (6): 2174-2188.
    [128] Washizawa Y. and Cichocki A. On-Line K-PLANE Clustering Learning Algorithm for Sparse Comopnent Analysis. presented at IEEE International Conference on Acoustics, Speech and Signal Processing. ICASSP 2006., 2006.
    [129] Zhang W., Liu J., Sun J., and Bai S.Z. A new two-stage approach to underdetermined blind source separation using sparse representation[J]. 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol III, Pts 1-3, Proceedings, 2007: 953-956.
    [130] Aissa-El-Bey A., Linh-Trung N., Abed-Meraim K., Belouchrani A., and Grenier Y. Underdetermined blind separation of nondisjoint sources in the time-frequency domain[J]. IEEE Transactions on Signal Processing, 2007, 55 (3): 897-907.
    [131] Araki S., Sawada H., Mukai R., and Makino S. Underdetermined blind sparse source separation for arbitrarily arranged multiple sensors[J]. Signal Processing, 2007, 87 (8): 1833-1847.
    [132] Cichocki A., Karhunen J., Kasprzak W., and Vigario R. Neural networks for blind separation with unknown number of sources[J]. Neurocomputing, 1999, 24 (1-3): 55-93.
    [133] O'Grady P.D. and Pearlmutter B.A. The LOST Algorithm: Finding Lines and Separating Speech Mixtures[J]. EURASIP Journal on Advances in Signal Processing, 2008.
    [134] Xu L. Least mean square error reconstruction principle for self-organizing neural-nets[J]. Neural Networks, 1993, 6 (5): 627-648.
    [135] Karhunen J., Pajunen P., and Oja E. The nonlinear PCA criterion in blind source separation: Relations with other approaches[J]. Neurocomputing, 1998, 22 (1-3): 5-20.
    [136] Kennedy J. and Eberhart R.C. Particle swarm optimization[J]. in Proc. IEEE Int. Conf. Neural Networks, 1995: 1942-1948.
    [137] Jiang M., Luo Y.P., and Yang S.Y. Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm[J]. Information Processing Letters, 2007, 102 (1): 8-16.
    [138] Kadirkamanathan V., Selvarajah K., and Fleming P.J. Stability analysis of the particle dynamics in particle swarm optimizer[J]. IEEE Transactions on Evolutionary Computation, 2006, 10 (3): 245-255.
    [139] Liang J.J., Qin A.K., Suganthan P.N., and Baskar S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions[J]. IEEE Transactions on Evolutionary Computation, 2006, 10 (3): 281-295.
    [140] Trelea I.C. The particle swarm optimization algorithm: convergence analysis and parameter selection[J]. Information Processing Letters, 2003, 85 (6): 317-325.
    [141] Merkle D. and Middendorf M. Swarm Intelligence and Signal Processing[J]. IEEE Signal Processing Magazine, 2008, 25 (6): 152-158.
    [142] van den Bergh F. and Engelbrecht A.P. A cooperative approach to particle swarm optimization[J]. IEEE Transactions on Evolutionary Computation, 2004, 8 (3): 225-239.
    [143] Theis F.J., Lang E.W., and Puntonet C.G. A geometric algorithm for overcomplete linear ICA[J]. Neurocomputing, 2004, 56: 381-398.
    [144] geoICA, Available at: http://www.biologie.uni-regensburg.de/Biophysik/Theis/research/geoICA.zip.
    [145] Lee D.D. and Seung H.S. Learning the parts of objects by non-negative matrix factorization[J]. Nature, 1999, 401 (6755): 788-791.
    [146] Hoyer P.O. Non-negative matrix factorization with sparseness constraints[J]. Journal of Machine Learning Research, 2004, 5: 1457-1469.
    [147] Cichocki A., Lee H., Kim Y.D., and Choi S. Non-negative matrix factorization with alpha-divergence[J]. Pattern Recognition Letters, 2008, 29 (9): 1433-1440.
    [148] Kotsia I., Zafeiriou S., and Pitas I. A novel discriminant non-negative matrix factorization algorithm with applications to facial image characterization problems[J]. IEEE Transactions on Information Forensics and Security, 2007, 2 (3): 588-595.
    [149] Donoho D. and Stodden V. When does non-negative matrix factorization give a correct decomposition into parts? in Advances in Neural Information Processing Systems 16 (Proc. NIPS 2003), 2004.
    [150] Miao L. and Qi H. Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization[J]. IEEE Trans. Geosci. Remote Sens., 2007, 45 (3): 765-777.
    [151] Cichocki A., Zdunek R., and Amari S. New algorithm for non-negative matrix factorization in applications to blind source separation. in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process., 2006, 5: 621-624.
    [152] Pauca V.P., Piper J., and Plemmons R.J. Nonnegative matrix factorization for spectral data analysis[J]. Lin. Alg. Appl., 2006, 416: 29-47.
    [153] Laurberg H., Christensen M.G., Plumbley M.D., Hansen L.K., and Jensen S.H. Theorems on Positive Data: On the Uniqueness of NMF[J]. Computational Intelligence and Neuroscience, 2008.
    [154] Laurberg H. Uniqueness of Non-Negative Matrix Factorization. in Proc. IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), 2007.
    [155] Cichocki A., Phan A., Zdunek R., and Zhang L.-Q. Flexible Component Analysis for Sparse, Smooth, Nonnegative Coding or Representation[M]. in Neural Information Processing, 2008: 811-820.
    [156] Heiler M. and Schnorr C. Learning sparse representations by non-negative matrix factorization and sequential cone programming[J]. Journal of Machine Learning Research, 2006, 7: 1385-1407.
    [157] Stadlthanner K., Lutter D., Theis F.J., Lang E.W., Tome A.M., Georgieva P., and Puntonet C.G. Sparse nonnegative matrix factorization with genetic algorithms for microarray analysis. in Proc. Int. Joint Conf. Neural Networks (IJCNN 2007), 2007.
    [158] Eggert J. and Körner E. Sparse coding and NMF. in Proc. IEEE Int. Joint Conf. Neural Networks, 2004.
    [159] Theis F.J., Stadlthanner K., and Tanaka T. First results on uniqueness of sparse non-negative matrix factorization. in Proc. European Signal Processing Conference (EUSIPCO), 2005.
    [160] Stadlthanner K., Lutter D., Theis F.J., Lang E.W., Tome A.M., Georgieva P., and Puntonet C.G. Sparse nonnegative matrix factorization with genetic algorithms for microarray analysis. in Proc. Int. Joint Conf. Neural Netw., 2007: 294-299.
    [161] Laurberg H. and Hansen L.K. On affine non-negative matrix factorization. in Proc. IEEE Int. Conf. Acoustics, Speech and Sig. Process., 2007, 2: 653-656.
    [162] Weixiang L., Nanning Z., and Xiaofeng L. Non-negative matrix factorization for visual coding. in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, 2003.
    [163] Craig M.D. Minimum-volume transforms for remotely sensed data[J]. IEEE Transactions on Geoscience and Remote Sensing, 1994, 32 (3): 542-552.
    [164] Nascimento J.M.P. and Dias J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2005, 43 (4): 898-910.
    [165] Wang F.-Y., Chi C.-Y., Chan T.-H., and Wang Y. Non-Negative Least-Correlated Component Analysis for Separation of Dependent Sources by Volume Maximization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
    [166] Lee D.D. and Seung H.S. Algorithms for nonnegative matrix factorization[J]. Adv. Neural Inf. Process. Syst., 2001, 13: 556-562.
    [167] Lee D.D. and Seung H.S. Learning the parts of objects by nonnegative matrix factorization[J]. Nature, 1999, 401: 788-791.
    [168] Cichocki A., Zdunek R., and Amari S. Nonnegative matrix and tensor factorization[J]. IEEE Sig. Process. Magazine, 2008, 25 (1): 142-145.
    [169] Hoyer P.O. Non-negative matrix factorization with sparseness constraints[J]. J. Machine Learning Research, 2004, 5: 1457-1469.
    [170] Cichocki A., Phan A.H., Zdunek R., and Zhang L.Q. Flexible component analysis for sparse, smooth, nonnegative coding or representation[J]. Lecture notes in Computer Science, 2008, 4984: 811-820.
    [171] Eggert J. and Körner E. Sparse coding and NMF. in Proc. IEEE Int. Conf. Neural Netw., 2004: 2529-2533.
    [172] Robila S.A. and Maciak L.G. Considerations on Parallelizing Non-negative Matrix Factorization for Hyperspectral Data Unmixing[J]. IEEE Geoscience and Remote Sensing Letters, 2009, 6 (1): 57-61.
    [173] Barth N. The Gramian and K-Volume in N-Space: Some Classical Results in Linear Algebra[J]. Journal of Young Investigators, 1999, 2 (1).
    [174] Bertsekas D.P. Nonlinear Programming[M]. 2nd ed. Belmont, MA: Athena Scientific, 1999: 267-270.
    [175] Boutsidis C. and Gallopoulos E. SVD based initialization: A head start for nonnegative matrix factorization[J]. Pattern Recognition, 2008, 41 (4): 1350-1362.
    [176] Xue Y., Tong C.S., Chen Y., and Chen W.S. Clustering-based initialization for non-negative matrix factorization[J]. Applied Mathematics and Computation, 2008, 205 (2): 525-536.
    [177] Barber C.B., Dobkin D.P., and Huhdanpaa H.T. The Quickhull algorithm for convex hulls[J]. ACM Trans. on Mathematical Software, 1996, 22 (4): 469-483.
    [178] Seidel R. Convex Hull Computations[M]. in Handbook of Discrete and Computational Geometry, ch. 19. Boca Raton, FL: CRC, 1997: 361-375.
    [179] Cichocki A. and Amari S. Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications[M]. New York: Wiley, 2003.
    [180] Cichocki A. and Zdunek R. NMFLAB - MATLAB Toolbox for Non-Negative Matrix Factorization. Available at: http://www.bsp.brain.riken.jp/ICALAB/nmflab.html, 2006.
    [181] Hoyer P.O. nmfpack, version 1.1. Available at: http://www.cs.helsinki.fi/patrik.hoyer.
    [182] Bro R. PARAFAC. Tutorial and applications[J]. Chemometrics and Intelligent Laboratory Systems, 1997, 38 (2): 149-171.
    [183] Sidiropoulos N.D., Bro R., and Giannakis G.B. Parallel factor analysis in sensor array processing[J]. IEEE Transactions on Signal Processing, 2000, 48 (8): 2377-2388.
    [184] De Lathauwer L. A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization[J]. Siam Journal on Matrix Analysis and Applications, 2006, 28 (3): 642-666.
    [185] Sidiropoulos N.D. and Bro R. On the uniqueness of multilinear decomposition of N-way arrays[J]. Journal of Chemometrics, 2000, 14 (3): 229-239.
    [186] Stegeman A., ten Berge J.M.F., and De Lathauwer L. Sufficient conditions for uniqueness in Candecomp/Parafac and Indscal with random component matrices[J]. Psychometrika, 2006, 71 (2): 219-229.
    [187] Stegeman A. and Sidiropoulos N.D. On Kruskal's uniqueness condition for the Candecomp/Parafac decomposition[J]. Linear Algebra and Its Applications, 2007, 420 (2-3): 540-552.
    [188] Rajih M., Comon P., and Harshman R.A. Enhanced Line Search: A Novel Method to Accelerate Parafac[J]. Siam Journal on Matrix Analysis and Applications, 2008, 30 (3):1128-1147.
    [189] Ten Berge J.M.F., Stegeman A., and Dosse M.B. The Carroll and Chang conjecture of equal Indscal components when Candecomp/Parafac gives perfect fit[J]. Linear Algebra and Its Applications, 2009, 430 (2-3): 818-829.
    [190] Chen J. and Huo X.M. Theoretical results on sparse representations of multiple-measurement vectors[J]. IEEE Transactions on Signal Processing, 2006, 54 (12): 4634-4643.
    [191] Ye J.C., Tak S., Han Y., and Park H.W. Projection reconstruction MR imaging using FOCUSS[J]. Magnetic Resonance in Medicine, 2007, 57 (4): 764-775.
    [192] Lustig M., Donoho D.L., Santos J.M., and Pauly J.M. Compressed sensing MRI[J]. IEEE Signal Processing Magazine, 2008, 25 (2): 72-82.
