Research and Application of Component Extraction Techniques for Statistical Process Control
Abstract
Statistical process control (SPC) monitors the stability of production processes by means of statistical component extraction techniques; it is an important part of advanced manufacturing systems and a key tool of advanced quality control. Component extraction techniques are a class of statistical analysis methods that study the internal statistical regularities of multivariate data and reveal its intrinsic low-dimensional structure, and they are a key supporting technology of SPC. Taking SPC as the application background, this dissertation investigates principal and independent component extraction techniques whose performance criteria are based on the second-order statistic (variance) and on higher-order statistics, and applies them to process monitoring, fault diagnosis, and dimensionality reduction in chemical processes and semiconductor packaging processes.
    The dissertation first shows that, under a Gaussian distribution, the principal component model built by classical principal component analysis (MSE-PCA) has both minimum mean squared error and minimum residual entropy. Entropy is a more general measure of system uncertainty than variance, and the maximum entropy principle requires the principal component model to have minimum residual entropy; for non-Gaussian data, however, the model built by MSE-PCA does not. Based on the maximum entropy principle, the dissertation proposes a modified principal component extraction method (MEE-PCA) whose model attains minimum residual entropy. MEE-PCA first uses MSE-PCA to determine a baseline principal component model, and then uses a genetic algorithm to optimize the retained principal eigenvectors so that the residual entropy of the model is minimized. The multivariable quadruple-tank process is used as an example to illustrate the application of MEE-PCA to statistical process monitoring and fault diagnosis and to verify its advantages over MSE-PCA.
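The two-stage MEE-PCA idea described above (a variance-based PCA baseline, then entropy-driven selection of the retained eigenvectors) can be sketched as follows. This is a minimal illustration, not the dissertation's algorithm: the genetic algorithm is replaced by an exhaustive search over eigenvector subsets, and the residual entropy is scored with an unnormalized Parzen-window estimate of Renyi's quadratic entropy.

```python
import numpy as np
from itertools import combinations

def renyi_quadratic_entropy(x, sigma=0.5):
    # Parzen-window estimate of Renyi's quadratic entropy of a 1-D sample,
    # H2(x) = -log E[p(x)], with a Gaussian kernel (up to an additive constant).
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (4 * sigma ** 2))
    return -np.log(k.mean())

def mee_pca(X, n_pc):
    # Step 1: classical (MSE) PCA on mean-centered data.
    Xc = X - X.mean(axis=0)
    _, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    V = V[:, ::-1]  # eigenvectors in order of descending eigenvalue
    # Step 2: pick the eigenvector subset whose PC model leaves minimum
    # residual entropy (exhaustive search stands in for the GA here).
    best_P, best_h = None, np.inf
    for idx in combinations(range(V.shape[1]), n_pc):
        P = V[:, list(idx)]
        resid = Xc - Xc @ P @ P.T          # residual of the PC model
        h = sum(renyi_quadratic_entropy(resid[:, j])
                for j in range(resid.shape[1]))
        if h < best_h:
            best_P, best_h = P, h
    return best_P, best_h
```

For Gaussian data the search typically returns the leading eigenvectors, matching the MSE-PCA baseline; the entropy criterion only changes the selection for non-Gaussian residual structure.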
    Based on stochastic approximation theory and the Hebbian learning rule, the dissertation then analyzes neural-network implementations of principal component extraction and discusses nonlinear principal component neural network algorithms, which have stronger nonlinear dimensionality-reduction capability. Combining the autoassociative linear principal component neural network (MSE-PCNN) with the idea of nonlinear principal component extraction, it proposes an autoassociative nonlinear principal component neural network with minimum residual entropy as the criterion (MEE-PCNN), and gives an approximate method for computing differential entropy based on Parzen-window density estimation. Using the information maximization (Infomax) principle, the equivalence of MSE-PCNN and MEE-PCNN in the Gaussian case is proved. The quadruple-tank process is again used to compare the dimensionality-reduction capability of classical PCA and the nonlinear principal component neural network, and simulations with non-Gaussian data verify that MEE-PCNN can effectively perform dimensionality reduction and blind source extraction on non-Gaussian data.
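The Parzen-window differential entropy approximation mentioned above can be sketched as a leave-one-out kernel density estimate; the kernel width `sigma` below is an illustrative choice, not a value from the dissertation.

```python
import numpy as np

def parzen_entropy(x, sigma=0.3):
    # Leave-one-out Parzen-window estimate of the differential entropy
    # H(x) = -E[log p(x)] of a 1-D sample, with a Gaussian kernel.
    n = len(x)
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)        # leave each point out of its own estimate
    p = k.sum(axis=1) / (n - 1)     # estimated density at each sample
    return -np.mean(np.log(p))
```

The estimate is differentiable in the sample values, which is what makes it usable inside a gradient-descent learning rule such as the one the MEE-PCNN employs.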
    For the problems of compressing and blindly extracting sources from mixtures of independent non-Gaussian signals, the dissertation summarizes several independent component analysis (ICA) algorithms based on maximum non-Gaussianity or information-entropy criteria, and demonstrates the equivalence of the maximum likelihood, maximum negentropy, and minimum mutual information ICA algorithms. Combining the dimensionality-reduction idea of nonlinear principal component networks with the information maximization (Infomax) principle, it proposes a principal independent component neural network (PICNN) algorithm that maximizes Renyi entropy and simultaneously compresses non-Gaussian mixed data and extracts independent components. The Tennessee Eastman process is used to verify the advantages of ICA algorithms in process fault detection and diagnosis, and simulations with non-Gaussian data show the effectiveness of PICNN in signal dimensionality reduction and blind signal reconstruction.
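As an illustration of the maximum non-Gaussianity principle mentioned above, a minimal one-unit fixed-point ICA iteration (in the style of FastICA, with whitening as preprocessing) might look like this. It is only a sketch of the underlying principle, not the PICNN algorithm proposed in the dissertation.

```python
import numpy as np

def whiten(X):
    # Standard ICA preprocessing: zero mean, identity covariance.
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return (Xc @ V) / np.sqrt(w)

def fastica_one_unit(Z, n_iter=200, seed=0):
    # One-unit fixed-point iteration maximizing the non-Gaussianity of
    # Z @ w, with g(u) = tanh(u) as the contrast nonlinearity.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        u = Z @ w
        g, dg = np.tanh(u), 1.0 - np.tanh(u) ** 2
        w_new = (Z * g[:, None]).mean(axis=0) - dg.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-10:   # converged up to sign
            return w_new
        w = w_new
    return w
```

On a mixture of a uniform and a Laplacian source, the recovered component `Z @ w` correlates almost perfectly with one of the original sources, which variance-based PCA alone cannot achieve.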
    Statistical component extraction techniques are commonly used in data-driven fault diagnosis methods based on knowledge or signals, but are difficult to apply in model-based analytical fault diagnosis. The dissertation proposes an algorithm for the dimensionality reduction of high-dimensional stochastic dynamic systems and for observer-based fault diagnosis. The algorithm first uses component extraction to approximate the high-dimensional analytical model with a reduced-order one, and then designs a state observer in which a suitable adaptive tuning rule guarantees that the selected Lyapunov function decreases monotonically; the resulting residual signal is used for fault diagnosis. Finally, the advantages of component-extraction-based SPC are demonstrated through a case study on the dispensing process in integrated circuit encapsulation.
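The observer-based residual idea can be sketched with a plain discrete-time Luenberger observer on a hypothetical two-state plant; the matrices below are illustrative (the gain `L` simply makes `A - L C` stable), and the adaptive tuning rule and Lyapunov analysis of the dissertation are not reproduced.

```python
import numpy as np

# Hypothetical two-state discrete-time plant; A, B, C and the observer
# gain L are illustrative values, not taken from a reduced model.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.2]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])

def run_observer(n_steps=100, fault_at=60, fault_size=1.0):
    # Simulate the plant with an additive actuator fault injected at step
    # `fault_at`, and a Luenberger observer producing the output residual
    # r = y - C x_hat that is monitored for fault detection.
    x = np.zeros((2, 1))
    x_hat = np.zeros((2, 1))
    residuals = []
    for k in range(n_steps):
        u = np.array([[1.0]])
        f = fault_size if k >= fault_at else 0.0
        r = (C @ x - C @ x_hat).item()          # output residual
        residuals.append(abs(r))
        x = A @ x + B @ (u + f)                 # faulty plant
        x_hat = A @ x_hat + B @ u + L * r       # observer correction
    return residuals
```

Before the fault the residual stays at zero; once the fault enters the state update, the residual jumps and can be compared against a detection threshold.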
With the application of computer integrated manufacturing systems (CIMS) and increasingly rigorous demands on product quality, statistical process control (SPC) is playing a significant role in industrial processes. Component extraction (CX), a key supporting technology of SPC, is a statistical computational technique for revealing multivariable statistical characteristics and extracting the hidden components that underlie the observations of a set of variables and signals. The main goal of this dissertation is to extract principal components (PCs) and independent components (ICs) from observed mixture data with optimal cost performance based on second- and higher-order statistics. Applications of these component extraction techniques in SPC, such as process monitoring and fault diagnosis, signal processing, and dimensionality reduction, are also illustrated.
    Firstly, the optimal performance of principal component analysis (PCA) is demonstrated according to the principles of minimum error estimation and maximum entropy. For observations of a non-Gaussian stochastic process, however, the optimal PCA model should have minimum error entropy (MEE), so the conventional PCA approach needs to be refined for non-Gaussian distribution systems. In this study a modified PCA (MEE-PCA), which optimizes for MEE in the dimensionality reduction of non-Gaussian systems, is proposed, and the corresponding optimization method via a genetic algorithm (GA) is derived. A four-tank multivariable system is used to demonstrate the advantages of MEE-PCA in SPC, and promising results have been obtained.
    Neural networks (NNs) provide a feasible way to perform PCA online and in parallel. In this thesis, principal component neural networks (PCNNs) with minimum squared error (MSE) criteria for extracting linear and nonlinear principal components are expounded. It is shown that a linear PCNN with the MSE criterion extracts the subspace spanned by the principal eigenvectors, or the theoretical principal eigenvectors themselves; for a non-Gaussian distribution system, however, the MSE-based PCNN model does not necessarily retain maximum information about the original system. A generalized autoassociative PCNN with a minimum error entropy (MEE) criterion and its gradient descent learning algorithm are therefore proposed, together with a nonparametric entropy estimator based on Parzen windowing with a Gaussian kernel. According to the Infomax principle, the equivalence of PCNNs with MSE and MEE cost functions in the Gaussian case is analyzed. The advantages of nonlinear PCNNs in dimensionality reduction and the effectiveness of the proposed MEE-PCNN in extracting maximum-information components from observations are demonstrated through simulated examples.
    Next, consider a situation where the observations are mixtures of a number of independent non-Gaussian signals whose mixing channels are unknown, and the task is to recover the original independent sources from the mixtures. Linear PCNNs, which ignore higher-order structure, cannot separate such independent sources. The aim of independent component analysis (ICA) is to separate a mixture of signals in a blind manner and identify the unknown mixing channels using only the observed mixed data. Nonlinear decorrelation and maximum non-Gaussianity are two basic principles of ICA. In contrast to PCA, which is based on the covariance structure, ICA not only decorrelates the components but also reduces higher-order statistical dependencies, so as to make the extracted components as independent as possible. The strength of ICA is that only mutual statistical independence between the non-Gaussian source signals is assumed in the ICA model; no prior information about the characteristics of the source signals or the mixing matrix is required. The classical application of ICA is blind source separation (BSS), the problem of recovering signals from several observed mixtures. In this study the techniques and algorithms of ICA are described from the perspective of information theory. Subsequently, a principal independent component neural network (PICNN) based on maximization of second-order Renyi entropy is proposed, together with an approximation method for computing the Renyi entropy criterion and the corresponding gradient learning algorithm. The motivation for using Renyi's entropy is the existence of a computationally simple estimator for Renyi's quadratic entropy, as well as the fact that Shannon's entropy is a special case of Renyi's entropy. For normally distributed data, maximizing the variance of the transformed data also maximizes its entropy, i.e. its average information content. Simulation examples show the effectiveness of the proposed approach for dimensionality reduction and its advantages in blind source separation over general principal component analysis.
    Based on the above ideas of dimensionality reduction and component extraction, a nonlinear PCNN model with an instantaneous stochastic gradient descent learning algorithm is derived for the dimensionality reduction of a high-dimensional dynamic control system. A fault diagnosis method via an adaptive observer for the dimensionality-reduced system is proposed using the linear residual signal, where an adaptive tuning rule is established to ensure the monotonic decrease of a selected Lyapunov function. The efficiency of the proposed approaches is illustrated through a simulation example.
    Finally, the advantages of SPC based on component extraction techniques are demonstrated through a case study on the dispensing process in integrated circuit encapsulation. A comparison of the performance of different methods shows that the MEE-based component extraction technique outperforms the MSE-based technique in fault diagnosis.
