Support Vector Regression and Its Application to Parameter Estimation in Intelligent Aeroengines
Abstract
Intelligent engine control is the advanced control concept most worth developing for aeroengines, and it covers a rich set of topics. This dissertation therefore focuses on two of them: thrust estimator design and analytical redundancy. The thrust estimator plays an important role in both the direct thrust control and the performance-deterioration mitigation control of intelligent engine control, while the high-reliability control of intelligent engine control is aimed at sensor faults, and developing advanced analytical redundancy is an effective way to guarantee reliable sensor operation. In designing the thrust estimator and developing the analytical redundancy technique, the author draws on support vector regression (SVR), a machine learning technique with a statistical foundation and excellent generalization ability, proposes a number of valuable algorithms and viewpoints that remedy shortcomings of the existing methods, and, more importantly, applies the proposed algorithms to thrust estimator design and analytical redundancy with satisfactory results. The main contributions of the dissertation are as follows:
First, to overcome the inability of the classical SVR to suppress outliers in the system, the truncated ε-insensitive loss function is proposed, and on this basis the truncated support vector regression machine (TSVR) is derived. TSVR not only suppresses outliers and thereby improves generalization ability, but also reduces the number of support vectors and improves real-time performance. Because TSVR involves a non-convex optimization problem, the author applies the concave-convex procedure (CCCP) to transform it into a sequence of convex problems, which makes the non-convex problem tractable. TSVR is realized from two perspectives, the dual space and the primal space; although the perspectives differ, they achieve essentially the same results.
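As a concrete illustration of the construction (the precise constants in the dissertation may differ), the truncated ε-insensitive loss clips the classical ε-insensitive loss at a level s > 0 and thereby decomposes into a difference of two convex functions, which is exactly the structure CCCP exploits:

    L_\varepsilon(r) = \max(0, \lvert r \rvert - \varepsilon), \qquad
    L_{\varepsilon,s}(r) = \min\bigl(L_\varepsilon(r),\, s\bigr)
                         = L_\varepsilon(r) - \max\bigl(0,\, L_\varepsilon(r) - s\bigr).

Each CCCP iteration linearizes the concave term -\max(0, L_\varepsilon(r) - s) at the current solution and solves the resulting convex SVR subproblem, so the non-convex objective decreases monotonically over the sequence of convex problems.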
After analyzing why the hard support vector regression machine (HSVR) overfits, the author proposes training HSVR approximately with a greedy stagewise strategy, yielding the GS-HSVR algorithm. The "early stopping" produced by the greedy stagewise strategy prevents HSVR from overfitting and in effect amounts to a regularization strategy. Compared with the classical soft SVR, the proposed approximate training algorithm achieves comparable generalization ability while holding a clear advantage in both training time and number of support vectors.
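A minimal Python sketch of the greedy stagewise idea follows; the rbf helper, the residual-correlation selection rule, and the stage cap are illustrative assumptions rather than the exact GS-HSVR of the dissertation. Each stage adds one kernel basis and fits only its coefficient, and capping the number of stages is the early stopping that regularizes implicitly.

    import numpy as np

    def rbf(X, Z, gamma=1.0):
        # Gaussian kernel matrix between the rows of X and Z
        return np.exp(-gamma * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

    def gs_hsvr(X, y, gamma=1.0, max_stages=50):
        """Greedy stagewise fitting of a kernel expansion (illustrative)."""
        K = rbf(X, X, gamma)
        r = y.astype(float).copy()           # residual starts at the targets
        alpha = np.zeros(len(y))
        for _ in range(max_stages):          # the cap acts as early stopping
            scores = K.T @ r                 # correlation with the residual
            j = int(np.argmax(np.abs(scores)))
            step = scores[j] / (K[:, j] @ K[:, j])  # 1-D least-squares fit
            alpha[j] += step                 # stagewise: no back-fitting
            r -= step * K[:, j]
        sv = np.flatnonzero(alpha)           # indices of the support vectors
        return alpha[sv], sv

Predictions then take the kernel-expansion form rbf(X_test, X_train[sv], gamma) @ alpha, so fewer stages directly mean fewer support vectors and faster evaluation.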
Compared with the classical SVR, least squares support vector regression (LSSVR) reduces the training cost, but its solution lacks sparseness. To restore sparseness, the author first reviews the fast sparse approximation algorithm FSA-LSSVR and then proposes the LS2SVR algorithm, which outperforms FSA-LSSVR and several existing pruning algorithms in both training time and number of support vectors. Unlike FSA-LSSVR, LS2SVR accounts for the constraints that the whole training set imposes on the objective function, so it attains the same generalization ability as FSA-LSSVR with fewer support vectors; this property is proved in the dissertation. To push the sparseness of LSSVR further, the author combines the reduced technique with an iterative strategy and proposes the recursive reduced algorithm RR-LSSVR. Compared with FSA-LSSVR, RLSSVR (random LSSVR), and LS2SVR, RR-LSSVR achieves the best sparseness, though at the highest training cost.
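The recursive reduced idea can be sketched as follows (Python; the selection rule and ridge parameter are assumptions, not the dissertation's exact RR-LSSVR). The point is the one the abstract emphasizes: the residual is evaluated on the whole training set at every step, while only the coefficients of the small support set are kept.

    import numpy as np

    def rbf(X, Z, gamma=1.0):
        return np.exp(-gamma * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

    def rr_lssvr(X, y, gamma=1.0, lam=1e-2, n_sv=30):
        """Greedy reduced least-squares fit (illustrative)."""
        K = rbf(X, X, gamma)                       # n x n kernel matrix
        S, r, beta = [], y.astype(float).copy(), None
        while len(S) < n_sv:
            j = int(np.argmax(np.abs(r)))          # worst-fitted sample
            if j in S:                             # nothing new to add
                break
            S.append(j)
            Ks = K[:, S]                           # n x |S| reduced design
            A = Ks.T @ Ks + lam * np.eye(len(S))   # regularized normal equations
            beta = np.linalg.solve(A, Ks.T @ y)
            r = y - Ks @ beta                      # residual on ALL samples
        return np.array(S), beta

Growing the support set one point at a time and refitting explains the trade-off reported above: better sparseness than FSA-LSSVR or LS2SVR, at the cost of one reduced least-squares solve per added support vector.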
To improve learning on systems whose local behavior varies widely (steep in some regions, smooth in others), and to exploit both prior knowledge and multikernel learning, the author combines the semiparametric technique with multikernel learning and proposes two multikernel semiparametric regressors: the multikernel semiparametric linear programming SVR (MSLP-SVR) and the sparse multikernel semiparametric least squares SVR (sparse MSLSSVR). Both share the property that the classical single-kernel machines are special cases of them, which means the multikernel semiparametric machines can learn no worse than their classical single-kernel counterparts. Moreover, compared with other multikernel learning algorithms, the proposed ones are superior in generalization ability or training time.
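Schematically (with notation assumed here rather than taken from the dissertation), such a predictor combines M kernels with p known basis functions \psi_j that encode the prior knowledge:

    f(x) = \sum_{k=1}^{M} \sum_{i=1}^{n} \alpha_{k,i}\, K_k(x_i, x)
         + \sum_{j=1}^{p} \beta_j\, \psi_j(x) + b.

Setting M = 1 and dropping the \psi_j recovers the classical single-kernel machine, which is why the single-kernel regressors are special cases and the multikernel semiparametric machines can do no worse; mixing kernels of different widths is what lets one model fit steep and smooth regions simultaneously.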
The purpose of studying and proposing the algorithms above is to apply them, in two respects: RR-LSSVR is used to design the thrust estimator, and an analytical redundancy technique for online sensor fault diagnosis is built on GS-HSVR.
Designing a thrust estimator is a key step in realizing direct thrust control and performance-deterioration mitigation control. The author first selects the estimator's input variables with the leave-one-out method commonly used for model selection, and then designs a full-flight-envelope thrust estimator with RR-LSSVR. To obtain high accuracy and real-time performance over the whole envelope, the envelope is first partitioned by altitude; a more reasonable partition is then proposed, namely clustering the operating points of the whole envelope. Points within one cluster are similar and their thrusts do not differ greatly, which avoids the situation where nearly equal absolute thrust errors produce very different relative errors because the thrust levels themselves differ widely. The performance deterioration that an aeroengine suffers in service is handled by adding deterioration samples to the training set. Finally, to estimate thrust in real time while the engine is running, the author modifies RR-LSSVR and feeds the estimated thrust back to the input to mimic the engine's dynamics, yielding a dynamic thrust estimator.
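For LSSVR-type smoothers the leave-one-out error has a closed form, which makes the input selection cheap; a sketch of greedy forward selection on that basis follows (Python; the candidate set, kernel, and ridge parameter are assumptions):

    import numpy as np

    def rbf(X, Z, gamma=1.0):
        return np.exp(-gamma * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

    def loo_mse(Xc, y, gamma=1.0, lam=1e-2):
        # Closed-form LOO of a kernel ridge smoother:
        # loo residual_i = e_i / (1 - H_ii) with H = K (K + lam I)^-1.
        K = rbf(Xc, Xc, gamma)
        H = K @ np.linalg.inv(K + lam * np.eye(len(y)))
        e = y - H @ y
        return float(np.mean((e / (1.0 - np.diag(H))) ** 2))

    def select_inputs(X, y, n_inputs, gamma=1.0, lam=1e-2):
        """Greedy forward selection of estimator inputs by LOO error."""
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < n_inputs:
            err, best = min((loo_mse(X[:, selected + [c]], y, gamma, lam), c)
                            for c in remaining)
            selected.append(best)
            remaining.remove(best)
        return selected

The dynamic estimator at the end of the paragraph amounts to a feedback structure of roughly the form \hat{F}(t) = g(u(t), \hat{F}(t-1)), where u(t) collects the selected measured inputs: feeding the previous thrust estimate back as an extra input is what lets a static regressor reproduce the engine's dynamics.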
Intelligent engine control also includes high-reliability control, whose goal is to guarantee that the signals supplied to the controller are correct and reliable. To this end, the offline GS-HSVR algorithm is suitably modified into FOAHSVR (fast online approximation for hard support vector regression), which attains generalization ability comparable to GS-HSVR and, more importantly, learns online. Exploiting this online learning ability, the author proposes a scheme for online fault diagnosis of aeroengine sensors that detects, isolates, and adaptively reconstructs bias faults of single or multiple sensors, thereby forming an analytical redundancy technique. To handle sensor drift faults, a correction strategy is further proposed; experiments show that it detects and adaptively reconstructs drift faults of aeroengine sensors effectively.
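One sampling step of such an analytical redundancy scheme might look as follows (Python sketch; the per-sensor estimators, their sklearn-style predict interface, and the fixed thresholds are assumptions, with FOAHSVR playing the estimator role in the dissertation):

    import numpy as np

    def fdi_step(readings, estimators, thresholds):
        """Detect, isolate, and accommodate sensor faults at one instant."""
        readings = np.asarray(readings, dtype=float)
        out, faulty = readings.copy(), []
        for i, est in enumerate(estimators):
            # estimators[i] predicts sensor i from all the other sensors
            others = np.delete(readings, i).reshape(1, -1)
            pred = float(est.predict(others)[0])     # analytical estimate
            if abs(readings[i] - pred) > thresholds[i]:
                faulty.append(i)                     # detection + isolation
                out[i] = pred                        # adaptive reconstruction
        # For clarity, cross-contamination between simultaneous faults
        # (a faulty reading feeding another sensor's estimator) is ignored.
        return out, faulty

Drift faults change slowly, so a fixed threshold on the instantaneous residual may miss them; the correction strategy mentioned above addresses exactly that case.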
