Improvements to Least Squares Support Vector Machines and Their Applications in Chemistry and Chemical Engineering
Abstract
The least squares support vector machine (LSSVM) is a kernel-based learning machine that follows the structural risk minimization (SRM) principle, and in recent years its applications in chemistry and chemical engineering have grown steadily. Taking several practical issues in applying LSSVM as its main thread, namely dimension reduction for high-dimensional data, hyperparameter selection, and sparseness, this thesis proposes several new algorithms and applies them to practical modeling problems such as structure-property relationships of chemical substances and chemical production processes, with notable results. The main content of the thesis can be summarized in the following six parts, which include the principal achievements of the research.
    1. The development history, current research status, and application areas of statistical learning theory and support vector machines are reviewed systematically; the principles of support vector machines and some open problems in their application are introduced.
    2. Since a support vector machine solving a nonlinear classification problem must first map the sample vectors from the original space into a high-dimensional reproducing kernel Hilbert space, the kernel trick is used to extend the linear classification correlative analysis algorithm to that space, yielding the nonlinear classification correlative analysis (NLCCA) algorithm. NLCCA is then integrated with a linear support vector classifier (LSVC) to obtain NLCCA-LSVC, which is applied to two typical complex chemical pattern recognition problems.
    3. For LSSVM function regression with small samples, building on the fast leave-one-out method and taking the leave-one-out predicted sum of squared errors (sse) over the full sample as the objective, the gradient of sse with respect to the hyperparameters is derived and used in a steepest descent search for optimal hyperparameters, giving the G-LSSVM model. The model is then applied to a small-sample, nonlinear citric acid fermentation process modeling problem.
    4. Because the accuracy of empirical models such as neural networks and LSSVM rests entirely on measured data, such models cannot incorporate prior knowledge of the actual process, so their predictions sometimes contradict the process mechanism. To resolve this for the problem of computing the vapor-phase composition of binary isothermal (or isobaric) vapor-liquid equilibrium systems, and building on the work of Hu Ying et al., the Gibbs-Duhem equation is combined with multilayer feedforward neural networks and with LSSVM to build hybrid models that embed this prior knowledge, so that the computed results are constrained by the Gibbs-Duhem equation. The hybrid models are applied to calculations on two real binary vapor-liquid equilibrium systems.
    5. Because the loss function used to compute the empirical risk is quadratic, LSSVM loses the sparseness of the standard support vector machine, which lowers its efficiency when used for classification after training. To make LSSVM sparse, this thesis starts from a statistical viewpoint, selects the training samples that contribute most to classification as support vectors, and transfers the classification information carried by the non-support vectors onto the support vectors, yielding new LSSVM sparsification algorithms. The two new sparsification algorithms are applied to several real classification problems; moreover, they can be applied directly to multi-class problems.
    6. Using the singular value decomposition of the kernel matrix, a classifier that saves hyperparameter selection time is obtained: SVD-LSSVM. It uses the singular value contribution rate to balance the empirical risk against the model complexity of LSSVM, realizing the SRM principle by a new route.
    Finally, the thesis analyzes the limitations of this work and looks ahead to future developments.
Least squares support vector machine (LSSVM) is a kernel learning machine that obeys the structural risk minimization (SRM) principle during training, and it has recently been widely used in chemistry and chemical process modeling. In this thesis, several new algorithms were proposed to address dimension reduction for high-dimensional data, selection of optimal hyperparameters, and the lack of sparseness in LSSVM modeling. These new algorithms were applied to complex chemical pattern classification, process modeling with small samples, and vapor-liquid equilibrium problems; the results show that they overcome some deficiencies of the standard LSSVM. The main work is as follows:
    1. The history, progress, and applications of statistical learning theory and the support vector machine (SVM) were reviewed first. Subsequently, SVM algorithms were explained and some deficiencies of LSSVM were pointed out.
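
For later reference, the LSSVM classifier in question is the standard Suykens-Vandewalle formulation, restated here from the general literature rather than from the thesis text:

$$
\min_{w,b,e}\ \frac{1}{2}w^{\top}w+\frac{\gamma}{2}\sum_{i=1}^{N}e_i^{2}
\quad\text{s.t.}\quad y_i\left[w^{\top}\varphi(x_i)+b\right]=1-e_i,\quad i=1,\dots,N,
$$

whose optimality conditions reduce to the linear system

$$
\begin{bmatrix}0 & y^{\top}\\ y & \Omega+\gamma^{-1}I\end{bmatrix}
\begin{bmatrix}b\\ \alpha\end{bmatrix}=
\begin{bmatrix}0\\ 1_N\end{bmatrix},
\qquad
\Omega_{ij}=y_i y_j K(x_i,x_j).
$$

Because every equality constraint is active at the solution, essentially all of the coefficients alpha_i are nonzero; this is the loss of sparseness addressed in items 5 and 6 below.
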
    2. Since sample vectors must be mapped from the original space into a high-dimensional reproducing kernel Hilbert space (RKHS) when an SVM solves a nonlinear pattern classification problem, the linear classification correlative analysis (CCA) algorithm was extended to the RKHS via the kernel trick; the classification correlative components (CCC) extracted in the RKHS are nonlinear combinations of the sample vector elements in the original space. Performing CCA in the RKHS, i.e. nonlinear CCA (NLCCA), removes collinearity and redundant information from the sample, so the sample distribution in the RKHS becomes easier to classify. Finally, the NLCCA algorithm was integrated with a linear support vector classifier, giving NLCCA-LSVC, which was applied to two complex chemical pattern classification problems.
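
The abstract does not spell out the NLCCA equations. As a rough illustration of the kernel trick it relies on, the sketch below extends a linear eigenvector-based feature extractor to the RKHS by eigendecomposing the centered kernel matrix, in the manner of kernel PCA; the actual NLCCA criterion is class-supervised and differs in detail. The RBF kernel, its width `sigma`, and `n_comp` are illustrative choices, not taken from the thesis.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian kernel: K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rkhs_components(X, n_comp=2, sigma=1.0):
    """Project training samples onto leading nonlinear components of the
    RKHS using kernel evaluations only (the kernel trick); the feature
    map phi(x) itself is never formed."""
    n = X.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n              # feature-space centering
    Kc = J @ rbf_kernel(X, X, sigma) @ J             # centered kernel matrix
    w, V = np.linalg.eigh(Kc)                        # eigenvalues, ascending
    top = np.argsort(w)[::-1][:n_comp]
    A = V[:, top] / np.sqrt(np.clip(w[top], 1e-12, None))  # dual coefficients
    return Kc @ A                                    # scores, shape (n, n_comp)
```

A linear classifier (the LSVC of the thesis) would then be trained on these low-dimensional component scores.
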
    3. The G-LSSVM algorithm was proposed on the basis of the fast leave-one-out (LOO) method. With the LOO sum of squared prediction errors (sse) as the objective to be minimized, the gradient of sse with respect to the hyperparameters was derived, and steepest descent was then used to find optimal hyperparameters for LSSVM modeling with small samples. G-LSSVM was applied to model a citric acid fermentation process.
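
A minimal sketch of the G-LSSVM idea, under two stated substitutions: the closed-form LOO residual identity commonly used for LS-SVMs stands in for the thesis's fast LOO derivation, and finite differences stand in for its analytic gradient of sse. The RBF kernel and step settings are illustrative.

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def loo_sse(X, y, gamma, sigma):
    """LOO sum of squared prediction errors for LSSVM regression without
    n refits: r_i = alpha_i / (A^{-1})_{ii}, with A the (n+1)x(n+1)
    system matrix [[0, 1^T], [1, K + I/gamma]]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    Ainv = np.linalg.inv(A)
    alpha = (Ainv @ np.r_[0.0, y])[1:]
    r = alpha / np.diag(Ainv)[1:]                    # fast LOO residuals
    return float(r @ r)

def g_lssvm(X, y, steps=50, lr=0.1, h=1e-4):
    """Steepest descent on (log gamma, log sigma) with a numerical gradient
    of the LOO sse (the thesis uses the analytic gradient instead)."""
    lg = ls = 0.0
    f = lambda a, c: loo_sse(X, y, np.exp(a), np.exp(c))
    for _ in range(steps):
        g1 = (f(lg + h, ls) - f(lg - h, ls)) / (2 * h)
        g2 = (f(lg, ls + h) - f(lg, ls - h)) / (2 * h)
        norm = max(np.hypot(g1, g2), 1e-12)          # unit step for stability
        lg, ls = lg - lr * g1 / norm, ls - lr * g2 / norm
    return np.exp(lg), np.exp(ls)                    # (gamma, sigma)
```
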
    4. Black-box models such as ANNs and LSSVM describe a process relying only on experimental data, without using any prior knowledge, so their predictions may sometimes be inconsistent with the process mechanism. To solve this problem when calculating the vapor composition of binary vapor-liquid equilibria at constant temperature (or pressure), the Gibbs-Duhem equation was integrated with an ANN and with LSSVM to form hybrid models, GD-MFNN and GD-LSSVM, whose outputs are constrained by the Gibbs-Duhem equation. GD-MFNN and GD-LSSVM were applied to several thermodynamic examples.
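
The prior knowledge embedded in GD-MFNN and GD-LSSVM is the Gibbs-Duhem relation, which for a binary system at constant temperature and pressure (neglecting the small correction term when only one of the two is actually held fixed) reads

$$
x_1\,\mathrm{d}\ln\gamma_1 + x_2\,\mathrm{d}\ln\gamma_2 = 0 .
$$

Constraining the model output with this relation forces the predicted activity coefficients, and hence the computed vapor compositions, to be thermodynamically consistent instead of being fitted independently from data.
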
    5. Since the empirical risk is computed via a quadratic loss function, LSSVM loses the sparseness of the standard SVM, which reduces its efficiency at classification time. To sparsify LSSVM, a statistical method was used to select the most important examples of the training sample as support vectors (SVs), and the classification information of the non-SV examples was transferred to the SVs; on this basis, new sparsification algorithms were proposed and applied to several real-life pattern classification problems.
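
The statistical selection rule itself is not given in the abstract. The sketch below shows the baseline pruning idea from the sparse LS-SVM literature that such algorithms refine: under the quadratic loss every alpha_i is nonzero, so keep only the samples with the largest |alpha_i| as SVs and retrain on them (the retraining step is a crude stand-in for the thesis's transfer of information from non-SVs to SVs). The fraction `keep` is an illustrative parameter.

```python
import numpy as np

def train_lssvm(K, y, gamma):
    # Solve the LSSVM linear system [[0, 1^T], [1, K + I/gamma]][b; alpha] = [0; y].
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]                           # bias b, coefficients alpha

def sparsify_lssvm(K, y, gamma, keep=0.3):
    """Keep the fraction `keep` of training samples with largest |alpha|
    as support vectors and retrain the LSSVM on that subset only."""
    _, alpha = train_lssvm(K, y, gamma)
    m = max(1, int(keep * len(y)))
    sv = np.argsort(-np.abs(alpha))[:m]              # indices of retained SVs
    b_s, a_s = train_lssvm(K[np.ix_(sv, sv)], y[sv], gamma)
    return sv, b_s, a_s
# classify a new x via sign(sum_j a_s[j] * K(x, x_sv[j]) + b_s)
```
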
    6. Based on the singular value decomposition of the kernel matrix, SVD-LSSVM was proposed, which saves time in hyperparameter selection via cross validation. SVD-LSSVM balances the empirical risk and the model complexity through the singular value contribution rate, thus implementing the SRM principle in a new way. Several UCI benchmark data sets and the olive oil classification problem were used to test SVD-LSSVM.
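
The abstract does not detail SVD-LSSVM beyond the contribution rate, so the sketch below is one plausible reading, not the thesis's algorithm: truncate the SVD of the kernel matrix at a cumulative singular value contribution threshold and solve the least squares problem in the retained subspace, so that the truncation rank, rather than the regularization constant gamma, controls model complexity. The threshold 0.99 and the omission of a bias term are assumptions.

```python
import numpy as np

def svd_lssvm(K, y, contrib=0.99):
    """Rank-truncated least squares on the kernel matrix: keep the leading
    singular values whose cumulative contribution rate reaches `contrib`
    and invert only within that subspace (a truncated pseudoinverse)."""
    U, s, Vt = np.linalg.svd(K)                      # K symmetric PSD
    r = int(np.searchsorted(np.cumsum(s) / s.sum(), contrib)) + 1
    alpha = Vt[:r].T @ ((U[:, :r].T @ y) / s[:r])
    return alpha, r
# predict new points via f = K_new @ alpha, K_new[i, j] = K(x_new_i, x_train_j)
```

Because no kernel system involving 1/gamma has to be re-solved for each candidate hyperparameter, a cross-validation sweep over the single truncation level is cheap, which is consistent with the time saving described above.
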
