Theory and Application of Machine-Learning-Based Soft Sensing Technology
Abstract
Relations exist universally in nature and are expressed in certain forms; this is the basic philosophical foundation of this research. To a mathematician, such relations are mappings, or functions. In the information age, these mappings are hidden within vast quantities of data, and the soft sensing technology (SST) studied in this dissertation aims to uncover the functional relations buried in data. In an era in which industrialization is driven by information technology (IT), instrumentation and measurement constitute an important part of information science and technology. As product-quality requirements rise, together with public awareness of safety and environmental protection, the demand for testing, measuring and analyzing instruments keeps increasing, and the SST is a base technology for such instruments. Furthermore, as an important complement to traditional instrumentation, the SST has broad application prospects in industrial measurement and control. Research on the SST is therefore significant for the development of instrumentation and measurement technology.
     Practical industrial processes usually exhibit complex nonlinear behavior and are corrupted by heavy noise, which limits the application of traditional SSTs based on mechanism analysis, multivariate linear regression (MLR) and artificial neural networks (ANN). To overcome these limitations, this research focuses on nonlinear soft sensor modeling methods, grounded in machine learning (ML) theory, that offer good generalization capability (GC) and strong robustness. Under the precondition of preserving the model's GC and robustness, modified algorithms are studied and developed to improve modeling efficiency. The main research work and contributions are as follows:
     (1) The distance definition of the traditional k-nearest neighbors (kNN) algorithm is modified by replacing the standard Euclidean distance with an attribute-weighted distance. On top of the modified kNN algorithm, a dataset editing algorithm is developed to filter out inconsistent samples. A fast kNN algorithm is then proposed for medium- and large-scale datasets; its running time depends only on the neighbor count k and the dataset dimension n, and it typically runs several to twenty times faster than the traditional algorithm. For local (lazy) learning in general, fast search for the k-nearest-neighbor subset is of broad significance.
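The abstract gives no code, so the following is a minimal sketch of the attribute-weighted distance idea only; the function names are hypothetical and the weights are illustrative, not the dissertation's actual weighting scheme.

```python
import math

def weighted_distance(a, b, weights):
    # Attribute-weighted Euclidean distance: each squared coordinate
    # difference is scaled by a per-attribute weight (weights of all 1s
    # recover the standard Euclidean distance).
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def knn_regress(query, samples, targets, weights, k=3):
    # Predict by averaging the targets of the k nearest samples under
    # the attribute-weighted distance (a lazy, local learner).
    order = sorted(range(len(samples)),
                   key=lambda i: weighted_distance(query, samples[i], weights))
    return sum(targets[i] for i in order[:k]) / k
```

The dissertation's editing algorithm would, in addition, discard samples whose target disagrees strongly with the consensus of their neighbors; that filtering step is not shown here.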
     (2) Soft sensor modeling based on multiple neural networks (MNN) is studied, aiming to improve model robustness and GC in industrial environments. An MNN model is proposed that uses clustering sub-datasets as validation datasets (rather than training datasets), and on this basis a two-layer MNN model is built. Comparative experiments on the pulp Kappa number dataset cover four models: a single ANN, an ensemble MNN, a modular MNN and the two-layer MNN. The results show that the two-layer MNN model outperforms the other three in both robustness and GC.
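The clustering-based validation split can be illustrated with a toy one-dimensional k-means; this is a hypothetical sketch of the data-partitioning idea only (the dissertation's actual clustering method and network training are not shown).

```python
def kmeans_1d(xs, k, iters=20):
    # Toy 1-D k-means: centers start spread over the data range,
    # then assignment and mean-update steps alternate.
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in xs]
        for j in range(k):
            members = [x for x, l in zip(xs, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def cluster_validation_splits(data, labels, k):
    # For member network i, cluster i is held out as the validation set
    # and the remaining samples form its training set -- clustering
    # sub-datasets serve as validation (not training) data.
    splits = []
    for i in range(k):
        val = [x for x, l in zip(data, labels) if l == i]
        train = [x for x, l in zip(data, labels) if l != i]
        splits.append((train, val))
    return splits
```

Each member network would then be trained with early stopping against its own held-out cluster, so that every region of the input space acts as a validation check for some member of the ensemble.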
     (3) The soft-margin SVM regression algorithm is applied to soft sensor modeling. Two versions of the ε-SVMR algorithm are given, together with two implementations: a general-purpose quadratic programming (QP) solver and the sequential minimal optimization (SMO) algorithm. Two experiments on the pulp Kappa dataset study, respectively, how the free parameters affect the prediction performance of the ε-SVMR model and how the two implementations compare in modeling efficiency. The main conclusion is that the SVMR method, especially with the SMO implementation, is well suited to soft sensor modeling of medium- and large-scale practical industrial processes.
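The core of ε-SVMR can be sketched with Vapnik's ε-insensitive loss and the standard SVR decision function; the names and parameter values below are illustrative, and no QP or SMO solver is shown.

```python
import math

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    # Vapnik's epsilon-insensitive loss: deviations inside the
    # epsilon-tube cost nothing; outside it the cost grows linearly.
    return max(0.0, abs(y_true - y_pred) - eps)

def rbf(u, v, gamma=1.0):
    # Gaussian (RBF) kernel between two feature vectors.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def svr_predict(x, support_vectors, dual_coefs, b, gamma=1.0):
    # SVR decision function f(x) = sum_i (alpha_i - alpha_i*) k(x_i, x) + b,
    # where dual_coefs[i] stands for (alpha_i - alpha_i*).
    return sum(c * rbf(sv, x, gamma)
               for sv, c in zip(support_vectors, dual_coefs)) + b
```

A QP solver or SMO would choose the dual coefficients so as to minimize C times the summed tube violations plus a regularization term; here they are simply taken as given.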
     (4) Two prediction modeling methods for process time series (TS) are studied: a sample-expanding method based on a temporal-difference-trained neural network (TDNN) and a feature-expanding method based on SVMR. The two methods train prediction models on the sample-expanding dataset and the feature-expanding dataset of the process TS, respectively. Experiments on TSs from the kraft pulping process show that multi-step prediction outperforms single-step prediction, especially for small TS sets, and that the feature-expanding SVMR method outperforms the sample-expanding TDNN method, especially for single-sequence input.
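The two dataset constructions can be sketched as follows. This is one plausible reading of "sample-expanding" and "feature-expanding"; the dissertation's exact definitions may differ, and the function names are hypothetical.

```python
def feature_expand(series, order):
    # Feature-expanding dataset: each input is a window of the previous
    # `order` values and the target is the next value, so the feature
    # dimension grows with the window length.
    X, y = [], []
    for t in range(order, len(series)):
        X.append(series[t - order:t])
        y.append(series[t])
    return X, y

def sample_expand(series, steps):
    # Sample-expanding dataset: every point is paired with each of its
    # next `steps` successors (horizons 1..steps), so the number of
    # training samples grows with the prediction horizon.
    pairs = []
    for t in range(len(series) - steps):
        for h in range(1, steps + 1):
            pairs.append((series[t], series[t + h]))
    return pairs
```

Under this reading, the feature-expanded form suits a kernel learner such as SVMR, while the sample-expanded form supplies the extra multi-horizon targets a temporal-difference-trained network exploits.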
     (5) Theoretical research on the process neural network (PNN) reveals the relationship between the process neuron and the traditional neuron. It is shown that the process neuron can be approximated arbitrarily closely by traditional neurons; two approximation theorems are presented with detailed proofs, along with two related corollaries. Experiments on a set of synthetically generated sine-wave-coded signals validate the prediction performance of the PNN. The main conclusion is that the PNN suppresses white noise well and thereby enhances model robustness; however, applying it requires choosing an orthogonal basis of functions for the series expansion.
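The approximation claim can be illustrated numerically: a process neuron aggregates a whole input function via an integral, and discretizing that integral yields an ordinary neuron whose weights are samples of the weight function. Everything below (the sigmoid activation, the particular w and x) is an illustrative assumption, not the dissertation's construction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def process_neuron(x, w, T, theta=0.0, n=1000):
    # Process neuron: f( integral_0^T w(t) x(t) dt - theta ),
    # with the integral approximated by a left Riemann sum.
    dt = T / n
    s = sum(w(i * dt) * x(i * dt) for i in range(n)) * dt
    return sigmoid(s - theta)

def traditional_neuron(inputs, weights, theta=0.0):
    # Ordinary neuron on n scalar inputs; with inputs x(t_i) and weights
    # w(t_i)*dt it reproduces the discretized process neuron, which is
    # the intuition behind the two approximation theorems.
    return sigmoid(sum(w_i * x_i for w_i, x_i in zip(weights, inputs)) - theta)
```

For w(t) = 1 and x(t) = t on [0, 1] the integral is 1/2, so both constructions approach sigmoid(0.5) as n grows.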
     (6) By introducing definitions of the inner product and norm of signals (functions), a novel process-input learning algorithm, the process SVM (PSVM), is proposed. Experiments compare the PSVM with the PNN on the same signal set as above. The PSVM avoids the choice of an orthogonal function basis and is therefore more convenient to apply than the PNN. When the noise amplitude is small, the PSVM model outperforms the PNN model; when the noise amplitude grows, the PSVM performs slightly worse than the PNN, but its prediction performance can be improved by representing the signal samples with finitely many basis functions, as in the PNN's band-limiting expansion. The work on PNN and PSVM learning provides a necessary theoretical basis for process-input soft sensor modeling.
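The signal inner product and norm underlying the PSVM can be discretized directly; the RBF-style kernel between whole signals below is a hypothetical illustration of how an SVM can then take process (function) inputs, not the dissertation's exact kernel.

```python
import math

def signal_inner(x, y, T, n=1000):
    # <x, y> = integral_0^T x(t) y(t) dt, via a left Riemann sum.
    dt = T / n
    return sum(x(i * dt) * y(i * dt) for i in range(n)) * dt

def signal_norm(x, T, n=1000):
    # ||x|| = sqrt(<x, x>), the induced norm.
    return math.sqrt(signal_inner(x, x, T, n))

def signal_rbf(x, y, T, gamma=1.0, n=1000):
    # RBF-style kernel on signals: exp(-gamma * ||x - y||^2).
    # Plugging such a kernel into SVR lets whole signals be inputs
    # without first choosing an orthogonal basis.
    diff = lambda t: x(t) - y(t)
    return math.exp(-gamma * signal_inner(diff, diff, T, n))
```

This is the sense in which the PSVM sidesteps basis selection: distances between signals are computed from the inner product itself, rather than from coefficients in a chosen orthogonal expansion.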
     The creative achievements of this research include: (1) a dataset editing method based on the modified kNN algorithm for filtering inconsistent samples from a dataset; (2) a fast kNN algorithm of general significance for the study of local (lazy) learning methods; (3) an MNN model that uses clustering sub-datasets as validation datasets, for building models with good generalization ability; (4) two approximation theorems for the process neuron, with proofs, revealing the inner relationship between the process neuron and the traditional neuron; (5) a novel process-input learning algorithm, the PSVM.
     In conclusion, this research focuses on the theory and application of machine-learning-based SST, and the in-depth work has produced useful results. The soft sensing approaches proposed in the dissertation both enrich soft sensing theory and promote the industrial application of the SST. The last two chapters emphasize theoretical research; their results should advance soft sensing theory and, to some extent, ML theory as well. Given the author's limited knowledge and experience, mistakes and shortcomings are hard to avoid; comments and corrections from experts and readers are welcome.
     “In the past, we mined gold with electromechanical machines; In the Knowledge Economy age, we are to mine gold with learning machines. Our gold mines are the databases stored in factories and enterprises.”
