Design and Application of the Unscented Kalman Filter Based on Support Vector Regression
Abstract
Science and technology are advancing rapidly, and the engineering systems people build are increasingly complex. To guarantee precision in engineering design and implementation, filtering techniques are used extensively, and their range of application keeps widening: target tracking, satellite navigation, spatial positioning, image processing, engineering control, and many other fields. To cope with complex data environments, filtering is combined with many new theories and methods, which extends its range of application, optimizes filter structure, and improves filtering performance. Filtering has developed from early techniques restricted to linear settings, such as the Wiener filter and the Kalman filter, to techniques applicable in complex nonlinear settings, such as the extended Kalman filter, the particle filter, the unscented Kalman filter, and divided-difference filters. These newer filters play an important role in all kinds of engineering applications.
     The Kalman filter is a time-domain state-space method derived from optimal estimation theory for linear systems. It consists of a set of recursive filtering equations; because the algorithm requires little computation and storage, has a simple structure, and is easy to implement, it is well suited to computer implementation. The Kalman filter uses the Bayesian recursion to compute the conditional probability density of the state and extracts detailed state estimates from that density, an approach that applies only to linear systems. In practical engineering, however, many systems are nonlinear. To solve the filtering problem for nonlinear systems, extensive and thorough research has been carried out. Nonlinear models are usually handled by approximate estimation, and the estimation accuracy depends on how closely the approximation matches the true data. Among the many nonlinear filtering techniques, the most common are the extended Kalman filter (EKF), the particle filter (PF), and the unscented Kalman filter (UKF). The UKF performs comparatively well. Unlike the EKF, it neither truncates higher-order terms nor computes Jacobian matrices, so its accuracy and practicality are clearly better; compared with the PF, its deterministic sampling strategy is simpler than the PF's random sampling, avoids particle degeneracy, and is more stable.
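As an illustration of the recursive structure described above, here is a minimal linear Kalman filter predict/update cycle in NumPy. This is a generic sketch, not the thesis's implementation; the matrices F, H, Q, R are placeholder names for the transition, observation, process-noise, and measurement-noise models.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict: propagate the state estimate and covariance through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Called in a loop over incoming measurements, the estimate converges toward the true state while the covariance P shrinks, which is exactly the low-cost recursion that makes the filter attractive for computer implementation.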
     Because of its excellent performance, the UKF is applied in many engineering fields to solve nonlinear filtering problems, but several issues that arise in these applications deserve study. To obtain better generalization and robustness, most filtering models expose parameters that tune the model to the demands of different environments. The UKF generates a set of weighted sigma points by deterministic sampling and takes as its estimate the weighted statistics of those points after they are propagated through the system equations. Computing the sigma points requires a scaling factor that controls the spread and weights of the sample points, and this parameter strongly affects filtering accuracy. Mathematical derivation and experiments show that an ill-chosen scaling factor makes the sigma-point distribution deviate from the true value, degrades filtering accuracy, and in extreme cases makes the filter diverge. The scaling factor is therefore an important parameter that must be set carefully. The conventional method chooses it solely from the system dimension, so the factor is a fixed value, and the practical results are unsatisfactory. The later scaled unscented transform eliminates non-local sampling effects, guarantees positive-definiteness of the covariance matrix, and makes parameter selection more flexible; but it does not solve the UKF's random-divergence problem, and it introduces two new parameters that make the system harder to tune.
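The sigma-point construction and the role of the scaling factor can be sketched as follows. This uses the standard symmetric unscented transform with a single scaling factor kappa (the basic form, without the extra scaled-transform parameters the thesis discusses); for an n-dimensional state it builds 2n+1 points whose spread from the mean grows with n + kappa.

```python
import numpy as np

def sigma_points(x, P, kappa):
    """Generate the 2n+1 weighted sigma points of the unscented transform.

    kappa is the scaling factor: it widens/narrows the point spread and
    sets the weight of the central point."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)  # matrix square root of (n+kappa)P
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_mean_cov(f, x, P, kappa):
    """Propagate (x, P) through a nonlinear function f and recover the
    weighted mean and covariance of the transformed sigma points."""
    pts, w = sigma_points(x, P, kappa)
    ys = np.array([f(p) for p in pts])
    mean = w @ ys
    cov = sum(wi * np.outer(yi - mean, yi - mean) for wi, yi in zip(w, ys))
    return mean, cov
```

For a linear function the transform is exact; for nonlinear functions the approximation quality, and hence the filtering accuracy, depends directly on the chosen kappa, which is why its selection matters.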
     Given the difficulty of choosing the scaling factor, new selection methods are needed. Research on scaling-factor selection is scarce, so this thesis proposes selecting the scaling factor with the differential evolution (DE) algorithm. In recent years DE has been applied to many optimization problems and has performed especially well on system-parameter selection. Two models are proposed: piecewise selection and per-step selection. Piecewise selection treats a segment of the filtering run as the optimization object and finds one scaling factor that minimizes the filtering error over that segment. Per-step selection optimizes the entire filtering process, choosing a separate scaling factor at every time step. The former is simple and fast; the latter improves filtering accuracy markedly. Both methods were tested on systems of different dimensions. The results show that filtering accuracy with DE-selected factors is clearly higher than with the conventional method, and the advantage is larger in high-dimensional systems. More importantly, DE lets the scaling factor adapt to the current filtering state, which prevents the sigma-point distribution from deviating and solves the random-divergence problem.
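A minimal DE/rand/1/bin loop shows the kind of search involved. This is an illustrative sketch only: in the thesis the objective would be the filtering error over a segment given a candidate scaling factor, whereas here `obj` is a generic stand-in, and the hyperparameters (population size, F, CR) are conventional defaults, not values from the thesis.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, F=0.6, CR=0.9,
                           gens=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds (a sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()
```

In piecewise selection the whole loop runs once per filtering segment; in per-step selection it runs at every time step, which explains the accuracy/cost trade-off between the two models.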
     The DE selection results lead to a conclusion: in a UKF, the scaling factor should be adjusted according to the current filtering state to obtain good performance. But DE is a posterior optimization method; it needs a set of error-free reference values as a standard to steer the scaling factor toward the optimum. In a real-time filtering environment no such error-free standard exists, so posterior optimization is impossible and another method is needed to adjust the scaling factor online.
     Support vector regression (SVR) is the extension of the support vector machine to regression problems. It can approximate complicated nonlinear continuous functions to arbitrary accuracy and suits small-sample, nonlinear, and high-dimensional problems. Given input and output data and suitable model parameters, it builds a regression model mapping one to the other. This thesis therefore proposes adjusting the scaling factor of the unscented transform in real time with SVR. The DE selection results serve as training samples, from which SVR builds a regression model of the scaling factor. Experimental analysis led to the radial basis function kernel as the kernel of the regression model, and the scaling factor is chosen from the eigenvalues of the current filtering covariance. This SVR-based UKF model not only improves filtering accuracy but also reduces the probability of random divergence and its impact on accuracy; it is an effective scaling-factor selection method.
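The shape of such a regression model can be sketched with an RBF kernel. For brevity this uses kernel ridge regression as a stand-in for full SVR training (it shares the kernel machinery but not the epsilon-insensitive loss); the intended mapping, from covariance-eigenvalue features to a scaling factor, follows the thesis, while `gamma` and `lam` are illustrative hyperparameters.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class RBFRegressor:
    """RBF-kernel regressor (kernel ridge, a stand-in for SVR training).

    Intended use here: X = eigenvalue features of the current filtering
    covariance, y = DE-selected scaling factors."""
    def __init__(self, gamma=1.0, lam=1e-3):
        self.gamma, self.lam = gamma, lam
    def fit(self, X, y):
        K = rbf_kernel(X, X, self.gamma)
        self.X = X
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self
    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.alpha
```

Once fitted offline on the DE samples, `predict` is a cheap kernel evaluation, which is what makes per-step online adjustment of the scaling factor feasible.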
     When using SVR it was found that conventional SVR cannot regress several output variables at once. This limits its range of application and raises the problem of multi-dimensional-output support vector regression (MSVR). This thesis first introduces three solutions: MSVR based on a hypersphere loss function, MSVR based on co-kriging, and MSVR based on vector virtualization. Analysis and experimental comparison lead to the conclusion that the hypersphere-loss method is the most accurate. It changes only the definition of the loss function, replacing the hypercube insensitive zone with a hypersphere, which solves the problem of unequal penalties across output dimensions in multi-output regression. The hypersphere loss incorporates the fitting error of every component, so it optimizes overall performance while remaining strongly noise-resistant and robust. It requires the smallest modification, introduces no new parameters, shares its main parameters with conventional single-output SVR, and is the easiest to use. The co-kriging method expresses the relations among the output dimensions through interpolated statistical estimates; in the approximation it must choose a suitable variogram model to fit, and that choice is a source of uncertainty. It uses the covariances of the variables in different dimensions as the kernel and must compute cross-covariances; the whole procedure is cumbersome and has many parameters, and coordinating two sets of system parameters makes it harder to apply. The virtualization method extends the feature space with a binary representation and preserves the formal completeness of the data through vectorization; but after the dimension extension the vector computations become complicated and efficiency drops in high-dimensional systems, and the kernel must introduce a coefficient function tuning the similarity between output dimensions, which adds uncertainty to the regression.
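The contrast between the hypercube and hypersphere insensitive zones can be made concrete with the two loss functions themselves (a sketch; the full MSVR training problem built on the hypersphere loss is beyond this snippet).

```python
import numpy as np

def loss_hypercube(err, eps):
    """Componentwise epsilon-insensitive loss: each output dimension is
    penalized separately (the hypercube insensitive zone)."""
    return float(np.sum(np.maximum(np.abs(err) - eps, 0.0)))

def loss_hypersphere(err, eps):
    """Hypersphere epsilon-insensitive loss: a single penalty on the
    Euclidean norm of the whole error vector, so all components share
    one insensitive region."""
    return max(float(np.linalg.norm(err)) - eps, 0.0)
```

For an error vector like (0.9, 0.9) with eps = 1, the hypercube loss is zero because each component lies inside its own band, while the hypersphere loss is positive because the combined deviation already exceeds eps; this is the sense in which the hypersphere loss couples the output dimensions.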
     With the multi-output regression problem solved, SVR can build fitted models of complex systems: SVR supplies a mathematical model of the system and estimates of its state, while filtering reduces the error. This SVR-based UKF model is applied to time-series analysis. Because time-series data are intrinsically complex and uncertain, they are strongly non-stationary, nonlinear, and non-Gaussian. Traditional modelling methods tend to be one-sided and time-limited and rarely reach the desired prediction accuracy. The proposed model is therefore used to analyze time-series data in two applications: anomaly detection and stock prediction. Fitted models built from historical records estimate and predict the system data effectively. In the anomaly-detection application, a state model of sewage-treatment data is built to detect and identify anomalous records. In stock-market prediction, a state model of a stock index is built and the index is predicted one day or five days ahead. Experiments show that, provided an accurate state-space model is built, the method estimates and predicts time-series data well.
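The forecasting setup, building a model from lagged historical windows and predicting the next value, can be sketched as follows. This uses a plain least-squares autoregression as a simple stand-in for the SVR state model; the window length `order` is an illustrative choice.

```python
import numpy as np

def one_step_forecast(series, order=3):
    """Fit a linear autoregression on lagged windows (least squares, a
    stand-in for the thesis's SVR model) and predict the next value."""
    s = np.asarray(series, dtype=float)
    # Each row of X is a window of `order` past values; y is the value
    # immediately following that window.
    X = np.array([s[i:i + order] for i in range(len(s) - order)])
    y = s[order:]
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return float(np.r_[s[-order:], 1.0] @ coef)
```

In the thesis's pipeline the fitted model plays the role of the state equation, and the UKF then filters its predictions to reduce the estimation error; the same fit-on-history, predict-ahead pattern extends to the 1-day and 5-day index forecasts.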
