Research on Hyperspectral Imagery Classification Based on Machine Learning Methods
Abstract
Hyperspectral imagery classification is one of the key technologies in hyperspectral remote sensing applications, and it underpins resource investigation, environmental monitoring, precision agriculture, military surveying and mapping, and battlefield reconnaissance. To meet the accuracy, speed, and reliability requirements of hyperspectral imagery classification, and given that hyperspectral data are high-dimensional, highly correlated, and not linearly separable, this dissertation investigates image classification and feature extraction using machine learning methods. The main contributions are as follows:
     1. To address the slow convergence of multiplicative-update non-negative matrix factorization (NMF) for hyperspectral imagery, a fast factorization algorithm based on an improved projected gradient is presented. Block coordinate descent splits the overall NMF optimization into two sub-problems that are solved alternately, each with the improved projected-gradient method. Experiments show that this method markedly accelerates the convergence of NMF on hyperspectral imagery.
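As a minimal illustration of the alternating two-sub-problem structure, the following NumPy sketch runs baseline multiplicative-update NMF on a toy bands-by-pixels matrix. This is the slow baseline the dissertation improves upon, not the improved algorithm itself; a projected-gradient variant would replace each multiplicative update with a gradient step followed by projection onto the non-negative orthant.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=500, eps=1e-10):
    """Baseline multiplicative-update NMF: V (m x n) ~ W (m x r) @ H (r x n).
    The overall problem is split into two sub-problems, updating H with W
    fixed and then W with H fixed; both factors stay non-negative."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # sub-problem in H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # sub-problem in W
    return W, H

# toy "hyperspectral" matrix: 20 bands x 50 pixels, mixed from 3 sources
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 50))
W, H = nmf_multiplicative(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

For hyperspectral unmixing, the columns of `W` play the role of endmember spectra and the columns of `H` the per-pixel abundances.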
     2. To address kernel-parameter selection in generalized discriminant analysis (GDA) feature extraction for hyperspectral imagery, a selection method for the Gaussian radial basis function (RBF) kernel parameter is presented. The training samples are first normalized to a common numeric range, the parameter space is discretized on a logarithmic scale, and a suitable parameter is then chosen by cross-validation. Experiments show that with the selected parameter, GDA feature extraction significantly improves hyperspectral classification accuracy.
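The recipe in item 2 (normalize, discretize the parameter space logarithmically, cross-validate) can be sketched as follows. To keep the sketch self-contained it scores each candidate RBF width with a simple kernel ridge classifier rather than GDA, so the scoring model and the ridge parameter `lam` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cv_select_gamma(X, y, gammas, k=5, lam=1e-2):
    """Pick the RBF width that maximizes k-fold CV accuracy of a kernel
    ridge classifier (labels in {-1, +1})."""
    # normalize features to [0, 1] so one log-spaced grid fits all bands
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
    idx = np.arange(len(X))
    folds = np.array_split(idx, k)
    best_acc, best_gamma = -1.0, None
    for g in gammas:                          # log-discretized candidates
        acc = []
        for f in folds:
            tr = np.setdiff1d(idx, f)
            K = rbf_kernel(X[tr], X[tr], g)
            alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])
            pred = np.sign(rbf_kernel(X[f], X[tr], g) @ alpha)
            acc.append((pred == y[f]).mean())
        if np.mean(acc) > best_acc:
            best_acc, best_gamma = np.mean(acc), g
    return best_gamma

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
y = np.sign(X[:, 0] * X[:, 1] + 0.1)          # nonlinearly separable labels
gamma = cv_select_gamma(X, y, gammas=np.logspace(-2, 2, 9))
```

The same grid-plus-cross-validation loop applies unchanged when the inner model is GDA followed by a classifier, as in the dissertation.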
     3. A reduced-set (RS) support vector machine (SVM) method for hyperspectral imagery classification is proposed. A high-accuracy multi-class SVM classifier is built with the sequential minimal optimization algorithm and cross-validated grid search for parameter selection, and the reduced-set vectors are obtained by solving the pre-image problem with a differential evolution algorithm. Experiments confirm that the SVM generalizes well and that RS-SVM preserves classification accuracy while substantially increasing classification speed.
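The differential evolution (DE) optimizer used for the pre-image problem in item 3 can be sketched generically. For self-containment it minimizes a stand-in objective (the sphere function) rather than the actual kernel-space pre-image distance, which would require a trained SVM; the population size and control parameters are illustrative defaults.

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random members,
    apply binomial crossover, keep the trial only if it improves.
    bounds is a sequence of (low, high) box constraints per dimension."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    dim = len(bounds)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # ensure at least one gene crosses
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:                   # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# stand-in objective; for a pre-image search, f would be the kernel-space
# distance between a candidate input vector and the reduced-set expansion
x_best, f_best = differential_evolution(lambda x: (x ** 2).sum(), [(-5, 5)] * 3)
```

DE needs only function evaluations, no gradients, which is why it suits the non-convex pre-image objective.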
     4. A relevance vector machine (RVM) based fuzzy classification method for hyperspectral imagery is proposed. A sequential sparse Bayesian learning algorithm speeds up RVM training, and for the multi-class RVM classifier built by one-against-one decomposition, the pairwise-coupled probability outputs are converted into membership degrees over the ground-cover classes. Comparison with SVM shows that the RVM has simpler parameter selection and classifies faster; the fuzzy memberships can flag mixed pixels and thus improve the reliability of the classification.
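One simple way to turn the one-against-one probability outputs of item 4 into class memberships is to average each class's pairwise win probabilities. This averaging scheme is an illustrative assumption, simpler than the iterative pairwise-coupling estimates in the literature, but it shows the shape of the conversion.

```python
import numpy as np

def memberships_from_pairwise(R):
    """R[i, j] = estimated P(class i beats class j) from the (i, j) binary
    classifier, with R[i, j] + R[j, i] = 1. Returns a membership vector
    that sums to 1 by averaging each class's pairwise probabilities."""
    k = R.shape[0]
    mu = (R.sum(axis=1) - np.diag(R)) / (k - 1)  # mean win-probability per class
    return mu / mu.sum()

# toy 3-class pixel: class 0 wins most pairwise comparisons
R = np.array([[0.5, 0.9, 0.8],
              [0.1, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
mu = memberships_from_pairwise(R)
```

A pixel whose membership vector is sharply peaked is assigned confidently; near-uniform memberships flag a likely mixed pixel, which is what makes the fuzzy output useful for reliability.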
     5. An AdaBoost-based ensemble classification method for hyperspectral imagery is proposed. Decision stumps serve as weak classifiers and are boosted into a piecewise-linear strong classifier by the Gentle AdaBoost algorithm, with one-against-rest decomposition handling the multi-class problem. Experiments show that the method trains and classifies quickly and attains higher accuracy than common classification methods.
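The boosting-of-stumps idea in item 5 can be sketched compactly. For brevity this implements discrete AdaBoost rather than the Gentle variant named above, on binary labels in {-1, +1}; one-against-rest would simply train k such boosters.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best weighted decision stump: threshold one feature, either polarity."""
    best_err, best_params = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best_params = err, (j, t, s)
    return best_err, best_params

def adaboost(X, y, rounds=10):
    """Discrete AdaBoost: upweight mistakes, combine stumps by weighted vote."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, (j, t, s) = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    def predict(Xq):
        score = sum(a * s * np.where(Xq[:, j] <= t, 1, -1)
                    for a, j, t, s in ensemble)
        return np.sign(score)
    return predict

# 1-D data with interleaved classes: no single stump separates it, but the
# boosted combination of thresholds carves out the correct intervals
X = np.arange(8, dtype=float).reshape(-1, 1)
y = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
predict = adaboost(X, y, rounds=10)
acc = (predict(X) == y).mean()
```

The combined classifier is a weighted sum of threshold functions, which is exactly the piecewise strong classifier the abstract describes.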
     6. The classification accuracy and the training and classification speed of SVM, RVM, and AdaBoost are analyzed both theoretically and experimentally, and their respective potential in different hyperspectral classification applications is identified. SVM suits fine-grained land-cover classification without real-time constraints, RVM suits classification that requires statistically meaningful outputs, and AdaBoost suits fast classification with high accuracy requirements.
