Research on Feature Extraction and Classification of Hyperspectral Remote Sensing Images
Abstract
Hyperspectral remote sensing images combine the spectral information of a target with the image information that reflects its spatial and geometric relationships. With this "unified spectrum and image" characteristic, they can effectively improve the classification and monitoring of land cover, and they have been widely applied in resource exploration, environmental monitoring, disaster assessment, precision agriculture, target recognition and other fields, with the range of applications still expanding. As technology advances, the spectral resolution of these images keeps increasing, so the acquired hyperspectral data volumes grow ever larger and the correlation between bands becomes higher. Moreover, different land-cover types may share the same spectral reflectance (spectral information), so it is difficult to obtain satisfactory results when distinguishing them from spectral information alone. Compared with images of low or medium spatial resolution, high-spatial-resolution hyperspectral images are much richer in information: besides spectral information, they contain abundant structural, textural and shape details of the land cover. When traditional methods are used to classify hyperspectral images, the small-sample problem arises: training samples are insufficient and parameter estimates unreliable, leading to the curse of dimensionality. Research on methods and techniques for hyperspectral image feature extraction and classification therefore has important theoretical significance and broad application prospects.
     Building on an analysis of existing feature extraction methods and techniques for hyperspectral remote sensing images, this thesis focuses on methods for extracting spectral and spatial features and on fusing them effectively, so as to improve the classification accuracy and efficiency of hyperspectral images. The main research contents and contributions are as follows:
     1. To address the small-sample problem in hyperspectral remote sensing images, a semi-supervised feature extraction method (SELD) combining local information and class information is proposed. The method integrates an unsupervised feature extraction method (LLFE) with a supervised one (LDA), providing a new framework for semi-supervised feature extraction from hyperspectral images: the local neighborhood information of the hyperspectral data is extracted and preserved by the unsupervised part of SELD (LLFE) from a large number of unlabeled samples, while the class information is captured by the supervised part (LDA) from a small number of labeled samples. Experiments show that SELD reduces the dimensionality of hyperspectral images while retaining both the local neighborhood information and the class information, effectively improving classification accuracy and efficiency.
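The abstract does not give the SELD equations, so the following Python sketch is only an illustration of the underlying idea: an LDA-style discriminant criterion estimated from labeled pixels is traded off against a neighborhood-preserving scatter estimated from unlabeled pixels, and the projection follows from a generalized eigenproblem. Unlike SELD, which is described as having no free parameters, this toy version uses an explicit trade-off weight `alpha`; all function names here are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lda_scatters(X, y):
    """Between-class (Sb) and within-class (Sw) scatter from labeled samples."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def locality_scatter(X, k=10):
    """Local scatter from a k-NN graph: sum_ij W_ij (x_i - x_j)(x_i - x_j)^T = 2 X^T L X."""
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)                                   # symmetrise the graph
    L = np.diag(np.asarray(W.sum(axis=1)).ravel()) - W.toarray()
    return X.T @ L @ X

def semi_supervised_projection(X_lab, y_lab, X_unlab, n_components=10, alpha=0.5, k=10):
    """Illustrative semi-supervised projection: maximise class discrimination (labeled)
    while penalising distortion of local neighborhoods (labeled + unlabeled)."""
    Sb, Sw = lda_scatters(X_lab, y_lab)
    Sl = locality_scatter(np.vstack([X_lab, X_unlab]), k=k)
    A = alpha * Sb
    B = (1 - alpha) * Sl + alpha * Sw + 1e-6 * np.eye(Sb.shape[0])  # ridge for stability
    vals, vecs = eigh(A, B)                               # generalized eigenproblem A v = w B v
    return vecs[:, np.argsort(vals)[::-1][:n_components]]  # d x n_components projection matrix
```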
     2. For high-spatial-resolution hyperspectral images of urban areas, morphological profiles with partial reconstruction are applied to extract spatial information. Openings and closings with classical morphological operators can effectively detect the spatial information in an image (the size and shape of its objects): as the scale of the structuring element increases, more and more objects gradually disappear from the image, where the shape of the structuring element corresponds to the shape of the objects and its scale to their size. However, classical openings and closings tend to deform object shapes. The commonly used morphological reconstruction is a complete reconstruction process that solves this problem and preserves object shapes well, but as the scale increases, small objects that should have disappeared remain in the image because of the reconstruction, so the size information of objects is not well exploited. Partial reconstruction reconstructs only the pixels that satisfy certain conditions, so that as the scale of the structuring element increases, objects disappear at their corresponding scales; both the shape information and the size information of the objects are thus preserved.
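As a point of reference for this construction, the sketch below builds a classical morphological profile and a profile by reconstruction on a single grayscale band with scikit-image, using disk-shaped structuring elements of increasing radius; the partial-reconstruction variant proposed in the thesis is not implemented here.

```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, opening, closing, reconstruction

def morphological_profile(band, radii=(2, 4, 6, 8), use_reconstruction=True):
    """Stack of openings and closings of one band at increasing structuring-element
    sizes. With reconstruction the object shapes are preserved; the thesis's partial
    reconstruction (not shown) additionally keeps the scale at which objects vanish."""
    profile = [band.astype(float)]
    for r in radii:
        se = disk(r)
        if use_reconstruction:
            # opening/closing by reconstruction: erode (dilate), then reconstruct
            op = reconstruction(erosion(band, se), band, method='dilation')
            cl = reconstruction(dilation(band, se), band, method='erosion')
        else:
            op = opening(band, se)
            cl = closing(band, se)
        profile += [op, cl]
    return np.stack(profile, axis=-1)   # H x W x (1 + 2 * len(radii)) feature cube
```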
     3. Fusing the spectral and spatial information into a unified representation by morphological profiles makes full use of both the spectral features and the spatial morphological features of hyperspectral images. However, the ever-growing dimensionality of these profiles increases the computational load and brings back the small-sample problem. A semi-supervised feature extraction method for morphological profiles (GSELD) is therefore proposed, applying semi-supervised feature extraction to hyperspectral morphological profiles for the first time. First, principal component analysis extracts several principal components (accounting for 99% of the cumulative eigenvalues) from the hyperspectral image to fully exploit its spectral information, and morphological profiles are constructed on each selected component with different structuring elements and scales. These profiles contain the spatial information of the image, namely the size and shape of its objects, but they form high-dimensional and highly redundant data. Using a limited number of training samples, GSELD extracts effective features from the high-dimensional morphological profiles and reduces the data dimensionality, while effectively improving the classification accuracy and efficiency of various classifiers.
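A minimal sketch of the first part of this pipeline, assuming the hyperspectral cube is an H x W x B array and reusing the hypothetical morphological_profile() from the previous sketch: PCA keeps enough components for 99% of the variance and a profile is built on each retained component. The GSELD step that follows is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_morphological_features(cube, var_kept=0.99, radii=(2, 4, 6, 8)):
    """cube: H x W x B hyperspectral image. Keep enough principal components to retain
    `var_kept` of the variance, build a morphological profile on each retained
    component image, and stack everything into one spectral-spatial feature cube."""
    H, W, B = cube.shape
    pca = PCA(n_components=var_kept, svd_solver='full')      # float n_components = variance kept
    pcs = pca.fit_transform(cube.reshape(-1, B)).reshape(H, W, -1)
    profiles = [morphological_profile(pcs[..., i], radii=radii)   # see earlier sketch
                for i in range(pcs.shape[-1])]
    return np.concatenate(profiles, axis=-1)                  # H x W x (n_pc * profile length)
```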
     4. A nonlinear feature extraction method for hyperspectral remote sensing images, fast iterative kernel principal component analysis (FIKPCA), is proposed. In practical applications, linear feature extraction methods lose the nonlinear characteristics of the data after dimensionality reduction, while traditional nonlinear methods suffer from heavy computation, large memory consumption and slow speed. FIKPCA estimates the eigenvectors iteratively, without performing an eigendecomposition of the Gram matrix, which greatly reduces the time and space complexity and thus improves the computational efficiency of nonlinear feature extraction in hyperspectral image processing.
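FIKPCA itself is not specified in the abstract; as a simplified illustration of the iterative idea, the sketch below estimates the leading kernel principal components by power iteration with deflation instead of a full eigendecomposition. Unlike FIKPCA, it still builds the Gram matrix explicitly, so it is only practical for a modest subset of pixels.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """RBF Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def iterative_kpca(X, n_components=5, gamma=1.0, n_iter=200, seed=0):
    """Leading kernel principal components by power iteration with deflation:
    eigenvectors of the centered Gram matrix are estimated iteratively rather than
    by a full eigendecomposition (a simplified stand-in for the thesis's FIKPCA)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ rbf_gram(X, gamma) @ J              # kernel matrix centered in feature space
    feats = []
    for _ in range(n_components):
        v = rng.standard_normal(n)
        for _ in range(n_iter):                  # power iteration for the dominant eigenpair
            v = Kc @ v
            v /= np.linalg.norm(v)
        lam = v @ Kc @ v                         # Rayleigh quotient ~ leading eigenvalue
        feats.append(np.sqrt(max(lam, 0.0)) * v) # projection of each sample on this component
        Kc = Kc - lam * np.outer(v, v)           # deflate before the next component
    return np.column_stack(feats)                # n_samples x n_components features
```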
     5. A method for constructing morphological profiles on nonlinear features is proposed. When morphological profiles are built on linear features extracted from high-dimensional hyperspectral images, the nonlinear relations between pixels are often ignored during feature extraction and dimensionality reduction. Kernel principal components, as nonlinear features, better exploit the higher-order statistics and nonlinear relations in the data, and building morphological profiles on them markedly improves the classification accuracy and efficiency for hyperspectral images.
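A possible sketch of this step, assuming scikit-learn's KernelPCA as a stand-in for the nonlinear features and reusing the hypothetical morphological_profile() from above: the kernel model is fitted on a random subset of pixels to keep the Gram matrix tractable, all pixels are projected onto a few kernel principal components, and a profile is built on each component image.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_morphological_features(cube, n_components=5, gamma=None,
                                radii=(2, 4, 6, 8), n_fit=5000, seed=0):
    """Morphological profiles built on kernel (nonlinear) principal components.
    The KPCA model is fitted on a random pixel subset (memory-friendly) and then
    applied to all pixels; profiles reuse morphological_profile() defined earlier."""
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B).astype(float)
    rng = np.random.default_rng(seed)
    idx = rng.choice(pixels.shape[0], size=min(n_fit, pixels.shape[0]), replace=False)
    kpca = KernelPCA(n_components=n_components, kernel='rbf', gamma=gamma).fit(pixels[idx])
    kpcs = kpca.transform(pixels).reshape(H, W, n_components)
    profiles = [morphological_profile(kpcs[..., i], radii=radii)
                for i in range(n_components)]
    return np.concatenate(profiles, axis=-1)     # H x W x (n_components * profile length)
```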
     In summary, this thesis investigates spectral and spatial feature extraction and classification for hyperspectral remote sensing images in depth. It proposes the semi-supervised feature extraction method SELD, which combines local and class information, and a spatial feature extraction method based on morphological profiles with partial reconstruction. The spectral and spatial information of hyperspectral images is fused and expressed uniformly as morphological profiles, and the semi-supervised method GSELD is proposed for dimensionality reduction and feature extraction of these high-dimensional profiles. For nonlinear features, the fast iterative kernel principal component analysis (FIKPCA) method and a method for constructing morphological profiles on nonlinear features are proposed. Simulations and extensive experiments on real hyperspectral images show that the proposed feature extraction methods outperform related methods and effectively improve the classification accuracy and efficiency of hyperspectral remote sensing images.
Recent advances in sensor technology have led to an increased availability of hyperspectral remote sensing data at very high spectral and spatial resolutions. Many techniques have been developed to explore the spectral and spatial information of these data. In particular, feature extraction (FE), which aims to reduce the dimensionality of hyperspectral data while keeping as much spectral information as possible, is one way to preserve the spectral information, while morphological profile analysis is among the most popular methods used to explore the spatial information.
     Hyperspectral sensors collect information as a set of images represented by hundreds of spectral bands. While offering much richer spectral information than regular RGB and multispectral images, the high-dimensional hyperspectral data also create a challenge for traditional spectral data processing techniques. Conventional classification methods perform poorly on hyperspectral data due to the curse of dimensionality (i.e. the Hughes phenomenon: for a limited number of training samples, the classification accuracy decreases as the dimension increases). Classification techniques in pattern recognition typically assume that enough training samples are available to obtain reasonably accurate class descriptions in quantitative form. However, this assumption is frequently not satisfied for hyperspectral remote sensing data classification, because collecting ground truth for the observed data can be considerably difficult and expensive. In contrast, techniques that make accurate estimates from only small training sets can save considerable time and cost. The small sample size problem therefore becomes a very important issue for hyperspectral image classification.
     Very high-resolution remotely sensed images from urban areas have recently become available. The classification of such images is challenging because urban areas often comprise a large number of different surface materials, and consequently the heterogeneity of urban images is relatively high. Moreover, different information classes can be made up of spectrally similar surface materials. It is therefore important to combine spectral and spatial information to improve the classification accuracy. In particular, morphological profile analysis is one of the most popular methods to explore the spatial information of high-resolution remote sensing data. When using morphological profiles (MPs) to explore the spatial information for the classification of hyperspectral data, one should consider three important issues. Firstly, classical morphological openings and closings degrade the object boundaries and deform the object shapes, while the morphological profile by reconstruction leads to some unexpected and undesirable results (e.g. over-reconstruction). Secondly, the generated MPs produce high-dimensional data, which may contain redundant information and create a new challenge for conventional classification methods, especially for classifiers that are not robust to the Hughes phenomenon. Last but not least, the linear features on which MPs are built lose too much spectral information when extracted from the original hyperspectral data.
     In order to overcome these problems and improve the classification results, we develop effective feature extraction algorithms and combine morphological features for the classification of hyperspectral remote sensing data. The contributions of this thesis are as follows.
     1. As the first contribution of this thesis, a novel semi-supervised local discriminant analysis (SELD) method is proposed for feature extraction in hyperspectral remote sensing imagery, with improved performance in both ill-posed and poorly posed conditions. The proposed method combines unsupervised methods (Local Linear Feature Extraction (LLFE) methods) and a supervised method (Linear Discriminant Analysis (LDA)) in a novel framework without any free parameters. The underlying idea is to design an optimal projection matrix which preserves the local neighborhood information inferred from unlabeled samples, while simultaneously maximizing the class discrimination of the data inferred from the labeled samples.
     2. Our second contribution is the application of morphological profiles with partial reconstruction to explore the spatial information in hyperspectral remote sensing data from urban areas. Classical morphological openings and closings degrade the object boundaries and deform the object shapes. Morphological openings and closings by reconstruction can avoid this problem, but this process leads to some undesirable effects: objects expected to disappear at a certain scale remain present, which means that object size is often incorrectly represented. Morphological profiles with partial reconstruction improve upon both classical MPs and MPs with reconstruction. The shapes of objects are better preserved than with classical MPs, and the size information is preserved better than in MPs by reconstruction.
     3. A novel semi-supervised feature extraction framework for dimension reduction of the generated morphological profiles is the third contribution of this thesis. Morphological profiles (MPs) with different structuring elements and a range of increasing sizes of the morphological operators produce high-dimensional data. These high-dimensional data may contain redundant information and create a new challenge for conventional classification methods, especially for classifiers that are not robust to the Hughes phenomenon. To the best of our knowledge, the use of semi-supervised feature extraction methods on the generated morphological profiles has not been investigated yet. The proposed generalized semi-supervised local discriminant analysis (GSELD) is an extension of SELD with a data-driven parameter.
     4. In our fourth contribution, we propose a fast iterative kernel principal component analysis (FIKPCA) to extract features from hyperspectral images. In many applications, linear FE methods, which depend on linear projection, can result in loss of the nonlinear properties of the original data after dimensionality reduction, while traditional nonlinear methods place heavy demands on storage resources and computational load. The proposed method is a kernel version of Candid Covariance-Free Incremental Principal Component Analysis, which estimates the eigenvectors through iteration. Without performing an eigendecomposition of the Gram matrix, our approach greatly reduces both the space complexity and the time complexity.
     5. Our last contribution constructs MPs with partial reconstruction on nonlinear features. Traditional linear features, on which morphological profiles are usually built, lose too much spectral information. Nonlinear features are more suitable to describe higher-order, complex and nonlinear distributions. In particular, kernel principal components are among the nonlinear features we used to build MPs with partial reconstruction, which led to significant improvements in terms of classification accuracies.
     The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in terms of accuracies in different fields of application when compared to other state-of-the-art methods.