Hyperspectral Image Classification and Anomaly Detection Based on Manifold Learning Algorithms
Abstract
High spectral resolution enables hyperspectral remote sensing data to capture rich spectral and spatial information about the Earth's surface and to reflect the intrinsic physical and chemical properties of land cover, so that materials indistinguishable in broadband remote sensing become separable in hyperspectral imagery. Land-cover classification and anomaly detection are important applications of hyperspectral remote sensing: they play a key role in understanding land-cover distribution and in detecting targets of interest, and they form the main subject of this dissertation.
     Hyperspectral images are of very high dimensionality and suffer from data redundancy, noise, and the curse of dimensionality; moreover, not every feature contributes to the problem under analysis. Dimensionality reduction is an effective remedy: it can uncover the low-dimensional structure hidden in high-dimensional data, lighten the burden of subsequent processing, and potentially improve the quality of data analysis. Manifold learning is an important class of nonlinear dimensionality reduction methods. It assumes that high-dimensional data lie on a low-dimensional manifold that represents the intrinsic structure and nonlinear characteristics of the data. Because hyperspectral data exhibit inherent nonlinearity, linear dimensionality reduction may discard important nonlinear information; this dissertation therefore studies hyperspectral data analysis based on manifold-learning nonlinear dimensionality reduction. Manifold learning methods divide into global and local algorithms; this dissertation focuses on local manifold learning algorithms for hyperspectral image classification and anomaly detection, with the following main contributions:
     (1) One limitation of manifold learning algorithms is that they cannot generalize to new data. This dissertation adopts the kernel-based generalization framework proposed by Bengio to extend several manifold learning algorithms. The key to this framework is the kernel function associated with each manifold learning algorithm; for the local tangent space alignment algorithm, the dissertation derives this kernel function and thereby achieves generalization to new data.
     (2) For hyperspectral image classification, the dissertation compares several manifold learning algorithms, evaluating the classification performance of each dimensionality reduction method with a k-nearest-neighbor classifier, in order to better understand the behavior of manifold learning and the characteristics of hyperspectral data in the manifold domain. Extensive experiments yield several meaningful conclusions: manifold learning is a valuable and promising preprocessing method for hyperspectral data, its greatest advantage being improved discrimination of hard-to-separate classes in two-class problems; in addition, supervised local manifold learning algorithms perform best and can substantially improve classification performance.
     (3) Building on the combination of manifold learning and the k-nearest-neighbor classifier, a new weighted k-nearest-neighbor classifier based on supervised local manifold learning is proposed and applied to hyperspectral image classification. The weights, computed from the kernel function of the manifold learning algorithm, describe the local geometric structure of the data and effectively measure the contribution of each neighbor to the classification of a test point, improving the performance of the k-nearest-neighbor classifier. The classifier is computationally simple, requiring only the neighbor weights, so it is suitable for large data sets and can also mitigate the effect of imbalanced samples on the k-nearest-neighbor classifier.
     (4) To address the problems of anomaly detection in hyperspectral images, robust manifold learning algorithms are employed to prevent anomalies from contaminating the background characteristics, so that a more accurate background manifold is built and detection performance improves. First, for the robust locally linear embedding algorithm, the computational cost is reduced by partitioning the image into sub-blocks, although this forgoes a global dimensionality reduction result. A robust manifold learning method based on background training point selection is then proposed, in which the background training points are obtained by recursive hierarchical image segmentation; this method builds a global data manifold while keeping the background manifold free of anomaly contamination. Among the local manifold learning algorithms, robust local tangent space alignment performs best.
Hyperspectral data with high spectral resolution capture abundant spatial and spectral information, characterize the inherent physical and chemical properties of land cover types, and provide superior capability for discriminating materials compared with multispectral data. Classification and anomaly detection in hyperspectral data are the focus of this research; they facilitate understanding the land cover distribution and detecting targets of interest.
     The high number of spectral bands, interband spectral redundancy, and the ever-present noise pose challenging problems for the analysis of hyperspectral data. Moreover, not all bands are important for understanding the phenomena of interest. Dimensionality reduction is an important preprocessing step for many approaches to hyperspectral data analysis: it can expose the inherent low-dimensional structure, reduce the computational complexity, and improve the performance of data analysis. Manifold learning was proposed for nonlinear dimensionality reduction. It assumes that the original high-dimensional data lie on a low-dimensional manifold that characterizes the structure and nonlinear properties of the data. Since hyperspectral data exhibit intrinsic nonlinearities, the commonly used linear feature extraction methods may lose important nonlinear properties of hyperspectral data, motivating the study of manifold-learning-based nonlinear dimensionality reduction for hyperspectral data analysis. Manifold learning methods are categorized as global or local techniques. This study focuses on local manifold learning methods for hyperspectral image classification and anomaly detection. The main work is as follows:
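As a concrete illustration of a local manifold learning method, the locally linear embedding (LLE) step can be sketched in plain NumPy: each point is reconstructed from its nearest neighbors, and the embedding preserves those reconstruction weights. This is a minimal sketch for illustration only, not the implementation used in the experiments; the parameter values are illustrative.

```python
import numpy as np

def lle(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Minimal locally linear embedding (Roweis & Saul style sketch)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]   # skip self
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                             # neighbors centered on x_i
        C = Z @ Z.T                                       # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)      # regularize near-singular C
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                       # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)                        # eigenvalues ascending
    return vecs[:, 1:n_components + 1]                    # drop the constant eigenvector
```

Embedding a curve sampled in 3-D recovers a low-dimensional parameterization; the supervised and robust variants studied here differ mainly in how the neighborhoods and weights are chosen.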
     (1) Traditional manifold learning methods are defined only on the training data and lack generalization to new data; therefore, the kernel-based out-of-sample extension framework proposed by Bengio is employed. The key step of this approach is to find the kernel function of the specific manifold learning method. Our contribution is to derive the kernel function of the local tangent space alignment (LTSA) algorithm and thereby achieve its generalization to new data.
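The out-of-sample mechanism can be sketched generically. Assuming some data-dependent kernel (here a plain RBF kernel stands in for the derived LTSA kernel, and centering is omitted for brevity), training points are embedded through the eigenvectors of the kernel matrix, and a new point is projected through its kernel values against the training set, in the style of the Bengio Nyström extension:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, d, gamma=0.5):
    """Embed training points via the top-d eigenpairs of the kernel matrix."""
    K = rbf_kernel(X, X, gamma)
    vals, vecs = np.linalg.eigh(K)
    vals, vecs = vals[::-1][:d], vecs[:, ::-1][:, :d]   # top-d eigenpairs
    Y = vecs * np.sqrt(vals)                            # training embedding
    return Y, vals, vecs

def extend(x_new, X, vals, vecs, gamma=0.5):
    """Out-of-sample coordinates of a new point from its kernel row."""
    k = rbf_kernel(x_new[None, :], X, gamma)[0]
    return (k @ vecs) / np.sqrt(vals)
```

A sanity check of the construction: extending a point that is already in the training set reproduces its training coordinates exactly, since K v_k = lambda_k v_k.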
     (2) For hyperspectral image classification, the dissertation compares multiple manifold learning methods via classification with the k-nearest-neighbor (kNN) classifier, with the goal of better understanding the capability of manifold learning for classification and the characteristics of hyperspectral data in the manifold domain. Valuable conclusions are drawn from experiments on several spaceborne and airborne hyperspectral data sets. The results demonstrate that nonlinear manifold learning is a promising dimensionality reduction method. Its greatest advantage lies in discriminating difficult classes in two-category classification problems. Moreover, the supervised local manifold learning methods obtain the best performance and can largely improve the classification accuracy.
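Reduced to its core, the evaluation protocol is: embed the data with some dimensionality reduction method, then score the resulting features by kNN classification accuracy. A sketch of the scoring step, with a plain NumPy kNN (the actual experiments use real hyperspectral scenes and manifold embeddings):

```python
import numpy as np

def knn_accuracy(Xtr, ytr, Xte, yte, k=5):
    """Score a feature representation by k-nearest-neighbor accuracy."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]                     # k nearest training points
    pred = np.array([np.bincount(ytr[row]).argmax() for row in nn])
    return (pred == yte).mean()
```

Running this scorer on the raw spectra and on each embedding gives the direct comparison of dimensionality reduction methods described above.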
     (3) Building on the study of manifold learning in conjunction with the kNN classifier, a new supervised local manifold learning weighted kNN classifier (SLML-WkNN) is proposed and applied to hyperspectral image classification. The weight, calculated by the kernel function of the specific manifold learning method, captures the geometric properties of each neighborhood and provides a meaningful measure of the contribution of each neighbor. The new classifier does not involve explicit dimensionality reduction and is thereby suitable for large data sets. Furthermore, it can mitigate the influence of imbalanced data sets on the kNN classifier.
     (4) Anomaly detection in hyperspectral images suffers from the problem that the background characteristics may be contaminated by anomalies. Therefore, robust manifold learning, which mitigates the influence of anomalies on the background manifold, is employed to improve the detection performance. For the robust locally linear embedding (LLE) algorithm, the image is divided into sub-images to reduce the computational complexity; however, this approach cannot obtain globally dimensionality-reduced data. A robust manifold learning method based on background training data selection is therefore proposed, where the background training data are obtained by recursive hierarchical segmentation. This method both recovers a global data manifold and reduces the complexity. Among the local manifold learning methods, robust LTSA has the best performance.
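The detection logic can be sketched as: fit a low-dimensional background model only on pixels believed to be background, then score every pixel by how poorly that model reconstructs it. The sketch below assumes the background indices are already chosen (the thesis obtains them by recursive hierarchical segmentation) and uses a linear subspace as a stand-in for the robust background manifold:

```python
import numpy as np

def anomaly_scores(X, bg_idx, d=2):
    """Residual energy of each pixel w.r.t. a background model fit on bg_idx."""
    B = X[bg_idx]
    mu = B.mean(axis=0)
    # top-d principal directions of the background pixels only
    U = np.linalg.svd(B - mu, full_matrices=False)[2][:d]
    R = (X - mu) - (X - mu) @ U.T @ U     # component outside the background model
    return (R ** 2).sum(axis=-1)          # large residual => likely anomaly
```

Because the model is fit only on the selected background pixels, anomalies cannot pull the background estimate toward themselves, which is exactly the contamination problem the robust methods address.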
