Research on Rejection Algorithms and Feature Extraction Techniques for Radar High-Resolution Range Profile Target Recognition
Abstract
A radar high-resolution range profile (HRRP) is the vector sum of the sub-echoes of a target's scattering centers projected onto the radar line of sight. It reflects the geometric structure of the scattering target along the line of sight and, compared with synthetic aperture radar (SAR) and inverse synthetic aperture radar (ISAR) images, is easier to acquire and requires far less storage, so it has attracted wide attention in the radar automatic target recognition (RATR) community. Centered on related national defense pre-research projects and projects of the National Natural Science Foundation of China, and motivated by the engineering background of range profile recognition, this dissertation studies RATR theory and techniques from two main aspects: out-of-library (outlier) target rejection and feature extraction. The dissertation comprises five parts: the first is an introduction, the second addresses outlier target rejection, and the third through fifth address feature extraction.
     1. First, the physical properties of high-resolution range profiles are introduced. Then, in the context of RATR engineering applications, the background of the outlier target rejection problem is discussed, and we analyze how this problem differs from pattern recognition problems in the usual sense and identify the main difficulties in solving it.
     2. To address the outlier target rejection problem, we propose a method for artificially generating outlier training samples, which provides a data foundation for subsequent classifier design. To overcome the drawback that the kernel form of the support vector domain description (SVDD) algorithm is overly simple, we extend SVDD from a single kernel to a linear combination of multiple kernels and, depending on the degrees of freedom allowed for the combination coefficients, obtain two extended versions, Multikernel-SVDD1 and Multikernel-SVDD2. Globally optimal solutions of SVDD, Multikernel-SVDD1, and Multikernel-SVDD2 can be obtained by solving quadratic programming (QP), second-order cone programming (SOCP), and semidefinite programming (SDP) problems, respectively. Simulation results show that: (1) owing to the more expressive kernel form, Multikernel-SVDD1 and Multikernel-SVDD2 reject outliers better than SVDD; (2) owing to the higher degrees of freedom of the combination coefficients of the kernel matrices, Multikernel-SVDD2 outperforms Multikernel-SVDD1. These three algorithms all seek hypersphere boundaries in high-dimensional kernel spaces and differ only in the kernel space used. In contrast to such hypersphere boundaries, this part proposes three classifiers with neighboring boundaries, namely the nearest neighbor (NN) classifier, the average K-nearest-neighbor (A-KNN) classifier, and the weighted K-nearest-neighbor (W-KNN) classifier, to handle the rejection problem. Simulation results show that, for radar HRRP outlier rejection, neighboring boundaries outperform hypersphere boundaries. Comparing the three neighbor-based algorithms, we find that the W-KNN classifier outperforms the NN and A-KNN classifiers, which we attribute to its ability to exploit more information while retaining strong local learning ability.
     3. A large margin nearest local mean (LMNLM) algorithm is proposed. Through a linear transformation, the algorithm maps the original Euclidean distance space into a Mahalanobis distance space and introduces a large classification margin into the boundary of the nearest local mean (NLM) classifier in the projected space, with the aim of improving the generalization of the NLM classifier. By eigendecomposing the learned Mahalanobis matrix, the projection matrix can be recovered, thereby realizing feature extraction for HRRP data. LMNLM can be formulated as a semidefinite programming (SDP) problem, whose convexity guarantees the existence of a globally optimal solution. Experimental results show that LMNLM can simultaneously reduce dimensionality and improve separability, and is especially suitable for multimodal HRRP data corrupted by substantial noise and redundant components.
     4. Linear discriminant analysis (LDA) is a typical feature extraction algorithm based on a global criterion and is widely used in pattern recognition. Because global criteria are ill-suited to multimodally distributed data, researchers have proposed algorithms based on local criteria, such as marginal Fisher analysis (MFA) and local discriminant embedding (LDE), to handle the feature extraction and classification of such data. In this part, analyzing robustness and flexibility, we show that global algorithms are more robust but less flexible, whereas local algorithms are the opposite, less robust but more flexible. Combining this with an analysis of how the sampling density of the training data affects recognition, we propose combinatorial discriminant analysis (CDA) to trade off robustness against flexibility, and successfully apply it to radar HRRP target recognition.
     5. Four drawbacks of linear discriminant analysis (LDA) are analyzed: (1) samples of the same class are required to follow a Gaussian distribution; (2) the number of projection vectors is limited; (3) different difference vectors are treated equally when constructing the scatter matrices, so their differing influence on recognition is not reflected; (4) the influence of the norms of the projection vectors on recognition is ignored. To address these drawbacks, we first propose a new feature extraction algorithm, local mean discriminant analysis (LMDA), to remedy the adverse effects of the first three, and then propose a generalized re-weighting (GRW) framework to remedy the fourth. LMDA and GRW can be solved via generalized eigenvalue decomposition and linear programming (LP), respectively, and their combination greatly improves data separability. Experiments on synthetic, benchmark, and radar HRRP data fully demonstrate their effectiveness in improving classification accuracy.
A radar high-resolution range profile (HRRP) is the vector sum of the projections of the complex echoes returned from target scattering centers onto the radar line of sight (LOS), and it reflects the structural information of the scattering target along the LOS. Compared with synthetic aperture radar (SAR) and inverse synthetic aperture radar (ISAR) images, HRRPs are more easily acquired and require much less storage, and hence attract wide attention in the radar automatic target recognition (RATR) community. Against the engineering background of HRRP recognition, this dissertation presents our research on RATR theory and techniques, focusing on two aspects, outlier target rejection and feature extraction; the work was supported by Advanced Defense Research Programs of China and the National Natural Science Foundation of China. The dissertation consists of five sections: Section 1 gives a brief introduction, Section 2 addresses the outlier target rejection problem, and Sections 3, 4, and 5 address feature extraction.
     1. Section 1 first briefly discusses the physical properties of HRRPs. We then introduce the background of the outlier target rejection problem from real engineering considerations, analyze how it differs from traditional pattern recognition problems, and outline the main difficulties in solving it.
     2. In Section 2, we propose a method for artificially generating outlier training samples, which provides data support for the subsequent classifier design. To overcome the drawback that support vector domain description (SVDD) uses an overly simple kernel form, we extend SVDD from a single kernel to a linear combination of multiple kernels and obtain two extended versions, Multikernel-SVDD1 and Multikernel-SVDD2, corresponding to different degrees of freedom in the combination coefficients. Globally optimal solutions of SVDD, Multikernel-SVDD1, and Multikernel-SVDD2 can be obtained by solving quadratic programming (QP), second-order cone programming (SOCP), and semidefinite programming (SDP) problems, respectively. Experimental results show that: (1) owing to the more expressive kernel form, Multikernel-SVDD1 and Multikernel-SVDD2 have better rejection performance than SVDD; (2) owing to the greater freedom in combining the kernel matrices, Multikernel-SVDD2 outperforms Multikernel-SVDD1. SVDD, Multikernel-SVDD1, and Multikernel-SVDD2 all seek hypersphere boundaries in high-dimensional kernel spaces and differ only in the kernel space they operate in. In contrast to such hypersphere boundaries, this section proposes three algorithms with neighboring boundaries, the nearest neighbor (NN) classifier, the average K-nearest-neighbor (A-KNN) classifier, and the weighted K-nearest-neighbor (W-KNN) classifier, to handle the rejection problem. The experiments show that, for radar HRRP outlier rejection, neighboring boundaries outperform the hypersphere boundary. Comparing the three neighbor-based algorithms, we find that W-KNN outperforms both NN and A-KNN, perhaps because W-KNN exploits more information while preserving strong local learning ability.
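As a rough, self-contained illustration of the neighboring-boundary idea (the rank-decaying weighting scheme and the threshold below are illustrative assumptions, not the dissertation's exact W-KNN formulation), the following Python sketch rejects a test profile as an outlier when its weighted average distance to its K nearest in-library training samples exceeds a threshold:

    import numpy as np

    def wknn_reject(x, train, k=5, threshold=8.0):
        # Reject x as an outlier when its weighted average distance to its
        # k nearest in-library training profiles exceeds the threshold.
        # x: (d,) test feature vector; train: (n, d) in-library samples.
        dists = np.linalg.norm(train - x, axis=1)   # Euclidean distances
        nearest = np.sort(dists)[:k]                # k smallest distances
        weights = 1.0 / np.arange(1, k + 1)         # rank-decaying weights (an assumption)
        weights /= weights.sum()
        score = float(weights @ nearest)            # weighted average distance
        return score > threshold, score             # True means "reject as outlier"

    # Usage with synthetic stand-ins for HRRP feature vectors.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 16))              # in-library training set
    inlier = rng.normal(size=16)                    # resembles the library
    outlier = rng.normal(loc=5.0, size=16)          # far from the library
    print(wknn_reject(inlier, train))               # likely (False, ...)
    print(wknn_reject(outlier, train))              # likely (True, ...)

In practice the threshold would be set from the rejection/acceptance operating point desired on validation data rather than fixed a priori.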
     3. A large margin nearest local mean (LMNLM) algorithm is proposed in Section 3. LMNLM maps the original Euclidean distance space to a Mahalanobis one by a linear transformation and introduces large classification margins into the nearest local mean (NLM) classifier in the projected space, in the expectation that the generalization ability of the NLM classifier will improve. By performing eigenvalue decomposition on the learned Mahalanobis matrix, we can recover the projection matrix and thus carry out feature extraction on HRRP data. LMNLM can be expressed as an SDP problem, whose convexity guarantees access to globally optimal solutions. The experimental results show that LMNLM reduces dimensionality and enhances discriminability simultaneously, which makes it especially suitable for multimodally distributed HRRP data corrupted by noise and redundant components.
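The feature extraction step described above, recovering a projection matrix from the learned Mahalanobis matrix, can be sketched as follows (the SDP that produces the matrix M is omitted; M is simply assumed to be given and positive semidefinite):

    import numpy as np

    def projection_from_mahalanobis(M, dim=None, tol=1e-10):
        # Factor a positive semidefinite Mahalanobis matrix M = L.T @ L, so that
        # (x - y) @ M @ (x - y) equals the squared Euclidean distance after
        # projecting with L. Keeping only the leading eigenvectors makes L a
        # dimensionality-reducing projection, i.e., a feature extractor.
        eigvals, eigvecs = np.linalg.eigh(M)               # ascending order
        order = np.argsort(eigvals)[::-1]                  # re-sort descending
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        if dim is None:
            dim = int(np.sum(eigvals > tol))               # numerical rank
        L = np.sqrt(np.clip(eigvals[:dim], 0.0, None))[:, None] * eigvecs[:, :dim].T
        return L                                           # shape (dim, d)

    # Verify the factorization on a random positive semidefinite matrix.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(16, 16))
    M = A @ A.T                                            # PSD by construction
    L = projection_from_mahalanobis(M)
    x, y = rng.normal(size=16), rng.normal(size=16)
    print(np.isclose((x - y) @ M @ (x - y), np.sum((L @ (x - y)) ** 2)))  # True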
     4. Linear discriminant analysis (LDA) is a representative feature extraction algorithm driven by a global criterion and is widely used in pattern recognition. To make up for the unsuitability of global criteria for multimodally distributed data, researchers have proposed algorithms based on local criteria, such as marginal Fisher analysis (MFA) and local discriminant embedding (LDE), to handle the feature extraction and classification of such data. In this section, we analyze algorithms from two aspects, robustness and flexibility, and conclude that global algorithms have stronger robustness but weaker flexibility, in contrast with the weaker robustness and stronger flexibility of local algorithms. Based on an analysis of how the sampling density of the training data affects classification, we propose a new algorithm, combinatorial discriminant analysis (CDA), to seek a proper tradeoff between robustness and flexibility, and then successfully apply it to radar HRRP target recognition.
     5. In Section 5, we show that LDA has four drawbacks: (1) samples of the same class must be Gaussian distributed; (2) the number of available projection vectors is limited; (3) different difference vectors are treated equally, so their differing effects on classification receive no attention; (4) the effect of the norms of the projection vectors on classification is neglected. Based on this analysis, we propose a new feature extraction algorithm, local mean discriminant analysis (LMDA), to remedy the disadvantages caused by the first three drawbacks, and a generalized re-weighting (GRW) framework to remedy the fourth. LMDA and GRW can be solved by generalized eigenvalue decomposition and linear programming (LP), respectively. Their combination significantly enhances discriminability, as demonstrated by extensive experiments on synthetic data, benchmark data, and radar HRRP data.
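As a minimal sketch of the shared computational core, solving a generalized eigenvalue problem for the projection vectors, the code below uses ordinary LDA between/within-class scatter as a stand-in; the dissertation's LMDA replaces these with local-mean-based scatter matrices, and the GRW linear program is not shown:

    import numpy as np
    from scipy.linalg import eigh

    def discriminant_projection(X, y, dim):
        # Build between-class (Sb) and within-class (Sw) scatter matrices and
        # solve the generalized eigenvalue problem Sb w = lam * Sw w; the
        # eigenvectors with the largest eigenvalues are the projection vectors.
        mean = X.mean(axis=0)
        d = X.shape[1]
        Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
        Sw += 1e-6 * np.eye(d)                       # regularize for stability
        eigvals, eigvecs = eigh(Sb, Sw)              # generalized eigenproblem
        return eigvecs[:, np.argsort(eigvals)[::-1][:dim]]   # (d, dim)

    # Usage: project three-class synthetic data to two dimensions.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(loc=m, size=(50, 8)) for m in (0.0, 2.0, 4.0)])
    y = np.repeat([0, 1, 2], 50)
    W = discriminant_projection(X, y, dim=2)
    print((X @ W).shape)                             # (150, 2)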