Research on Multisource Image Fusion Methods
Abstract
Multisource image fusion is widely applied in fields such as military surveillance, computer vision, medical diagnosis, and remote sensing. This dissertation studies multisource image fusion at the pixel, feature, and decision levels and proposes several new analysis and processing methods.
The main goal of pixel-level image fusion is to obtain a visually enhanced image. The dissertation first studies general-purpose multisource fusion aimed at visual enhancement and proposes a multiresolution fusion method built on a statistical fusion model. A general statistical fusion model is formulated, and a sensor-noise term is introduced into it, which effectively suppresses the influence of sensor noise on the fused result. For multispectral image fusion, a new method is then proposed that further incorporates a correlation constraint into the statistical model; it substantially enhances the correlated spatial information while effectively suppressing spectral distortion.
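As a rough illustration of noise-aware multiresolution fusion (a minimal sketch only: the undecimated decomposition and the inverse-variance weights are assumptions chosen for exposition, not the statistical model developed in the dissertation), the following Python fragment decomposes each registered source image into detail planes plus a coarse residual and weights each sensor's contribution by the inverse of its assumed noise variance, so that noisier sensors contribute less:

import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, levels=3):
    # Additive multiresolution decomposition: img = sum(detail planes) + coarse residual.
    planes, current = [], np.asarray(img, dtype=float)
    for k in range(levels):
        smooth = gaussian_filter(current, sigma=2.0 ** k)
        planes.append(current - smooth)      # detail plane at scale k
        current = smooth
    return planes, current

def fuse_inverse_variance(images, noise_vars, levels=3):
    # Fuse registered single-band images of identical size. Detail planes and
    # coarse residuals are mixed with weights proportional to 1/noise_variance,
    # an illustrative stand-in for a statistical model with a sensor-noise term.
    w = np.array([1.0 / v for v in noise_vars], dtype=float)
    w /= w.sum()
    parts = [decompose(im, levels) for im in images]
    fused = sum(wi * coarse for wi, (_, coarse) in zip(w, parts))
    for k in range(levels):
        fused = fused + sum(wi * planes[k] for wi, (planes, _) in zip(w, parts))
    return fused

# Example: fuse a visible and an infrared band, the latter assumed noisier.
# fused = fuse_inverse_variance([visible, infrared], noise_vars=[1.0, 4.0])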
Feature combination and classification are the main topics of feature-level image fusion in the existing literature. The dissertation proposes a new idea for feature-level fusion: exploiting multisource information fusion within the feature-extraction process itself. Based on this idea, a straight-line extraction algorithm is developed that fuses the edge information of multisource images, using edge phase as the fusion element and combining the differing characteristics of the sources through fusion rules. The algorithm can extract line features that cannot be obtained from a single image or from a subset of the images. The line features of roads are then combined with spectral features to yield a road-extraction algorithm for multispectral remote sensing images.
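As an illustration of fusing edge information during feature extraction (a minimal sketch under assumptions of my own: Sobel gradients, a 15-degree phase tolerance, and a simple reinforcement rule, none of which are claimed to be the dissertation's actual fusion rules), the Python fragment below computes edge magnitude and phase for each registered source, keeps the strongest response per pixel, and reinforces it where another source reports a consistent edge direction; the fused edge map could then be thresholded and passed to a Hough transform to recover line segments:

import numpy as np
from scipy.ndimage import sobel

def edge_magnitude_phase(img):
    # Gradient magnitude and orientation (phase) of one band.
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def fuse_edges_by_phase(images, phase_tol=np.deg2rad(15)):
    mags, phases = zip(*(edge_magnitude_phase(np.asarray(im, float)) for im in images))
    mags, phases = np.stack(mags), np.stack(phases)
    best = mags.argmax(axis=0)                                   # strongest source per pixel
    base = np.take_along_axis(mags, best[None], axis=0)[0]
    base_phase = np.take_along_axis(phases, best[None], axis=0)[0]
    fused = base.copy()
    for i, (m, p) in enumerate(zip(mags, phases)):
        # Direction difference modulo pi (lines are undirected).
        dphi = np.abs(np.angle(np.exp(2j * (p - base_phase)))) / 2.0
        support = (i != best) & (dphi < phase_tol) & (m > 0.3 * base)
        fused = np.where(support, fused + 0.5 * m, fused)        # reinforce agreeing edges
    return fused   # e.g. threshold, then run a Hough transform for straight lines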
Decision-level image fusion has a wide range of applications. Dempster-Shafer (D-S) evidence theory is one of the main decision-fusion methods, but the classical theory copes poorly with highly conflicting evidence. The dissertation proposes a new method based on a preprocessing scheme: before the evidence is combined with Dempster's rule, part of the basic probability assignment of conflicting focal elements is transferred to the union of those focal elements, and the combination order is determined by the amount of conflict between the pieces of evidence. Because the preprocessing converts conflict into an uncertain (non-specific) knowledge representation, D-S theory can then handle the combination of conflicting evidence. The improved method is further applied to hyperspectral image classification, giving a D-S based classifier that produces better results than one built on the classical theory.
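To make the evidence-combination step concrete, here is a minimal Python sketch of Dempster's rule together with one plausible reading of the pretreatment described above (the transfer fraction beta and the exact transfer rule are illustrative assumptions, not the dissertation's formulas); mass functions are dictionaries mapping focal elements, represented as frozensets, to basic probability assignments:

from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule of combination for two mass functions.
    # K is the conflict coefficient; the rule is undefined when K == 1.
    combined, K = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            K += a * b
    if K >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {C: v / (1.0 - K) for C, v in combined.items()}

def pretreat(m1, m2, beta=0.5):
    # Illustrative pretreatment: for each focal element that is in total conflict
    # with some focal element of the other source, move a fraction beta of its
    # mass to the union of the two, i.e. turn conflict into non-specific belief.
    def shift(src, other):
        out = dict(src)
        for A, a in src.items():
            for B in other:
                if not (A & B):
                    out[A] -= beta * a
                    out[A | B] = out.get(A | B, 0.0) + beta * a
                    break
        return out
    return shift(m1, m2), shift(m2, m1)

# Example with highly conflicting (Zadeh-style) evidence:
# m1 = {frozenset('a'): 0.9, frozenset('b'): 0.1}
# m2 = {frozenset('a'): 0.1, frozenset('b'): 0.9}
# n1, n2 = pretreat(m1, m2); fused = dempster_combine(n1, n2)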
All the algorithms proposed in the dissertation are applied to real multisource image data, and the experimental results demonstrate their validity and adaptability.