Research on Multimodal Medical Image Preprocessing and Fusion Methods
Abstract
With the rapid development of medical imaging technology, an increasing number of medical images of different modalities are available in clinical practice. To overcome the limitation that a single-modality medical image describes only part of the local detail information, researchers have proposed multimodal medical image fusion. By extracting and combining information from medical images of different modalities, this technique yields a clearer, more comprehensive, accurate, and reliable description of the lesion area, giving doctors a dependable basis for diagnosing disease and formulating appropriate treatment plans. Multimodal medical image fusion is an important branch of multi-source image fusion in the medical field; as an emerging, multidisciplinary research area, it has significant scientific value and is also closely tied to everyday life. After nearly thirty years of development, multimodal medical image fusion has produced many milestone results and a number of mature theories and methods, yet several key steps of the fusion pipeline still leave many problems unsolved. To address them, this thesis focuses on those key stages and studies medical image preprocessing and fusion from four aspects: MRI intensity-inhomogeneity correction, source image registration, multispectral and panchromatic medical image fusion, and saliency-preserving medical image fusion. The main contents and contributions are as follows:
    For medical image preprocessing, this thesis proposes an MRI intensity-inhomogeneity correction algorithm based on a simplified PCNN (pulse-coupled neural network) model and a medical image registration algorithm based on a cascaded PCNN model. The former exploits the synchronous pulse-firing mechanism of the PCNN to estimate the image bias field, improving the algorithm's real-time performance while preserving correction quality. The latter uses the cascaded PCNN model to extract concave (foveation) points in the target region of the image and then combines FCM clustering with coordinate-system blocking to complete the registration.
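Both preprocessing algorithms rest on the synchronous firing behaviour of a simplified PCNN. The sketch below is a generic simplified PCNN, not the thesis's actual correction or registration pipeline: the parameter values, the 4-neighbour linking (with wrap-around borders for brevity), and the choice of a first-firing-time map as output are illustrative assumptions.

```python
import numpy as np

def simplified_pcnn(img, beta=0.2, alpha=0.3, v_theta=20.0, v_l=1.0, n_iter=30):
    """Minimal simplified PCNN: each pixel is a neuron whose feedback input
    is the normalised intensity, whose linking input is the sum of the four
    neighbouring pulses, and whose dynamic threshold decays exponentially
    and resets on firing. Returns each neuron's first-firing iteration
    (-1 = never fired)."""
    s = img.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # stimulus in [0, 1]
    y = np.zeros_like(s)                              # pulse output Y
    theta = np.ones_like(s)                           # dynamic threshold
    fire_time = np.full(s.shape, -1, dtype=int)
    neigh = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                       np.roll(a, 1, 1) + np.roll(a, -1, 1))
    for t in range(n_iter):
        l = v_l * neigh(y)                    # linking from neighbouring pulses
        u = s * (1.0 + beta * l)              # internal activity U = F(1 + beta*L)
        y = (u > theta).astype(float)         # synchronous pulse emission
        theta = np.exp(-alpha) * theta + v_theta * y   # decay, reset on firing
        fire_time[(fire_time < 0) & (y > 0)] = t
    return fire_time
```

Neurons with similar stimuli and linked neighbours fire in the same iteration; it is this grouping that a bias-field estimator or a concave-point detector can build on.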
    For multispectral and panchromatic medical image fusion, this thesis proposes a fusion algorithm based on the IHS transform and PCA. To further improve the spectral characteristics of the fused image, a retina-inspired model is introduced into the original algorithm; the improved algorithm not only raises the spatial resolution of the fused image but also better preserves the spectral information of the source images, effectively avoiding spectral distortion.
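The IHS side of such a fusion can be illustrated with a minimal sketch. This is the fast additive IHS form with a simple mean-based intensity component; the PCA stage and the retina-inspired model are omitted, and the function name and the mean/std histogram matching are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Fast additive IHS fusion: inject the spatial detail of the
    panchromatic band into every multispectral band."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    i = ms.mean(axis=2)                          # intensity component of the MS image
    # match pan's mean/std to the intensity to limit spectral distortion
    pan = (pan - pan.mean()) / (pan.std() + 1e-12) * i.std() + i.mean()
    detail = pan - i                             # high-resolution spatial detail
    return ms + detail[..., None]                # add the same detail to each band
```

When the matched panchromatic band coincides with the intensity component, no detail is injected and the multispectral image passes through unchanged, which is the sense in which this scheme preserves spectral information.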
    To emphasize the transfer of important source-image information during fusion, this thesis proposes a multimodal medical image fusion algorithm based on saliency preservation. By weighting pixels according to the saliency of local regions of the source images, the algorithm transfers the important information hidden in image pixels from the source images to the fused image. To account for the differing characteristics of pixels at different locations (textures, strong edges, weak edges, corners, smooth regions, and so on), a feature weighting of the pixels within each region is further introduced; the improved algorithm outperforms the original in both visual quality and information content.
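The weighting idea can be sketched as follows. Using local variance as the saliency measure is an assumption made here for illustration (the thesis's saliency and per-pixel feature weights are not specified in this abstract); the sketch only shows how local saliency maps turn into fusion weights for two registered sources.

```python
import numpy as np

def local_variance(img, r=1):
    """Variance in a (2r+1)x(2r+1) window, via a sliding-window view of
    the reflect-padded image."""
    p = np.pad(img.astype(float), r, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(p, (2 * r + 1, 2 * r + 1))
    return win.var(axis=(-1, -2))

def saliency_weighted_fusion(a, b, r=1, eps=1e-12):
    """Fuse two registered source images: pixels lying in locally salient
    (high-variance) regions of a source receive proportionally larger weight."""
    sa, sb = local_variance(a, r), local_variance(b, r)
    wa = (sa + eps) / (sa + sb + 2 * eps)        # normalised saliency weight of a
    return wa * a + (1.0 - wa) * b
```

Replacing `local_variance` with a stronger saliency model, or modulating `wa` per pixel by edge/corner/texture features, corresponds to the improvement described above.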
    To further raise the quality of the fused image, this thesis proposes two fusion algorithms built on an initial fused image. The first starts from a weighted-average fusion and combines guided filtering with a pixel-selection strategy to obtain the final result; however, it inherits the defects of the weighted average, so the fused image has low contrast and relatively blurred textures and other details. The second obtains the initial fused image by block replacement and then applies edge reinforcement and related processing to produce the final result. Comparing the two, the second algorithm's initial fused image is superior in both contrast and detail description, and so its final fusion result is also better.
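The second pipeline (block replacement, then edge reinforcement) can be sketched as below. Variance as the per-block activity measure and unsharp masking as the edge-reinforcement step are assumptions chosen for illustration; the abstract does not specify the thesis's exact block metric or enhancement operator.

```python
import numpy as np

def block_replacement_fusion(a, b, bs=8):
    """Initial fusion by block replacement: for each bs x bs block, keep the
    source block with the larger variance (a simple activity measure)."""
    a = a.astype(float)
    b = b.astype(float)
    out = a.copy()
    h, w = a.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            pa, pb = a[i:i + bs, j:j + bs], b[i:i + bs, j:j + bs]
            if pb.var() > pa.var():              # b's block carries more activity
                out[i:i + bs, j:j + bs] = pb
    return out

def unsharp_edges(img, amount=0.5):
    """Edge reinforcement by unsharp masking with a 3x3 box blur."""
    p = np.pad(img.astype(float), 1, mode='reflect')
    blur = sum(np.roll(np.roll(p, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1))[1:-1, 1:-1] / 9.0
    return img + amount * (img - blur)           # boost deviations from the blur
```

Because each output block is copied verbatim from one source, the initial fused image keeps full source contrast, which is why this route outperforms a weighted average before any refinement is applied.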
