Abstract
In this paper, a novel fusion framework is proposed for night-vision applications such as pedestrian recognition, vehicle navigation, and surveillance. The underlying concept is to combine low-light visible and infrared imagery into a single output to enhance visual perception. The proposed framework is computationally simple since it is realized entirely in the spatial domain. The core idea is to obtain an initial fused image by averaging all the source images. The initial fused image is then enhanced by selecting the most salient features, guided by the root mean square error (RMSE) and fractal dimension of the visible and infrared images, to obtain the final fused image. Extensive experiments on different scene imagery demonstrate that the proposed method is consistently superior to conventional image fusion methods in both visual and quantitative evaluations.
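The pipeline the abstract describes (average the sources, then refine blocks using RMSE and fractal dimension as saliency cues) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the differential box-counting estimator, the block size, and the rule of weighting each source's fractal dimension by its RMSE against the averaged image are all assumptions, since the abstract does not specify how the two measures are combined.

```python
import numpy as np

def box_counting_dimension(patch):
    """Differential box-counting estimate of fractal dimension for a
    square grayscale patch with intensities in [0, 255]."""
    patch = patch.astype(float)
    m = patch.shape[0]            # assumes a square patch
    sizes, counts = [], []
    s = 2
    while s <= m // 2:
        box_h = 255.0 * s / m     # box height along the intensity axis
        n = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                block = patch[i:i + s, j:j + s]
                n += int(np.ceil((block.max() - block.min() + 1) / box_h))
        sizes.append(s)
        counts.append(n)
        s *= 2
    # slope of log N(s) versus log(1/s) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def fuse(vis, ir, block=16):
    """Average the sources, then replace each block with the source whose
    RMSE-weighted fractal dimension is larger (a hypothetical saliency rule)."""
    vis, ir = vis.astype(float), ir.astype(float)
    init = (vis + ir) / 2.0       # initial fused image: plain average
    out = init.copy()
    for i in range(0, vis.shape[0] - block + 1, block):
        for j in range(0, vis.shape[1] - block + 1, block):
            v = vis[i:i + block, j:j + block]
            t = ir[i:i + block, j:j + block]
            f = init[i:i + block, j:j + block]
            rv = np.sqrt(np.mean((f - v) ** 2))  # detail lost from visible
            rt = np.sqrt(np.mean((f - t) ** 2))  # detail lost from infrared
            sv = rv * box_counting_dimension(v)
            st = rt * box_counting_dimension(t)
            out[i:i + block, j:j + block] = v if sv >= st else t
    return out
```

Since the whole computation is elementwise arithmetic plus a per-block box count, it stays in the spatial domain with no transform step, which is consistent with the abstract's claim of low computational cost.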