Research on the Application of Neural Networks to Image Compression
Abstract
Digital image processing techniques are used ever more widely in fields such as multimedia, the Internet, television, and fax. Image compression is one of the key techniques in digital image processing. Traditional compression methods include predictive coding, transform coding, and vector quantization (VQ). Over the past twenty years, modern techniques based on neural networks, fractal theory, and the wavelet transform have been applied successfully to image compression.
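As background for the VQ approach discussed throughout, the following is a minimal sketch of block-based vector quantization in the classical LBG/k-means style (not the dissertation's FCLECA algorithm): an image is split into 4x4 blocks, a small codebook is learned by iterative nearest-neighbor assignment and centroid update, and each block is encoded as the index of its nearest codeword. All names and sizes here are illustrative.

```python
import numpy as np

def make_blocks(img, bs=4):
    # split a grayscale image into non-overlapping bs x bs blocks,
    # flattened into row vectors of length bs*bs
    h, w = img.shape
    img = img[:h - h % bs, :w - w % bs]
    blocks = img.reshape(img.shape[0] // bs, bs, img.shape[1] // bs, bs)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, bs * bs)

def train_codebook(vectors, k=8, iters=20, seed=0):
    # LBG/k-means-style codebook design: alternate nearest-codeword
    # assignment and centroid update
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def encode(vectors, codebook):
    # each block is represented by the index of its nearest codeword
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32)).astype(float)  # toy 32x32 image
vecs = make_blocks(img)          # 64 blocks of dimension 16
cb = train_codebook(vecs, k=8)   # codebook of 8 codewords
idx = encode(vecs, cb)           # compressed image: one index per block
recon = cb[idx]                  # decoding is a table lookup
```

The compression comes from storing one small index per block instead of the block itself; a neural-network quantizer such as the CPN replaces the centroid-update step with competitive learning.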
    This dissertation focuses on the application of neural networks to still-image compression. It has three main parts. First, the Counterpropagation Network (CPN) is applied to VQ image compression: the standard CPN vector-quantizer model is improved, and a new codebook-design algorithm, the Fast Competitive Learning and Error Correction Algorithm (FCLECA), is proposed together with a fast vector-quantizer model based on the improved CPN. Second, building on the optimization capability of the continuous Hopfield neural network (CHNN), a competitive CHNN model for codebook design is presented; the corresponding energy function and neuron dynamic equations are designed, and a codebook-design algorithm based on the model is given. Finally, the dissertation discusses the application of neural networks to Karhunen-Loève (KL) transform coding: a neural network model, SEVNN, is proposed for computing all eigenvalues and eigenvectors of the (symmetric) covariance matrix of image vectors, its learning rule is designed, and a SEVNN-based KL transform algorithm is used to implement image compression.
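The KL transform step in part three can be illustrated with a direct eigendecomposition, which is the computation the SEVNN model approximates with a neural network. This hypothetical sketch forms the covariance matrix of block vectors, keeps the eigenvectors with the largest eigenvalues, and projects the blocks onto that reduced basis; dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.normal(size=(500, 16))        # 500 block vectors (e.g. 4x4 blocks)
mean = blocks.mean(0)
centered = blocks - mean
cov = centered.T @ centered / len(blocks)  # 16x16 symmetric covariance matrix

# direct eigendecomposition of the symmetric matrix; np.linalg.eigh
# returns eigenvalues in ascending order with orthonormal eigenvectors.
# The SEVNN model solves this same eigenproblem via a learning rule.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order[:4]]              # keep the 4 principal eigenvectors

coeffs = centered @ basis                  # compressed representation (500 x 4)
recon = coeffs @ basis.T + mean            # approximate reconstruction
```

Compression follows from storing only the few largest-eigenvalue coefficients per block, which, for the KL transform, minimizes mean-squared reconstruction error among linear transforms.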
    Simulation results show that the three neural-network-based compression algorithms above have clear advantages: they train faster, produce higher-quality compressed images, and are more robust.
