Digital Characterization of Garment Defect Detection
Abstract
China is the world's largest garment exporter, so quality control and inspection are essential in garment production, and defect detection is the most important part of that work. Manual garment defect inspection is not only labor-intensive and psychologically taxing but also slow and error-prone, so automatic defect recognition based on computer image processing and analysis together with modern mathematics has become a topic of wide interest. This thesis studies solutions for each stage of a fast, accurate garment defect detection pipeline and attempts online detection and analysis of several kinds of garment defects. Since most online garment defect detection is still exploratory and far from practical use, this work has both academic and practical value for automating defect detection in garment production. The main contributions are as follows:
     (1) Simulation of a dual-image fusion method
     This thesis presents an image fusion scheme based on wavelet multi-scale decomposition and proposes a hybrid multi-resolution analysis of images: image transforms from different multi-scale geometric analysis methods with complementary properties are cascaded to obtain a hybrid multi-resolution decomposition. A hybrid multi-resolution image fusion framework was constructed in which the decomposition coefficients are fused in the hybrid multi-resolution domain and the fused image is then recovered by the inverse transform. Exploiting the complementary properties of the wavelet and Curvelet transforms, a hybrid wavelet-Curvelet transform was built. Because traditional multi-focus fusion often loses some of the sharp features of the source images, this thesis imitates the manual cut-and-paste approach and builds a region-level multi-focus fusion framework combining segmentation and merging: the images are segmented directly according to a sharpness measure, and region-level multi-focus fusion is then performed. The region-level algorithm was designed using image spatial frequency and region-sharpness criteria on morphological wavelet transform coefficients; the spatial frequency of a pixel neighborhood serves as the sharpness measure, and morphological operators directly yield the partition into sharp and blurred regions. The method was verified on two multi-focus images, taken from different angles, containing two printing defects and fabric defects.
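     The region-level selection rule described above can be sketched in a few lines. This is a minimal pure-Python illustration under stated assumptions: images are nested lists of gray values, the block size `bs` is a hypothetical parameter, and the thesis's morphological clean-up and hybrid wavelet-Curvelet stage are omitted.

```python
import math

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): mean row- and column-difference energy,
    used as the sharpness measure of a neighborhood."""
    h, w = len(block), len(block[0])
    rf = sum((block[i][j] - block[i][j - 1]) ** 2
             for i in range(h) for j in range(1, w))
    cf = sum((block[i][j] - block[i - 1][j]) ** 2
             for i in range(1, h) for j in range(w))
    return math.sqrt((rf + cf) / (h * w))

def fuse_blockwise(img_a, img_b, bs=8):
    """Region-level multi-focus fusion: each bs x bs block of the output
    is copied from whichever source image is sharper there."""
    h = len(img_a)
    fused = [row[:] for row in img_a]           # start from source A
    for bi in range(0, h, bs):
        for bj in range(0, len(img_a[0]), bs):
            blk_a = [row[bj:bj + bs] for row in img_a[bi:bi + bs]]
            blk_b = [row[bj:bj + bs] for row in img_b[bi:bi + bs]]
            if spatial_frequency(blk_b) > spatial_frequency(blk_a):
                for i in range(bi, min(bi + bs, h)):
                    fused[i][bj:bj + bs] = img_b[i][bj:bj + bs]
    return fused
```

     In the full framework, the block-wise decision map would additionally be smoothed with morphological operators so that the sharp and blurred regions form contiguous areas.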
     (2) Simulation of garment defect image preprocessing
     For images with simple textures (texture gray value at most 70% of the defect gray value), preprocessing typically includes denoising and image enhancement. To highlight the structural features of the defect region, the image is binarized so that the defect region is separated from the background. Taking a seam with simple texture as an example, the preprocessing of simple-texture defect images is described in detail; the results show that the stitch line is extracted very clearly and the defect is preserved essentially intact on the stitch line. For images with complex textures (texture gray value between 70% and 85% of the defect gray value, inclusive of 85%), preprocessing likewise includes denoising and enhancement; for this class of defect images, edge detection is the most important processing step. Taking a seam with complex texture as an example, the preprocessing of complex-texture defect images is described in detail; the results show that although the extracted stitch line is less clear than in the original image, the main defect features are preserved intact. For images with highly complex textures (texture gray value above 85% of the defect gray value), preprocessing again includes denoising and enhancement; to make the defect stand out, local enhancement of the defect region is required, based on how warp-direction and weft-direction defects manifest in the gray-level image. Taking a fabric-wrinkle defect image as an example, the preprocessing of highly-complex-texture defect images is described; the results show that although the extracted defect features are less clear than in the original image, the main defect features are preserved intact.
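     The binarization step for simple-texture images can be illustrated with Otsu's histogram-based threshold. The thesis does not name a specific thresholding rule, so Otsu's method is an assumption here, chosen because it picks the gray level separating defect from background automatically; the sketch takes an image as nested lists of 0-255 integers.

```python
def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0
    for t in range(256):
        w_bg += hist[t]                 # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray):
    """Separate the defect region from the background: 1 above threshold."""
    t = otsu_threshold(gray)
    return [[1 if v > t else 0 for v in row] for row in gray]
```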
     (3) Simulation of garment defect image feature extraction
     To achieve the best classification ability with as few features as possible, the pairwise correlations among eight texture features were examined: the smaller the correlation coefficient between two features, the greater the difference in their information content and the more independent the two features are. For images, less similarity means less information redundancy, which benefits defect classification and reduces the computational load. The six features with the smallest sums of correlation coefficients were therefore selected to describe a region and serve as the final descriptors of the different texture types: mean, standard deviation, smoothness, third moment, uniformity, and entropy. These six texture features were computed and analyzed for preprocessed simple-texture seam defect images, complex-texture defect images, and highly-complex-texture defect images. Comparison of the extracted values shows that for simple-texture seam defects (e.g., doubled-stitch defects), for complex-texture defects (e.g., doubled-stitch defects), and for highly-complex-texture defects (e.g., fabric-wrinkle defects), the mean, standard deviation, and third moment differ markedly from those of a standard stitch line. These parameters can therefore serve as key data for recognizing and classifying garment defects; feeding them into a neural network for training yields a recognizer and classifier for these defect types.
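     The six selected descriptors are the standard first-order statistics of the gray-level histogram. A sketch of their computation follows, assuming 8-bit gray levels and the common normalization of the variance and third moment by (L-1)^2; the exact normalizations used in the thesis are not stated, so these are assumptions.

```python
import math

def texture_features(gray, levels=256):
    """Six first-order texture descriptors of the gray-level histogram:
    mean, standard deviation, smoothness, third moment, uniformity, entropy."""
    hist = [0] * levels
    n = 0
    for row in gray:
        for v in row:
            hist[v] += 1
            n += 1
    p = [c / n for c in hist]                       # normalized histogram
    mean = sum(i * p[i] for i in range(levels))
    var = sum((i - mean) ** 2 * p[i] for i in range(levels))
    std = math.sqrt(var)
    norm = (levels - 1) ** 2                        # scale moments to [0, 1]
    smoothness = 1.0 - 1.0 / (1.0 + var / norm)     # 0 for a constant region
    third_moment = sum((i - mean) ** 3 * p[i] for i in range(levels)) / norm
    uniformity = sum(q * q for q in p)              # 1 for a constant region
    entropy = -sum(q * math.log2(q) for q in p if q > 0)
    return mean, std, smoothness, third_moment, uniformity, entropy
```

     A vector of these six values per region is what would be fed to the neural-network classifiers of the next section.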
     (4) Simulation of garment defect image pattern classification
     This study builds garment defect image classifiers from BP and radial basis function (RBF) neural networks, focusing on the BP network and its improvements. The samples were 2000 defect images fused by the dual (multi-) image fusion algorithm combining morphological operators and spatial frequency: simple-texture samples (normal stitch, broken stitch, doubled stitch, skipped stitch, plain-weave fabric, plain-weave defect, watermark defect, dyeing defect, 250 each), complex-texture samples (standard stitch, doubled stitch, knitted fabric, knitted-fabric defect, woven fabric, woven-fabric defect, machine-woven fabric, machine-woven defect, 250 each), and highly-complex-texture samples (standard stitch on fabric, fabric-wrinkle defect, warp defect, floating-warp defect, slack-warp defect, weft defect, weft-yarn defect, hole defect, 250 each). On these samples the classical BP algorithm could not classify the defect images. The BP network was therefore improved by introducing the gradient-descent training function with momentum and adaptive learning rate (lr) and the Levenberg-Marquardt optimization algorithm. The momentum term keeps the network from becoming trapped in local minima, and the L-M algorithm needs fewer iterations and converges faster and more accurately than BP and its other variants ("trainlm" required only 19 training steps). Experimentally, the training function "traingdx" achieved an accuracy of about 75% and "trainlm" about 92%, a satisfactory result. On the same 2000 fused simple-, complex-, and highly-complex-texture samples, the RBF network could also classify garment defects, with an accuracy of about 85%, likewise satisfactory. This shows that not only BP networks but other networks as well can recognize garment defect image patterns; the remaining question is which is optimal.
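     The momentum modification can be shown in isolation on a one-parameter quadratic error surface. This sketch illustrates only the update rule that momentum adds to plain gradient descent (v ← μv − η∇E, then w ← w + v), not the full BP network; the learning rate, momentum constant, and toy loss are illustrative, not values from the thesis.

```python
def train_momentum(grad, w0, lr=0.1, mu=0.9, steps=300):
    """Gradient descent with a momentum term: the velocity v accumulates
    past gradients, helping the search coast through shallow local minima."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)   # momentum update of the velocity
        w = w + v                   # weight update
    return w

# Minimize E(w) = (w - 3)^2, whose gradient is 2(w - 3)
w_star = train_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

     The Levenberg-Marquardt variant the thesis favors is different in kind: it blends Gauss-Newton and gradient steps using second-order information, which is why it converges in far fewer iterations.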
     (5) Design of a dual-image acquisition system
     The basic calculations for the degrees of freedom and baseline selection of the dual-image acquisition system were carried out, and the key issues in the mechanical drive control of the binocular vision platform are described in detail. The hardware structure of the system's main device was designed: two cameras mimic the human eyes, and a cross mechanism behind them provides up-down and left-right rotation. The mechanism supports horizontal panning, tilting, and camera steering, and all three can be adjusted quickly, dynamically, and interactively. Four direct-drive DC motors provide control of the four degrees of freedom. An 89C51 single-chip microcomputer serves as the main CPU, and the software flowchart is given. A PID control algorithm gives the microcontroller feedback control of the target speed, an approach that is complete and mature in both accuracy and implementability. Achieving industrial garment defect detection therefore does not require very complicated steps or harsh conditions.
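     The speed loop run on the microcontroller can be sketched as a discrete PID controller closed around a first-order motor model. The gains, sampling step, and motor time constant below are illustrative assumptions, not the thesis's values; on the 89C51 the same arithmetic would run in fixed point.

```python
class PID:
    """Textbook positional PID: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(pid, target, steps=300, dt=0.01, tau=0.1):
    """Drive a first-order plant (speed relaxes toward the control signal
    with time constant tau) and return the final speed."""
    speed = 0.0
    for _ in range(steps):
        u = pid.step(target, speed, dt)
        speed += dt / tau * (u - speed)
    return speed
```

     The integral term removes the steady-state error, which is what lets the measured motor speed settle exactly on the commanded set-point.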
     The main motivation of this thesis is to achieve objective, accurate automatic defect detection in garment production, making high-quality automated quality control and inspection possible. The solutions proposed here for each stage of a visual garment defect detection system open a new line of thought for online measurement and detection of garment defects and offer a useful reference for the future development of such systems.
