Research on Vision Inspection Robots for Automated Filling Production Lines
Abstract
The vision inspection robot for automated filling production lines is an intelligent, opto-mechatronic inspection device built on machine vision technology, and it is one of the important pieces of automation equipment in modern manufacturing. It can replace manual inspection of empty bottles and of the liquid inside filled bottles on the line. Research and development of intelligent inspection robots with independent intellectual property rights therefore answers the needs of a rapidly growing national economy and a continually expanding market.
The thesis first introduces the background and significance of the research, surveys machine vision technology and its applications in industrial intelligent inspection, and reviews the state of the art of vision inspection robots for filling lines. According to the requirements for inspecting empty bottles before filling and the liquid inside bottles after filling, an empty-bottle vision inspection robot and a filled-bottle vision inspection robot are then designed, and their basic structures are described in detail. Several design schemes for the vision inspection and control system are compared, and on this basis a DSP-based vision inspection and control system is designed to meet the demand for high-speed inspection. Because the image acquisition system is one of the cores of the inspection robot, its key technologies are studied in depth and an acquisition system that provides clear images is designed. Defective products must be removed from the line after inspection, so a new flexible rejector is developed that can reliably separate target products on a high-speed line. A modular intelligent inspection software platform is also developed to carry out the inspection and control tasks stably and reliably. On the basis of this work, experimental prototypes of the vision inspection robots are built and the design schemes are verified.
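The software platform itself is not detailed in the abstract; the sketch below only illustrates, under assumed interfaces, how a modular capture, inspect, and reject loop might be organized in Python. `run_inspection_loop`, `inspect`, and `reject` are hypothetical placeholders rather than components of the thesis's DSP-based platform.

```python
# A minimal sketch of a modular capture -> inspect -> reject loop.
# The callback names and types are hypothetical placeholders, not the
# modules of the thesis's DSP-based platform.
from typing import Callable, Iterable, Tuple

import numpy as np

Frame = np.ndarray
InspectFn = Callable[[Frame], Tuple[bool, str]]   # returns (passed, reason)
RejectFn = Callable[[int], None]                  # bottle index on the line


def run_inspection_loop(frames: Iterable[Frame],
                        inspect: InspectFn,
                        reject: RejectFn) -> None:
    """Pull frames from the camera, classify each bottle, reject failures."""
    for index, frame in enumerate(frames):
        passed, _reason = inspect(frame)
        if not passed:
            # On the real machine this would fire the flexible rejector
            # after a fixed conveyor delay; here it is just a callback.
            reject(index)
```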
The thesis concentrates on the intelligent inspection methods used on the filling line and proposes methods, built mainly on support vector machines (SVMs), for inspecting the bottle wall, the bottle bottom, the bottle finish, and the liquid inside the bottle.
Since the quality of the bottle finish strongly affects the quality of the filled product, the finish must be inspected before filling. Because the region to be processed must be located in real time during online inspection, the thesis first discusses localization algorithms for the finish and, after comparison, adopts a fast localization algorithm based on an improved Hough transform. To meet the speed requirement of the line, a finish inspection method based on empirical rules is proposed: the mean gray-level curve of the finish, obtained by a circular scan, is judged against the rules, so the finish quality can be determined quickly. The method is simple, but it depends on expert experience and its accuracy is not ideal. Support vector machines, which generalize well, are therefore used to judge finish quality; experiments show, however, that SVM performance depends strongly on the chosen kernel function and its parameters. A finish inspection algorithm based on a support vector machine neural network is then proposed; it combines the advantages of SVMs and neural networks and is better able to reach a global optimum. Experiments show that this method achieves the highest accuracy for finish inspection.
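To make the localization and rule-based steps concrete, the following minimal sketch uses OpenCV's Hough circle transform to find the finish and then samples an annulus to build the mean gray-level curve. The blur size, Hough parameters, annulus width, and the brightness-drop rule are illustrative assumptions; the improved Hough transform and expert rules of the thesis are not reproduced here.

```python
# Sketch: locate the bottle finish with a Hough circle transform, then
# build a mean gray-level profile by circular scanning. Parameter values
# (radii, Hough thresholds, the 0.6 drop rule) are illustrative only.
import cv2
import numpy as np


def locate_finish(gray):
    """Return (cx, cy, r) of the strongest circle in an 8-bit finish image."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=200, param1=100, param2=40,
                               minRadius=60, maxRadius=120)
    if circles is None:
        raise ValueError("no finish circle found")
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return cx, cy, r


def circular_mean_profile(gray, cx, cy, r, width=8, samples=360):
    """Mean gray level of an annulus around (cx, cy), one value per angle."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    radii = np.arange(r - width, r + width)
    xs = np.clip((cx + np.outer(radii, np.cos(angles))).astype(int),
                 0, gray.shape[1] - 1)
    ys = np.clip((cy + np.outer(radii, np.sin(angles))).astype(int),
                 0, gray.shape[0] - 1)
    return gray[ys, xs].mean(axis=0)


def rule_based_check(profile, drop_ratio=0.6):
    """Toy empirical rule: a chip or crack shows up as a deep dark dip."""
    return bool(profile.min() >= drop_ratio * np.median(profile))
```

In the SVM-based variants described above, a classifier would take the place of `rule_based_check`, using the profile (or features derived from it) as its input.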
The bottle wall and bottom also have to be inspected before filling. For wall images, a center-probability algorithm is proposed to locate the processing region quickly, while for bottom images the improved Hough transform is used for localization. After analyzing the defect characteristics, a fast regional inspection algorithm is proposed: the processing region is subdivided into smaller areas to reduce the influence of noise, and combined expert rules are applied to detect defects in each small area. Because this may split a defect across areas, its accuracy is limited. To overcome this weakness, the thesis proposes a wall and bottom inspection method based on an ensemble of multi-kernel support vector machines. Complete candidate defect regions are first segmented with an improved watershed transform, features are extracted from these regions, and the ensemble then makes the classification decision. In the proposed ensemble, an ant colony algorithm selectively combines SVMs with different kernel functions, which guarantees the classification performance. Comparative experiments show that this method achieves higher inspection accuracy.
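As a rough illustration of the ensemble step, the sketch below trains scikit-learn `SVC` classifiers with several kernels on features extracted from candidate defect regions, keeps the members that score best on a validation split, and combines them by majority vote. The greedy selection is only a stand-in for the ant colony optimization used in the thesis, and the kernel configurations are illustrative assumptions.

```python
# Sketch of a multi-kernel SVM ensemble with selective combination.
# A greedy validation-accuracy filter stands in for the thesis's ant
# colony optimization; kernels and hyper-parameters are illustrative.
import numpy as np
from sklearn.svm import SVC


def train_candidates(X_train, y_train):
    """Train one SVM per candidate kernel configuration."""
    configs = [
        {"kernel": "linear", "C": 1.0},
        {"kernel": "rbf", "C": 10.0, "gamma": "scale"},
        {"kernel": "poly", "C": 1.0, "degree": 3},
        {"kernel": "sigmoid", "C": 1.0, "gamma": "scale"},
    ]
    return [SVC(**cfg).fit(X_train, y_train) for cfg in configs]


def select_members(models, X_val, y_val, keep=3):
    """Keep the `keep` members with the best validation accuracy."""
    scores = [m.score(X_val, y_val) for m in models]
    order = np.argsort(scores)[::-1][:keep]
    return [models[i] for i in order]


def ensemble_predict(members, X):
    """Majority vote over the selected members (binary labels 0/1)."""
    votes = np.stack([m.predict(X) for m in members])   # shape (k, n)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Here the feature vectors would describe the regions produced by the watershed segmentation, with defect/no-defect labels as the targets.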
After filling, impurities may remain in the liquid and endanger consumers, so the liquid quality must be inspected. To distinguish impurities in the liquid from marks on the bottle, motion analysis is applied to image sequences, in which impurities appear as small bright regions. A binary-image difference algorithm is therefore proposed: the images are first binarized with a clustering-based algorithm, and consecutive binary images are then differenced to segment the moving regions. A tracking algorithm based on association matching is proposed to follow the segmented regions: a Kalman filter predicts the likely position of a moving object in the next frame, a tracking window is set up around that position, the moving regions inside the window are matched against the object by association, the best match is selected, and a tracking chain is built. Features describing the regions and their motion are then extracted and classified with the fuzzy support vector machine proposed in the thesis; by incorporating fuzzy theory, it handles complex problems and noise better and inspects the liquid more accurately. A liquid-level detection algorithm is also proposed for checking the filled volume: edge detection combined with edge linking finds the liquid surface quickly and accurately, and the level height is then used to check the volume. Experiments show that all of these algorithms are effective.
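The sketch below illustrates the sequence-processing steps under simplifying assumptions: Otsu thresholding stands in for the clustering-based binarization, consecutive binary frames are differenced to isolate moving blobs, and an OpenCV `cv2.KalmanFilter` with a constant-velocity model predicts where a tracked impurity should reappear so that the nearest blob inside a tracking window can be associated with it. The noise covariances, area threshold, and window size are illustrative, and the thesis's association matching and fuzzy SVM classification are not reproduced here.

```python
# Sketch: binary-image differencing plus Kalman-filter prediction for
# tracking candidate impurities in 8-bit grayscale frames.
import cv2
import numpy as np


def moving_regions(prev_gray, curr_gray):
    """Binarize two consecutive frames and return centroids of moving blobs."""
    _, prev_bin = cv2.threshold(prev_gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, curr_bin = cv2.threshold(curr_gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    diff = cv2.absdiff(curr_bin, prev_bin)          # changed pixels only
    n, _, stats, centroids = cv2.connectedComponentsWithStats(diff)
    # Skip label 0 (background); keep blobs above a small area threshold.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 4]


def make_tracker(x0, y0):
    """Constant-velocity Kalman filter over the state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[x0], [y0], [0.0], [0.0]], np.float32)
    return kf


def track_step(kf, candidates, window=20.0):
    """Predict the next position and associate the nearest candidate blob."""
    pred = kf.predict()[:2].ravel()
    if candidates:
        nearest = min(candidates,
                      key=lambda c: np.hypot(c[0] - pred[0], c[1] - pred[1]))
        if np.hypot(nearest[0] - pred[0], nearest[1] - pred[1]) <= window:
            kf.correct(np.array([[nearest[0]], [nearest[1]]], np.float32))
            return nearest
    return tuple(pred)   # no match inside the window: keep the prediction
```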