Machine Vision Calibration, Object Detection and Tracking Methods, and Their Applications
Abstract
With the development of modern society, demands on product quality, production efficiency, working conditions, and the environment continue to rise. From large production lines, automatic window cleaning of high-rise buildings, and cleaning work in harsh environments to the manufacture of defense weaponry and equipment, intelligence, automation, and robotization are the inevitable trend across civil and military applications, and machine vision is the core technology driving this development. Research on machine vision theory, methods, and key technologies therefore carries great theoretical, social, and economic significance. To this end, this dissertation investigates machine vision system calibration, visual object detection and tracking methods, and their application to a large condenser cleaning robot. The main work is as follows:
     The dissertation first analyzes the background and significance of the topic, reviews the state of machine vision theory and its key technologies, surveys application research, and identifies the open problems and the main line of investigation. It then briefly presents the theoretical foundations: projective geometry, the imaging model and viewpoints of a machine vision system, and visual geometry.
     A self-calibration algorithm based on an extended camera imaging model is proposed. With the extended model, perspective projections along different directions are analyzed simultaneously in a single image, homographies for the different directions are established, and constraint equations on the intrinsic parameters are derived, so that calibration is completed from one image. Compared with previous homography-based algorithms that require three images, this avoids the loss of accuracy caused by mismatched image points across multiple views.
     Beyond calibrating the imaging model, a hand-eye calibration method based on a single target point in the scene is proposed. The manipulator platform is precisely controlled to perform five or more translations and two or more rotations; the image of the single scene point is extracted, the camera motion is inferred from disparity and depth, and a system of constraint equations relating the manipulator and camera coordinate frames is established. The five elements of the camera intrinsic matrix K and the hand-eye relation R, t are solved linearly, and the depth of the scene point is obtained as a by-product. Since the translations need not be orthogonal, the manipulator motions are easy to control and the method is simple to implement.
     A linear solution to the P5P problem for an uncalibrated camera is proposed based on vector differences. Vector differences are formed from the five control points and, using the orthogonality of the rotation R, constraint equations on the camera pose and camera matrix are built step by step, yielding an analytic solution to uncalibrated P5P from linear theory. A linear motion-analysis algorithm based on corresponding parallel line segments is also proposed. Within the structure-from-motion (SFM) framework, a line segment is represented by two elements, a point and a line, and spatial line segments are recovered from image segments using parallelism. Following the screw theorem of kinematics, linear constraint equations on the motion parameters are established from the two elements of the spatial segments; the motion parameters are solved linearly with quaternions and then refined with a PSO nonlinear optimization. The pose and motion constraint equations are linear with analytic solutions, making them convenient to solve.
     Detection and tracking of visual image information based on mean-shift (MS) iteration are discussed; detection and tracking complement each other. To improve the noise robustness of the color features representing the target model and the effectiveness of the matching iteration, detection via MS clustering is proposed, with the target model expressed by clustered mode points. A hierarchical MS matching-search idea is then introduced and a hierarchical MS matching-iteration tracking algorithm is given: the clustered mode points and cluster blocks of the reference and candidate target models are matched first, then the pixels within blocks, estimating layer by layer the position of the target centroid mode point in the tracked frames. Experiments show that, compared with the traditional MS tracker, the hierarchical MS deterministic gradient iteration achieves better tracking performance.
     Single-target tracking is comparatively simple; multi-target tracking, owing to uncertainties such as the number of targets and their interacting motions, requires state estimation within a probabilistic inference framework. A hierarchical MS multi-target visual tracking algorithm based on RJMCMC is proposed: the multi-target tracking problem is modeled as maximum-likelihood estimation under Bayesian inference, four reversible move types are designed to construct the Markov chain, and an effective prior proposal distribution based on an association match matrix is given, which raises the sampling confidence of targets and thus the iteration efficiency of the algorithm. Following the hierarchical matching idea, a two-level likelihood measure at the pixel and cluster-block levels is given. Experiments show that hierarchical tracking is robust for both single- and multi-target tracking.
     For the condenser cleaning robot application, the robot's vision system and its key technologies are studied so that the robot can move autonomously and clean a large condenser online. To this end:
     A vision system is constructed, consisting of a subsystem that guides robot localization and navigation and a subsystem that guides the manipulator spray gun to the condenser tube openings. Four video channels in total are transmitted through an image acquisition card to the main control cabinet, where the vision algorithms produce the decision information for controlling the robot.
     A visual 3D SLAM localization and navigation algorithm for the mobile robot is proposed, enabling the robot to move autonomously to the current local area to be cleaned and carry out the cleaning task. A 3D camera provides 2D and 3D information as two attributes of each observation; coupled ICP and BA optimize the data matching and solve for the robot motion at any time. Solving SLAM from vision theory realizes 6-DOF localization of the robot and 3D map building. Vision theory supplies the key theoretical basis for analyzing the 6-DOF 3D SLAM process; the visual SLAM solution is more concise than traditional kinematic KF or PF filtering, needing no prediction step, with localization and mapping handled as a single process. Moreover, compared with 3D laser scanning, using the 3D camera's 2D data to guide the 3D data matching narrows the 3D search range and improves the computational efficiency of 3D SLAM.
     A vision-based tube-opening positioning algorithm is designed to help the robot locate the condenser tube openings. According to the size of the work area and the camera's effective field of view, the working face is partitioned offline into blocks for coarse positioning, and the robot is moved block by block. Once the robot reaches a given coarse position, the manipulator's vision system detects and segments the condenser tube openings, computes the center image point of each opening, and then, from vision theory, computes its spatial position, achieving precise localization of every tube opening within the current local area.
With the development of modern science and technology, demands on product quality, production efficiency, working conditions, and the environment are rising steadily. Intelligence, robotization, and automation are the inevitable trend for civil and military applications, from large production lines, automatic cleaning of building windows, and cleaning work in harsh environments to defense weaponry and equipment manufacturing. Research on machine vision theory, methods, and key technologies therefore has great theoretical and economic significance. This dissertation investigates vision calibration and visual object detection and tracking, and their application to a large condenser cleaning robot. Its main results and contributions are as follows.
     The background of the subject is analyzed, the principal theories and techniques in the vision field are reviewed, machine vision applications are presented, and the difficult problems to be addressed are discussed. As groundwork for the subsequent chapters, projective geometry, the imaging model and perspective, and fundamental visual geometry are introduced briefly.
     A self-calibration approach based on an extended imaging model is proposed. The extended imaging model is described; three homographies between the space plane and the image plane are obtained simultaneously from one image under perspective projections along different directions, and from them constraint equations on the intrinsic parameters are established, so a single image completes the calibration. Compared with traditional methods, calibration accuracy is improved because no image-point matching across multiple views is required.
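The way plane-to-image homographies constrain the intrinsics can be illustrated with the classical formulation (as in Zhang's method): each homography H = [h1 h2 h3] gives h1ᵀωh2 = 0 and h1ᵀωh1 = h2ᵀωh2 on the image of the absolute conic ω = K⁻ᵀK⁻¹, and stacking these linear equations yields ω and then K. A minimal Python sketch on synthetic data (illustrative only, not the dissertation's extended-model algorithm):

```python
import numpy as np

def rot(axis, theta):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    S = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * S + (1 - np.cos(theta)) * (S @ S)

def _v(H, i, j):
    # Row vector v such that h_i^T * omega * h_j = v . b, where
    # b = (B11, B12, B22, B13, B23, B33) parameterizes the symmetric omega.
    a, c = H[:, i], H[:, j]
    return np.array([a[0]*c[0],
                     a[0]*c[1] + a[1]*c[0],
                     a[1]*c[1],
                     a[2]*c[0] + a[0]*c[2],
                     a[2]*c[1] + a[1]*c[2],
                     a[2]*c[2]])

def intrinsics_from_homographies(Hs):
    """Recover K from several plane-to-image homographies (Zhang-style)."""
    rows = []
    for H in Hs:
        rows.append(_v(H, 0, 1))                 # h1^T w h2 = 0
        rows.append(_v(H, 0, 0) - _v(H, 1, 1))   # h1^T w h1 = h2^T w h2
    _, _, Vt = np.linalg.svd(np.array(rows))
    b = Vt[-1]                                   # null vector = omega up to scale
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:               # omega is positive definite up to sign
        B = -B
    L = np.linalg.cholesky(B)     # B = K^-T K^-1, so L = K^-T up to scale
    K = np.linalg.inv(L.T)
    return K / K[2, 2]

# Synthetic check: four views of a plane with known intrinsics.
K_true = np.array([[800., 0., 320.],
                   [0., 780., 240.],
                   [0.,   0.,   1.]])
Hs = []
for axis, th in [((1, 0.2, 0.1), 0.4), ((0.1, 1, 0.3), 0.6),
                 ((0.3, 0.2, 1), 0.5), ((1, 1, 0.2), 0.7)]:
    R, t = rot(axis, th), np.array([0.5, -0.3, 2.0])
    Hs.append(K_true @ np.column_stack([R[:, 0], R[:, 1], t]))
K_est = intrinsics_from_homographies(Hs)
```

With noise-free homographies the null space is one-dimensional and K is recovered exactly up to numerical precision.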
     Besides calibrating the camera imaging model, a self-calibration approach to the hand-eye relation of a manipulator is proposed based on a single point in the scene. The manipulator motions are accurately controlled and recorded, and the camera observes one scene point during five (or more) pure translations and two (or more) pure rotations. The camera motions are estimated from the disparity and depth of the point, and constraint equations are set up between the manipulator and camera coordinate frames. The five elements of the camera intrinsic matrix and the hand-eye relation are determined linearly, and the depth of the scene point is also recovered. Because only a single scene point is used and the translations need not be orthogonal, no matching is required, the manipulator motions are easy to control, and the algorithm is simple to implement.
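How known pure translations tie the disparity of one point to the intrinsics and its depth can be illustrated in a simplified setting. Assuming, unlike the full method (which also recovers the hand-eye rotation), that the translations t are already expressed in the camera frame, the projection equations give the linear relation Z(x′ − x) + K t = t_z x′ for the first two pixel coordinates, so the six unknowns (f_x, s, u_0, f_y, v_0, Z) follow from five translations by least squares. A hypothetical sketch:

```python
import numpy as np

def project(K, X):
    """Pixel coordinates of camera-frame point X under intrinsics K."""
    x = K @ X
    return x[:2] / x[2]

def calibrate_from_translations(obs, translations):
    """Solve Z*(x'-x) + K*t = t_z*x' for p = (fx, s, u0, fy, v0, Z).

    obs          -- list of (x, x') pixel pairs, one per translation
    translations -- camera-frame translations t (the point moves to X - t)
    """
    A, rhs = [], []
    for (x, xp), t in zip(obs, translations):
        (u, v), (up, vp) = x, xp
        A.append([t[0], t[1], t[2], 0.0, 0.0, up - u]); rhs.append(t[2] * up)
        A.append([0.0, 0.0, 0.0, t[1], t[2], vp - v]); rhs.append(t[2] * vp)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    fx, s, u0, fy, v0, Z = p
    K = np.array([[fx, s, u0], [0.0, fy, v0], [0.0, 0.0, 1.0]])
    return K, Z

# Synthetic check: one scene point observed before/after five translations.
K_true = np.array([[900., 2., 320.], [0., 880., 240.], [0., 0., 1.]])
X0 = np.array([0.3, -0.2, 4.0])                  # scene point, camera frame
ts = [np.array(t) for t in [(0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.2),
                            (0.1, 0.1, 0.1), (-0.05, 0.08, 0.15)]]
obs = [(project(K_true, X0), project(K_true, X0 - t)) for t in ts]
K_est, Z_est = calibrate_from_translations(obs, ts)
```

The translations here are deliberately non-orthogonal, mirroring the point in the paragraph above that orthogonal motions are not required.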
     An algorithm is proposed for linearly solving the P5P problem with an uncalibrated camera based on vector differences. Vector differences are formed from the five control points, constraint equations on the camera pose and intrinsic parameters are set up from the orthogonality of the rotation R, and an analytic solution to uncalibrated P5P is derived within linear theory.
     Motion estimation from image sequences is also investigated: a linear algorithm is proposed based on parallel line segment (PLS) correspondences. Within the structure-from-motion (SFM) framework, a line segment is represented by two elements, a point and a line, and the spatial line segment structure is reconstructed step by step from image lines with the help of parallelism. Motion-parameter equations based on the two elements of the spatial segments are then established according to screw theory and solved using quaternions; the motion parameters are further refined by PSO optimization. The method is characterized by linear constraint equations with analytic solutions.
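The quaternion step, recovering a rotation linearly from corresponding 3D directions, can be sketched with the standard eigenvector formulation (Horn's method); the dissertation's own equations, built from the screw theorem, are not reproduced here:

```python
import numpy as np

def rotation_from_directions(A, B):
    """Find R with b_i = R a_i via the quaternion eigenvector method.

    A, B -- (n, 3) arrays of corresponding 3D directions.
    Builds Horn's symmetric 4x4 matrix N from S = sum a_i b_i^T; the
    eigenvector of the largest eigenvalue is the unit quaternion (w, x, y, z).
    """
    S = A.T @ B
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1], S[2,0]-S[0,2], S[0,1]-S[1,0]],
        [S[1,2]-S[2,1], S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0], S[2,0]+S[0,2]],
        [S[2,0]-S[0,2], S[0,1]+S[1,0], -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0], S[2,0]+S[0,2], S[1,2]+S[2,1], -S[0,0]-S[1,1]+S[2,2]],
    ])
    vals, vecs = np.linalg.eigh(N)
    w, x, y, z = vecs[:, -1]          # eigenvector of the largest eigenvalue
    return np.array([                 # quaternion -> rotation matrix
        [1-2*(y*y+z*z), 2*(x*y-w*z), 2*(x*z+w*y)],
        [2*(x*y+w*z), 1-2*(x*x+z*z), 2*(y*z-w*x)],
        [2*(x*z-w*y), 2*(y*z+w*x), 1-2*(x*x+y*y)],
    ])

def rot(axis, theta):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    S = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * S + (1 - np.cos(theta)) * (S @ S)

# Synthetic check: directions before/after a known rotation.
R_true = rot((0.3, -0.5, 0.8), 0.9)
A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
              [0.6, 0.8, 0.], [0.3, -0.4, 0.87]])
A /= np.linalg.norm(A, axis=1, keepdims=True)
B = A @ R_true.T                      # b_i = R a_i
R_est = rotation_from_directions(A, B)
```

The solve is linear-algebraic (one symmetric eigendecomposition), which is the property the paragraph above exploits before the PSO refinement stage.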
     Visual object detection and tracking based on mean shift iteration are discussed. Detection and tracking are complementary, but the classical mean shift tracker is not robust in its color-feature representation and requires complex matching iterations. A detection algorithm is therefore proposed based on mean shift clustering, with the object model represented by clustered mode points, and a hierarchical mean shift (HMS) iteration for object tracking is introduced. Matching between the object reference model and the candidate model is performed at two levels, first over the clustered blocks and then over the pixels within blocks, and the centroid of the tracked object is obtained layer by layer across consecutive frames. Single-object tracking is relatively simple, and good performance is obtained with the deterministic gradient iteration of the modified mean shift; multi-object visual tracking, however, must be handled by probabilistic reasoning because the number of objects is unknown and they interact with one another. A new approach to multi-object visual tracking is therefore proposed based on Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling. The tracking problem is formulated as computing the MAP (maximum a posteriori) estimate given the image observations. Four types of reversible jump moves are designed for the Markov chain dynamics, and a prior proposal distribution over objects is developed with the aid of an association match matrix, which improves the sampling confidence and makes the iteration efficient. The joint likelihood is measured at two levels, clustered block subsets (CBS) and pixels. Comparisons with two other MS algorithms demonstrate the validity, robustness, and performance of the HMS algorithm for single- and multi-object tracking.
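The core mean shift iteration that both the clustering-based detection and the tracking build on moves a point to the kernel-weighted mean of the samples around it until it reaches a density mode. A minimal sketch with a Gaussian kernel (illustrative only, not the hierarchical tracker itself):

```python
import numpy as np

def mean_shift_mode(samples, x0, bandwidth, tol=1e-6, max_iter=200):
    """Iterate x <- kernel-weighted mean of samples until it reaches a mode."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        d2 = ((samples - x) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)       # Gaussian kernel weights
        x_new = (w[:, None] * samples).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:          # converged to a mode
            break
        x = x_new
    return x

# Samples drawn around a known center; mean shift should climb to it.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[5.0, 3.0], scale=0.3, size=(500, 2))
mode = mean_shift_mode(samples, x0=[4.0, 2.0], bandwidth=1.0)
```

Running the same iteration from every sample and merging nearby endpoints gives the clustered mode points used to represent the object model.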
     For the application to a mobile condenser-cleaning robot, the robot's vision system and key vision techniques are studied to achieve autonomous robot movement and online cleaning of the condenser.
     Accordingly, a vision system is constructed, comprising a subsystem for guiding robot navigation and positioning and a subsystem for guiding the spray gun to the condenser tubes. Four video signals in total are transmitted to the control cabinet via an image acquisition card, and control decisions for the robot are made after the visual information is processed.
     A visual SLAM algorithm based on a 3D camera sensor is proposed so that the robot can move autonomously to the area of the condenser currently to be cleaned. A SwissRanger SR3000 camera senses the 3D environment and provides the mobile robot with both image and 3D data, which are treated as two attributes of each observation. The observation at time k is matched against the observation at time k-1 under the constraint of coupled BA and ICP, and the robot motion is estimated. SLAM is solved from the standpoint of vision theory, yielding 6-DOF localization of the mobile robot and 3D mapping of landmarks. The solution is simpler than traditional kinematic Kalman or particle filtering, requiring no prediction step, and localization and mapping are solved simultaneously. Compared with matching 3D laser data, the proposed algorithm uses the 2D image to guide the 3D data matching, so the search range is reduced and the computational efficiency of 3D SLAM is improved.
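The ICP half of the coupled matching step can be sketched in its simplest point-to-point form: alternate nearest-neighbour association with a closed-form rigid solve (Kabsch/SVD here, though a quaternion solve works equally). A toy version on synthetic clouds, not the dissertation's coupled ICP+BA:

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form R, t minimizing ||dst - (R src + t)||^2 (SVD/Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Point-to-point ICP: nearest-neighbour matching + Kabsch solve."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbour in dst for each moved source point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = kabsch(src, matched)
    return R, t

# Synthetic check: a grid cloud under a small known rigid motion.
g = np.linspace(-0.375, 0.375, 4)
src = np.array([[x, y, z] for x in g for y in g for z in g])
th = np.deg2rad(5.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.02, 0.01, 0.015])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

Guiding the 3D association with 2D image features, as the paragraph above describes, replaces the brute-force nearest-neighbour search and is what shrinks the search range.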
     A vision-based approach is presented to help the robot position the condenser tubes. The work area is partitioned offline into blocks according to its size and the camera's field of view; coarse positioning is performed first by counting blocks, and once the robot has moved to a given coarse position, the manipulator vision system precisely positions each tube in the current block. Using vision theory, the captured image is converted into the spatial positions of the tubes via tube detection, circle fitting, and center calculation.
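The circle-fitting step for locating a tube centre can be done with a simple algebraic (Kåsa) fit: writing the circle as x² + y² = 2ax + 2by + c makes the problem linear in (a, b, c), with centre (a, b) and radius √(c + a² + b²). A sketch on ideal edge points (the pixel coordinates are illustrative):

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), np.sqrt(c + a * a + b * b)

# Edge points sampled on a circle, e.g. a segmented tube opening.
theta = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([160 + 12 * np.cos(theta),
                       120 + 12 * np.sin(theta)])
center, radius = fit_circle(pts)
```

The fitted centre image point is what the back-projection step then converts into the tube's spatial position.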
