Research on Binocular Visual Servoing Control of the UP6 Manipulator
Abstract
Robot visual servoing is one of the most multidisciplinary and challenging topics in robotics research, drawing on control theory, robotics, computer vision, digital image acquisition and processing, embedded systems, network communication, and scientific computing. Current work falls into two broad categories: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). PBVS reconstructs a three-dimensional model of the target from known camera intrinsic and extrinsic parameters, and therefore places high demands on calibration and modeling accuracy; IBVS, by contrast, is robust to camera, robot, and environment modeling errors, but much of the literature remains hampered by the depth estimation problem (the standard point-feature interaction matrix quoted below shows where the depth enters).
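To make the depth dependence concrete, the interaction matrix of a single normalized image point s = (x, y), in the form found in the standard visual servoing tutorials, relates the image motion to the camera velocity screw v_c; the unknown depth Z of the point appears in all three translational columns:

\dot{s} = L_s\, v_c, \qquad
L_s =
\begin{bmatrix}
-\frac{1}{Z} & 0 & \frac{x}{Z} & x y & -(1 + x^2) & y \\
0 & -\frac{1}{Z} & \frac{y}{Z} & 1 + y^2 & -x y & -x
\end{bmatrix}

Because Z is not observable from a single image, monocular IBVS must estimate or approximate it online; a calibrated binocular system such as the one studied here can in principle recover Z by triangulation, which motivates the depth-independent binocular servoing model used later in the thesis.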
     On the basis of a systematic review of the current state of research on robot visual servoing, this thesis studies the following aspects of binocular vision systems:
     First, the left and right cameras of the binocular vision system are calibrated separately using OpenCV, the open-source C/C++ computer vision library; the transformation between the two cameras' extrinsic parameters and their relative pose, i.e. the structural parameters of the stereo rig, is then derived, which completes the calibration of the binocular vision system (a sketch of this step is given below).
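The extrinsics-to-structure step can be sketched as follows with OpenCV's C++ API. The thesis itself worked with the earlier C interface of OpenCV under VC6.0, so the function name and argument layout here are illustrative rather than the thesis code:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Each camera is calibrated on its own (e.g. with cv::calibrateCamera), giving
// its pose with respect to the same world/target frame:
//   X_l = R_l * X_w + T_l,   X_r = R_r * X_w + T_r
// Eliminating X_w yields the structural parameters of the stereo rig, i.e. the
// pose of the left camera expressed in the right camera frame:
//   X_r = R * X_l + T,  with  R = R_r * R_l^T,  T = T_r - R * T_l
void structureFromExtrinsics(const cv::Mat& rvecL, const cv::Mat& tvecL,
                             const cv::Mat& rvecR, const cv::Mat& tvecR,
                             cv::Mat& R, cv::Mat& T)
{
    cv::Mat RL, RR;
    cv::Rodrigues(rvecL, RL);   // rotation vectors from the per-camera calibration
    cv::Rodrigues(rvecR, RR);   // converted to 3x3 rotation matrices
    R = RR * RL.t();            // relative rotation  R = R_r * R_l^T
    T = tvecR - R * tvecL;      // relative translation  T = T_r - R * T_l
}

In current OpenCV the same rig can also be calibrated in a single call with cv::stereoCalibrate, which returns R and T directly; the derivation above is what the per-camera route summarized here amounts to.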
     Second, for the feature point matching task, the RANSAC algorithm is applied to SIFT feature matching, yielding an improved bidirectional error-correcting SIFT matching algorithm based on RANSAC; in addition, a feature point matching algorithm based on Harris corner detection with a location constraint is presented and successfully applied to real-time matching of the target feature points in the visual servoing experiments (a sketch of the SIFT/RANSAC idea follows).
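A minimal sketch of the cross-checked ("bidirectional") SIFT matching followed by RANSAC outlier rejection might look as follows. It assumes OpenCV 4.4 or later, where SIFT is part of the main features2d module, and uses a homography as the RANSAC model, so it illustrates the general idea rather than reproducing the improved algorithm of the thesis:

#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Cross-checked SIFT matching + RANSAC inlier selection (illustrative only).
std::vector<cv::DMatch> matchSiftRansac(const cv::Mat& imgL, const cv::Mat& imgR)
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpL, kpR;
    cv::Mat descL, descR;
    sift->detectAndCompute(imgL, cv::noArray(), kpL, descL);
    sift->detectAndCompute(imgR, cv::noArray(), kpR, descR);

    // crossCheck = true keeps a match only if it is mutual (L->R and R->L),
    // which is the "bidirectional" part of the scheme.
    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descL, descR, matches);
    if (matches.size() < 4) return matches;            // too few points for RANSAC

    // RANSAC: fit a homography and keep only the inlier correspondences.
    std::vector<cv::Point2f> ptsL, ptsR;
    for (const cv::DMatch& m : matches) {
        ptsL.push_back(kpL[m.queryIdx].pt);
        ptsR.push_back(kpR[m.trainIdx].pt);
    }
    std::vector<unsigned char> inlierMask;
    cv::findHomography(ptsL, ptsR, cv::RANSAC, 3.0, inlierMask);
    if (inlierMask.size() != matches.size()) return matches;

    std::vector<cv::DMatch> inliers;
    for (size_t i = 0; i < matches.size(); ++i)
        if (inlierMask[i]) inliers.push_back(matches[i]);
    return inliers;
}

For a general 3D scene the fundamental matrix (cv::findFundamentalMat with cv::FM_RANSAC) is the more natural RANSAC model than a homography; for the corner-based variant, cv::goodFeaturesToTrack with useHarrisDetector set to true provides the strong Harris corners that are then matched under the location constraint.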
     Third, the binocular visual servoing control model is applied to a real robot visual servoing system. On a binocular visual servoing hardware platform with the MOTOMAN-UP6 manipulator as the controlled object, the image acquisition and processing programs, the robot motion and control programs, and the control algorithm are written in VC6.0, with the manipulator driven by calling the serial communication interface functions of its controller. On the resulting experimental platform the depth-independent servoing model is verified, and the error analysis shows that it is valid and practical (a generic serial communication sketch is given below).
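On the robot-motion side, the VC6.0 programs ultimately send command frames to the manipulator controller over a serial line. The Win32 sketch below shows only the generic open/configure/write pattern; the MOTOMAN-UP6 interface functions, port settings and command format are controller-specific and are not reproduced here, so the "MOVJ ..." string is a purely hypothetical placeholder:

#include <windows.h>
#include <cstring>

// Generic Win32 serial transmission (illustrative; not the controller's actual
// protocol). Opens a COM port, applies assumed 9600-8-N-1 settings and writes
// one command string.
bool sendCommand(const char* port, const char* cmd)
{
    HANDLE h = CreateFileA(port, GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return false;

    DCB dcb = {0};
    dcb.DCBlength = sizeof(DCB);
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;        // assumed; the real controller setting may differ
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    DWORD written = 0;
    DWORD len = (DWORD)std::strlen(cmd);
    BOOL ok = WriteFile(h, cmd, len, &written, NULL);
    CloseHandle(h);
    return ok && written == len;
}

// Usage (hypothetical command string):
//   sendCommand("COM1", "MOVJ ...\r\n");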
     Finally, to address the coupling between the translational and rotational motion of the robot end-effector observed in the experiments, a switching control method is proposed. Borrowing the idea of sliding-mode variable structure control, it divides the servoing process into three phases, posture adjustment, positioning, and precise positioning, and thereby avoids the coupling that can occur when a single fixed-structure image Jacobian is used for the whole servo loop. Comparative binocular visual servoing experiments on the MOTOMAN-UP6 manipulator show that the switching method gives better stability and a better positioning trajectory, and largely resolves the translation-rotation coupling of the end-effector in practice (a conceptual sketch of the switching logic follows).
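As a conceptual illustration of the switching idea only, the sketch below masks the translational or rotational half of a classical pseudo-inverse image-Jacobian command according to the current phase, and switches phases when the image error falls below a threshold. The gain, the thresholds, the error measure and the velocity ordering are assumptions made for the sake of the example; the thesis derives its own image Jacobian and switching conditions:

#include <opencv2/core.hpp>

// Three-phase switching control: adjust posture -> position -> precise
// positioning. e is the stacked image feature error (2N x 1, CV_64F), J the
// image Jacobian (2N x 6, CV_64F); the returned 6x1 vector is the end-effector
// velocity screw ordered (vx, vy, vz, wx, wy, wz).
enum Phase { ADJUST_POSTURE, POSITION, PRECISE };

cv::Mat switchingControl(const cv::Mat& e, const cv::Mat& J, Phase& phase)
{
    const double lambda    = 0.5;   // assumed proportional gain
    const double EPS_ROT   = 5.0;   // assumed threshold for leaving the posture phase
    const double EPS_TRANS = 2.0;   // assumed threshold for entering precise positioning

    // Classical IBVS-style law: v = -lambda * J^+ * e (pseudo-inverse via SVD).
    cv::Mat v = -lambda * (J.inv(cv::DECOMP_SVD) * e);

    // Phase transitions driven by the remaining image error.
    double err = cv::norm(e);
    if (phase == ADJUST_POSTURE && err < EPS_ROT)   phase = POSITION;
    if (phase == POSITION       && err < EPS_TRANS) phase = PRECISE;

    // Decoupling by masking: rotate only, then translate only, then full 6 DOF.
    if (phase == ADJUST_POSTURE)
        v.rowRange(0, 3) = cv::Scalar(0.0);   // suppress translation while adjusting posture
    else if (phase == POSITION)
        v.rowRange(3, 6) = cv::Scalar(0.0);   // suppress rotation while positioning
    return v;
}

The masking used here is only one way to realize the staged behaviour; the essential point, matching the comparison above, is that translation and rotation are not commanded through the same single-structure Jacobian at the same time until the final precise-positioning phase.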
