Research on robot grasping based on improved grasp quality convolutional neural network
  • Authors: Cheng Chaopeng; Zhang Ying; Mou Qingping; Zhang Dongbo; Xue Liang
  • Affiliations: College of Information Engineering, Xiangtan University; National Engineering Laboratory for Robotic Visual Perception and Control Technology
  • Keywords: dual-arm robot; Bursa coordinate transformation model; grasp quality convolutional neural network
  • Journal: Journal of Electronic Measurement and Instrumentation (电子测量与仪器学报; CNKI code DZIY)
  • Publication date: 2019-05-15
  • Year: 2019
  • Volume/Issue: Vol. 33, No. 221 (Issue 05)
  • Funding: National Natural Science Foundation of China (61773330); Natural Science Foundation of Hunan Province (2017JJ2251)
  • Language: Chinese
  • Pages: 85-92 (8 pages)
  • CN: 11-2488/TN
  • Record ID: DZIY201905011
Abstract
To address autonomous grasping by ABB's YuMi dual-arm robot in unstructured environments, a reliable grasping algorithm based on the Kinect-2.0 depth camera is studied. First, a Bursa coordinate transformation model between the camera coordinate system and the robot coordinate system is established and solved with the iterative closest point (ICP) algorithm. Then, pixels in the captured depth image are filtered by a gradient-magnitude threshold, grasp candidates are generated from the retained pixels by rejection sampling, and an improved grasp quality convolutional neural network (GQ-CNN) selects the grasp pose with the highest grasp quality. Finally, the grasp-point coordinates are transformed into the robot coordinate system to execute the grasp. Experimental results demonstrate that the method reliably detects the best grasp point of an object and achieves grasping of different objects.
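As a sketch of the first step: the Bursa model is a seven-parameter similarity transform (rotation R, translation t, scale factor 1+k), and ICP alternates nearest-neighbour matching with a closed-form fit. The Python below is illustrative, not the authors' code; `fit_bursa` and `icp_bursa` are hypothetical names, and the closed-form step uses the standard SVD (Umeyama) solution on NumPy point arrays.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_bursa(src, dst):
    """Closed-form scale/rotation/translation (Umeyama) for matched 3D points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S)          # SVD of the cross-covariance
    e = np.ones(3)
    if np.linalg.det(U @ Vt) < 0:                # guard against reflections
        e[-1] = -1.0
    R = U @ np.diag(e) @ Vt
    scale = (sig * e).sum() / (S ** 2).sum()     # the (1 + k) Bursa scale
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def icp_bursa(src, dst, iters=50, tol=1e-8):
    """Alternate nearest-neighbour matching and Bursa fitting (basic ICP)."""
    tree = cKDTree(dst)
    prev_err = np.inf
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # closest-point correspondences
        scale, R, t = fit_bursa(src, dst[idx])
        cur = scale * src @ R.T + t
        err = np.mean(np.linalg.norm(cur - dst[idx], axis=1))
        if abs(prev_err - err) < tol:            # stop once the fit stabilises
            break
        prev_err = err
    return scale, R, t
```

Once calibrated this way, the pipeline's final step is immediate: a camera-frame grasp point x maps to the robot frame as scale * R @ x + t.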
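The candidate-generation step (gradient-threshold pixel filtering followed by rejection sampling) might look like the sketch below. Every name and threshold here is illustrative rather than taken from the paper, the input is assumed to be a 2D NumPy depth image, and the antipodal acceptance test is a common stand-in for the paper's unspecified sampling rule.

```python
import numpy as np

def grasp_candidates(depth, grad_thresh=0.01, n_samples=200, max_width=40,
                     seed=0):
    """Sample antipodal pixel pairs on depth edges as grasp candidates."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(depth)                  # per-pixel depth gradients
    mag = np.hypot(gx, gy)
    edge = np.argwhere(mag > grad_thresh)        # keep strong-gradient pixels
    candidates = []
    for _ in range(n_samples * 50):              # bounded rejection sampling
        if len(candidates) >= n_samples or len(edge) < 2:
            break
        p, q = edge[rng.choice(len(edge), size=2, replace=False)]
        v = (q - p).astype(float)
        width = np.linalg.norm(v)
        if not (0 < width <= max_width):         # reject: wider than the jaws
            continue
        v /= width
        n_p = np.array([gy[tuple(p)], gx[tuple(p)]])   # image-plane normals
        n_q = np.array([gy[tuple(q)], gx[tuple(q)]])
        n_p /= np.linalg.norm(n_p) + 1e-9
        n_q /= np.linalg.norm(n_q) + 1e-9
        # accept only roughly antipodal pairs: normals aligned with the
        # grasp axis and pointing in opposite directions
        if abs(n_p @ v) > 0.9 and abs(n_q @ v) > 0.9 and n_p @ n_q < 0:
            candidates.append((tuple(p), tuple(q)))
    return candidates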
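Ranking the candidates then amounts to scoring a depth patch centred on each one and keeping the highest-quality grasp. The sketch below leaves the network abstract: `score_patch` is a hypothetical callable standing in for the improved GQ-CNN, whose architecture the record does not describe.

```python
import numpy as np

def best_grasp(depth, candidates, score_patch, half=16):
    """Return the candidate whose centred depth patch scores highest."""
    best, best_q = None, -np.inf
    for p, q in candidates:
        cy, cx = (np.add(p, q) // 2).astype(int)  # grasp centre pixel
        if cy < half or cx < half:                # skip near-border grasps
            continue
        patch = depth[cy - half:cy + half, cx - half:cx + half]
        if patch.shape != (2 * half, 2 * half):
            continue
        quality = score_patch(patch)              # GQ-CNN-style quality score
        if quality > best_q:
            best, best_q = (p, q), quality
    return best, best_q
```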
References
[1]杨扬.基于机器视觉的服务机器人智能抓取研究[D].上海:上海交通大学,2014.YANG Y.Study on the machine vision based intelligent grasping for service robot[D].Shanghai:Shanghai Jiaotong University,2014.
    [2]SAHBANI A,EL-KHOURY S,BIDAUD P.An overview of 3D object grasp synthesis algorithms[J].Robotics and Autonomous Systems,2012,60(3):326-336.
    [3]BOHG J,MORALES A,ASFOUR T,et al.Data-driven grasp synthesis-a survey[J].IEEE Transactions on Robotics,2014,30(2):289-309.
    [4]FERRARI C,CANNY J.Planning optimal grasps[C].IEEE International Conference on Robotics and Automation,1992:2290-2295.
    [5]LIU G,XU J,WANG X,et al.On quality functions for grasp synthesis,fixture planning,and coordinated manipulation[J].IEEE Transactions on Automation Science and Engineering,2004,1(2):146-162.
    [6]SAXENA A,DRIEMEYER J,NG A Y.Robotic grasping of novel objects using vision[J].The International Journal of Robotics Research,2008,27(2):157-173.
    [7]JIANG Y,MOSESON S,SAXENA A.Efficient grasping from RGBD images:Learning using a new rectangle representation[C].IEEE International Conference on Robotics and Automation(ICRA),2011:3304-3311.
    [8]LENZ I,LEE H,SAXENA A.Deep learning for detecting robotic grasps[J].The International Journal of Robotics Research,2015,34(4-5):705-724.
    [9]REDMON J,ANGELOVA A.Real-time grasp detection using convolutional neural networks[C].IEEE International Conference on Robotics and Automation(ICRA),2015:1316-1322.
    [10]KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[C].Advances in Neural Information Processing Systems,2012:1097-1105.
    [11]MAHLER J,LIANG J,NIYAZ S,et al.Dex-Net 2.0:Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics[J].Robotics,2017,arXiv:1703.09312.
    [12]BESL P J,MCKAY N D.Method for registration of 3-D shapes[C].Sensor Fusion IV:Control Paradigms and Data Structures,International Society for Optics and Photonics,1992,1611:586-607.
    [13]BIBER P,STRASSER W.The normal distributions transform:A new approach to laser scan matching[C].IROS,2003,3:2743-2748.
    [14]JOHNS E,LEUTENEGGER S,DAVISON A J.Deep learning a grasp function for grasping under gripper pose uncertainty[C].IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS),2016:4461-4468.
    [15]SZEGEDY C,LIU W,JIA Y,et al.Going deeper with convolutions[C].Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2015:1-9.
    [16]WANG F,ZHANG Y,ZHANG D B,et al.Research on application of convolutional neural networks in face recognition based on shortcut connection[J].Journal of Electronic Measurement and Instrument,2018,32(4):80-86.
    [17]YANG L,ZHANG L,DONG H,et al.Evaluating and improving the depth accuracy of Kinect for Windows v2[J].IEEE Sensors Journal,2015,15(8):4275-4285.
    [18]MAHLER J,MATL M,LIU X,et al.Dex-Net 3.0:Computing robust robot vacuum suction grasp targets in point clouds using a new analytic model and deep learning[J].Robotics,2017,arXiv:1709.06670.
