SpaceMocap: An On-Orbit Human Motion Capture System
  • English title: SpaceMocap: Space Motion Capture System
  • Authors: LI You; WANG Chun-hui; YAN Qu; ZHANG Xiao-hu; XIE Liang
  • Affiliations: National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center; College of Aerospace Science and Engineering, National University of Defense Technology
  • Keywords: human posture; motion capture; computer vision; astronaut; on-orbit
  • Journal: Journal of Astronautics (宇航学报, YHXB)
  • Publication date: 2019-06-30
  • Year: 2019; Volume: 40; Issue: 06
  • Pages: 119-126 (8 pages)
  • Record number: YHXB201906014
  • CN: 11-2053/V
  • Funding: China Manned Space Engineering Program; Manned Space Advanced Research Fund Project (030602)
  • Language: Chinese
Abstract
SpaceMocap is a computer-vision astronaut motion capture system based on multiple RGB-D cameras. In the ground preparation phase, a 3D model of each astronaut is scanned and the intrinsic parameters of the color cameras are calibrated individually. In the on-orbit acquisition phase, three to four cameras are mounted in the cabin corners and synchronously record video of the astronaut's tasks. In the ground processing phase, the point clouds are fused by calibrating the extrinsic parameters of the cameras and applying the ICP method; a deep neural network detects the positions of the human joints and initializes the pose parameters; an improved ICP method then refines the pose, yielding joint-angle tracking across the image sequence. The system was launched aboard TG-2 and used to acquire and process the mission video of the SZ-11 astronauts, obtaining for the first time important data on on-orbit astronauts' posture (including the neutral body posture), occupied space, and motion parameters. The results show that the captured model coincides well with the point clouds, and that the joint positions and joint angles are tracked with high accuracy. SpaceMocap is China's first on-orbit motion capture system. It is small and lightweight, and offers the non-contact measurement, intuitiveness, and high precision characteristic of computer vision. It requires no markers attached to the human body, has good robustness to occlusion, and is well suited to on-orbit use in a microgravity, confined-space environment.
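The abstract does not name the software used for the extrinsic-calibration-plus-ICP fusion step. As an illustration only, the following minimal Python sketch shows how the alignment of point clouds from two depth cameras described above could be reproduced with the open-source Open3D library; the file names, the identity initial transform, and the 2 cm correspondence threshold are assumptions, not values from the paper.

    import numpy as np
    import open3d as o3d  # open-source 3D library, assumed here; not named in the paper

    # Point clouds captured by two of the depth cameras (hypothetical file names).
    source = o3d.io.read_point_cloud("cam1.pcd")
    target = o3d.io.read_point_cloud("cam2.pcd")

    # Initial alignment from the extrinsic calibration between the two cameras.
    # The identity matrix is only a placeholder for the calibrated 4x4 rigid transform.
    T_init = np.eye(4)

    # Point-to-plane ICP needs surface normals on both clouds.
    source.estimate_normals()
    target.estimate_normals()

    # Refine the calibrated alignment with ICP (2 cm correspondence threshold assumed).
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.02, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())

    # Fuse: map the source cloud into the target frame and merge the two clouds.
    source.transform(result.transformation)
    fused = source + target
    print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)

In such a setup the extrinsic calibration supplies the coarse initial pose and ICP only corrects residual misalignment, which is the role the abstract assigns to its (improved) ICP refinement.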
