Research on driving virtual hand algorithm based on Kinect depth data (基于Kinect深度信息的虚拟手驱动算法研究)
Details
  • Authors: SUN Nongliang; WANG Weizhi
  • Affiliation: College of Electronics and Information Engineering, Shandong University of Science and Technology
  • Keywords: human-computer interaction; Kinect; Bayes algorithm; Aiolos library; virtual hand; driving algorithm
  • Journal: Journal of Shandong University of Science and Technology (Natural Science) (山东科技大学学报(自然科学版))
  • CNKI journal code: SDKY
  • Publication date: 2019-07-09
  • Year: 2019
  • Volume/Issue: v.38, No.183 (Issue 04)
  • Funding: Subproject of the National "863" Program (2015AA016404-4); Leading Talent Program of Shandong University of Science and Technology
  • Language: Chinese
  • CN: 37-1357/N
  • Pages: 97-104 (8 pages)
  • CNKI record number: SDKY201904012
Abstract
To improve the realism and immersion of human-computer interaction and let the human hand take part in the interaction directly, the Aiolos library and a naive Bayes algorithm are used to track hand data from Kinect depth information and drive a virtual hand. Raw data such as the depth image and skeleton information are acquired with a Kinect sensor and calibrated using a calibration equation and the least-squares method. The Aiolos library is combined with the naive Bayes algorithm to track the hand data, which are exponentially smoothed and transmitted across platforms; the resulting data stream is bound to the virtual hand's skeleton through the TransformBone controller, so that the hand-tracking data drive the virtual hand. Compared with conventional, more constrained interaction methods, the proposed virtual hand driving approach exploits the many degrees of freedom of the human hand, which is of significance for improving interactivity and immersion in virtual reality research.
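As a rough illustration of the smoothing step in the pipeline described above, the sketch below shows single-exponential smoothing of tracked 3D hand-joint positions before they would be passed on to drive virtual-hand bones. It is not the paper's implementation (which uses Kinect, the Aiolos library and a TransformBone controller); the JointSmoother class, the joint names and the alpha value are illustrative assumptions.

    import numpy as np

    ALPHA = 0.35  # assumed smoothing factor; smaller = smoother but more latency

    class JointSmoother:
        """Single-exponential smoothing of 3D joint positions, one state per joint."""
        def __init__(self, alpha=ALPHA):
            self.alpha = alpha
            self.state = {}  # joint name -> last smoothed position

        def update(self, raw_joints):
            """raw_joints: dict mapping joint name to an (x, y, z) position."""
            smoothed = {}
            for name, pos in raw_joints.items():
                pos = np.asarray(pos, dtype=float)
                prev = self.state.get(name)
                # s_t = alpha * x_t + (1 - alpha) * s_{t-1}
                smoothed[name] = pos if prev is None else self.alpha * pos + (1 - self.alpha) * prev
                self.state[name] = smoothed[name]
            return smoothed

    # One (hypothetical) frame of fingertip positions from the tracker:
    smoother = JointSmoother()
    frame = {"index_tip": (0.12, 0.05, 0.68), "thumb_tip": (0.09, 0.03, 0.66)}
    print(smoother.update(frame))

The choice of alpha trades jitter suppression against latency; the smoothed positions would then be mapped to bone transforms on the virtual hand.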
