Abstract
A new method for real-time detection of squat posture is proposed to address the problem that exercisers, lacking supervision and guidance during workouts, may adopt incorrect postures that can even endanger their health. The deep squat, one of the most common exercises, is abstracted and modeled from the three-dimensional joint information captured by a Kinect camera, addressing the difficulty that computer-vision techniques have in detecting subtle changes of movement. First, the Kinect camera captures depth images to obtain the three-dimensional coordinates of the human body's joints in real time. Then, the squat posture is abstracted into torso, hip, knee, and ankle angles, which are modeled digitally and recorded frame by frame. Finally, after the squat is completed, a threshold comparison is used to compute the ratio of non-standard frames within the given time window: if the ratio exceeds the given threshold, the squat is judged non-standard; otherwise, it is judged standard. Experiments on six different types of squat show that the proposed method can detect the corresponding types of non-standard squats, achieving an average recognition rate above 90% across the six types, and can thus serve to remind and guide exercisers.
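The two core steps described in the abstract, computing joint angles from three-dimensional joint coordinates and judging a squat by the ratio of non-standard frames, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the angle names, the threshold ranges, and the 0.2 non-standard-frame ratio are all assumed values for demonstration.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 3-D points a-b-c
    (e.g. hip-knee-ankle coordinates from the Kinect skeleton)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def is_standard_squat(frame_angles, limits, max_bad_ratio=0.2):
    """Threshold-comparison decision over a recorded squat.

    frame_angles: per-frame angle readings, e.g. [{'knee': 85.0, ...}, ...]
    limits: per-angle (low, high) acceptable ranges (assumed values)
    A frame is non-standard if any angle falls outside its range; the
    squat is non-standard when the non-standard-frame ratio exceeds
    max_bad_ratio (the threshold of the abstract, value assumed here).
    """
    bad = sum(
        1 for frame in frame_angles
        if any(not (lo <= frame[k] <= hi) for k, (lo, hi) in limits.items())
    )
    return bad / len(frame_angles) <= max_bad_ratio
```

For example, with an assumed acceptable knee-angle range of 60–120 degrees, a recording in which most frames stay inside the range is judged standard, while one whose knee angle stays near full extension for most frames is judged non-standard.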