Abstract
A visual positioning method for indoor unmarked dynamic targets is proposed in this paper. A Kalman filter is used to estimate the image coordinates of the dynamic target, and the target is located within the multi-camera field of view by a partition positioning method: according to the number of cameras that observe the target, the combined field of view is divided into a monocular area, a binocular area, and a multi-view area, and a different positioning method is applied in each area. Simulation and real-world experiments show that the proposed algorithm achieves a positioning accuracy of about 10 cm at a processing time of about 50 ms per frame. The proposed unmarked dynamic target localization method is more robust than marker-based approaches, essentially meets real-time requirements, and provides high positioning accuracy.
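The Kalman-filter estimation of the target's image coordinates mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the constant-velocity motion model, the noise covariances, and the `PixelKalman` class name are all assumptions introduced here for clarity.

```python
import numpy as np

class PixelKalman:
    """Constant-velocity Kalman filter over image coordinates (u, v).

    State x = [u, v, du, dv]; measurements are raw pixel coordinates.
    All noise magnitudes below are illustrative, not tuned values.
    """

    def __init__(self, u0, v0, dt=1.0):
        self.x = np.array([u0, v0, 0.0, 0.0])      # initial state estimate
        self.P = np.eye(4) * 100.0                 # large initial uncertainty
        self.F = np.eye(4)                         # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # we observe (u, v) only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.1                   # process noise (assumed)
        self.R = np.eye(2) * 4.0                   # pixel measurement noise (assumed)

    def predict(self):
        """Propagate the state one frame ahead; returns predicted (u, v)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, u, v):
        """Fuse a pixel measurement into the state estimate."""
        y = np.array([u, v]) - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Track a target drifting ~5 px/frame along u, then predict the next frame.
kf = PixelKalman(100.0, 200.0)
for t in range(1, 6):
    kf.predict()
    kf.update(100.0 + 5.0 * t, 200.0)
pred_u, pred_v = kf.predict()   # estimated image coordinates for the next frame
```

In a multi-camera setup, such a predicted pixel position per camera tells which cameras currently see the target, which in turn selects the monocular, binocular, or multi-view positioning branch described in the abstract.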