Research on Vision-Assisted UAV Landing Based on Different Fields of View
Abstract

To improve the autonomy and intelligence of UAV landing, this paper studies a vision-assisted autonomous landing method for fixed-wing UAVs. Based on an analysis of the landing process, vision-based landing is divided into three stages: a remote stage, a middle stage and a close stage. In each stage the UAV applies a different visual strategy, recognizes different targets and achieves a different recognition accuracy.

     In the remote stage, the main task of visual recognition is to locate the airport by identifying the airport itself and the main environmental landmarks around it. The image is first divided into 16×16 sub-blocks. Each sub-block is decomposed at multiple scales with the Contourlet transform, moment and texture features are extracted at each scale, and these features form the sub-block's feature vector. Comparing each sub-block's feature vector with a feature library of airport environmental landmarks identifies the environment class the sub-block belongs to, and the airport is located from these classifications. In the middle stage, the main recognition target is the runway, which is identified by template matching. An improved fast SIFT algorithm extracts SIFT features from the template and from the background, and the template's features are compared with all background features to find matching feature points. When enough matched points are found, the runway is judged to be present and its position is determined. In the close stage, the main task is to recognize the landing mark. An "H"-shaped landing mark is designed and is segmented from the runway by the color difference between the two. Because real environments are complex, a robust Hough transform first detects the straight edges of the mark, and feature points on the mark are then obtained as the intersections of the fitted line equations. The camera is calibrated online using the planar homography, and the UAV's attitude and position are estimated by establishing coordinate systems and solving the camera's extrinsic parameter matrix.

     Dividing the vision-based landing process into three stages is well founded: the stages connect continuously, and each stage has its own recognition focus and a light recognition load. Experimental results show that the remote-stage algorithm recognizes the main environmental landmarks of the terrain well, the middle-stage SIFT feature matching locates the runway accurately, and the close-stage online calibration and pose estimation meet the needs of autonomous UAV landing.
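The remote-stage sub-block classification amounts to a nearest-neighbour lookup of per-block feature vectors against a labelled feature library. The sketch below is a simplified stand-in, not the thesis's implementation: it skips the Contourlet multi-scale decomposition and computes intensity moments plus a texture proxy directly on a 16×16 block; the function names and feature choice are illustrative assumptions.

```python
import math

def block_features(block):
    """Feature vector for one 16x16 grayscale sub-block: mean intensity,
    standard deviation (a texture proxy), and two intensity-weighted
    second-order central moments over pixel coordinates."""
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    var = sum((p - mean) ** 2 for row in block for p in row) / n
    total = sum(sum(row) for row in block) or 1.0
    cx = sum(x * p for row in block for x, p in enumerate(row)) / total
    cy = sum(y * p for y, row in enumerate(block) for p in row) / total
    mu20 = sum((x - cx) ** 2 * p for row in block for x, p in enumerate(row)) / total
    mu02 = sum((y - cy) ** 2 * p for y, row in enumerate(block) for p in row) / total
    return [mean, math.sqrt(var), mu20, mu02]

def classify_block(block, library):
    """Assign the block to the library class whose stored feature vector
    is nearest in Euclidean distance. `library` maps class name -> vector."""
    f = block_features(block)
    return min(library, key=lambda c: math.dist(f, library[c]))
```

Classifying every sub-block this way yields an environment-class map of the image, from which the airport region is then located.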
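The middle-stage decision rule (declare the runway present once enough template features find matches in the background) can be illustrated with a Lowe-style ratio test over descriptor sets. Real SIFT descriptors are 128-dimensional; here plain float lists stand in, and the ratio and `min_matches` thresholds are assumed values, not the thesis's.

```python
import math

def match_count(template_desc, scene_desc, ratio=0.8):
    """Count template descriptors whose nearest scene descriptor passes
    the ratio test: nearest distance / second-nearest distance < ratio."""
    count = 0
    for t in template_desc:
        dists = sorted(math.dist(t, s) for s in scene_desc)
        if len(dists) >= 2 and dists[1] > 0 and dists[0] / dists[1] < ratio:
            count += 1
    return count

def runway_present(template_desc, scene_desc, min_matches=10):
    """Declare the runway present when enough template features match."""
    return match_count(template_desc, scene_desc) >= min_matches
```

Once the presence test passes, the positions of the matched keypoints in the background image localize the runway.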
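The close-stage step of obtaining landing-mark feature points as intersections of detected edge lines has a direct closed form when the Hough transform reports lines in normal form x·cosθ + y·sinθ = ρ. A minimal sketch:

```python
import math

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection of two lines given in Hough normal form
    x*cos(theta) + y*sin(theta) = rho. Returns None for (near-)parallel lines."""
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1          # zero when the two lines are parallel
    if abs(det) < 1e-9:
        return None
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return (x, y)
```

Intersecting the edge lines of the "H" mark pairwise in this way yields the image feature points used for calibration.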
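The final step, solving the extrinsic parameters from the plane homography, follows the standard decomposition H ~ K[r1 r2 t] for a world plane at Z = 0 (as in plane-based calibration). The sketch below is a simplified version assuming a known intrinsic matrix K and a noise-free H; with real data r1 and r2 would additionally be re-orthogonalized.

```python
import math

def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def pose_from_homography(K, H):
    """Recover rotation columns r1, r2, r3 and translation t from a plane
    homography H ~ K [r1 r2 t] (world plane at Z = 0)."""
    Kinv = inv3(K)
    h1 = mat_vec(Kinv, [H[i][0] for i in range(3)])
    h2 = mat_vec(Kinv, [H[i][1] for i in range(3)])
    h3 = mat_vec(Kinv, [H[i][2] for i in range(3)])
    lam = 1.0 / math.sqrt(sum(x * x for x in h1))   # scale fixed by ||r1|| = 1
    r1 = [lam * x for x in h1]
    r2 = [lam * x for x in h2]
    r3 = [r1[1] * r2[2] - r1[2] * r2[1],            # r3 = r1 x r2
          r1[2] * r2[0] - r1[0] * r2[2],
          r1[0] * r2[1] - r1[1] * r2[0]]
    t = [lam * x for x in h3]
    return r1, r2, r3, t
```

The recovered rotation gives the UAV's attitude relative to the landing-mark plane and t its position, which is what the close-stage guidance consumes.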
