Abstract
To enhance the surface information of objects in three-dimensional scenes, this paper presents a three-dimensional reconstruction method based on colored-square pseudo-random coded structured light. A color structured light pattern is projected onto the target object, corner points are extracted from the image captured by the camera, and the corner points are described with the FREAK feature descriptor. Feature points are then matched under the epipolar constraint by minimizing the Hamming distance, and the three-dimensional information of the object is recovered by triangulation, completing the reconstruction. The results show that, with only a single structured light pattern projected, the root-mean-square error of reconstructing a planar object is 0.36 mm. Experiments demonstrate that the proposed method achieves high accuracy and can be applied to the three-dimensional reconstruction of non-colored objects.
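The matching step described above pairs binary descriptors by minimum Hamming distance. As a minimal sketch (not the paper's implementation), the following brute-force matcher XORs descriptor bytes and counts set bits; the toy 2-byte descriptors are illustrative, whereas real FREAK descriptors are 64 bytes:

```python
import numpy as np

def hamming_match(desc1, desc2):
    """Match each binary descriptor in desc1 to the descriptor in desc2
    with the minimum Hamming distance. Rows are uint8 byte arrays."""
    # XOR every pair of descriptors, then count differing bits.
    xor = desc1[:, None, :] ^ desc2[None, :, :]       # shape (n1, n2, n_bytes)
    dist = np.unpackbits(xor, axis=2).sum(axis=2)     # Hamming distance matrix
    return dist.argmin(axis=1), dist.min(axis=1)

# Toy descriptors (hypothetical values for illustration).
d1 = np.array([[0b10101010, 0b00001111]], dtype=np.uint8)
d2 = np.array([[0b10101010, 0b00001110],    # differs by 1 bit
               [0b01010101, 0b11110000]],   # differs by 16 bits
              dtype=np.uint8)
idx, dist = hamming_match(d1, d2)
print(idx, dist)  # → [0] [1]
```

In practice the search would be restricted to candidates lying near the corresponding epipolar line, which both prunes the cost matrix and suppresses false matches.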
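Once correspondences are fixed, each 3D point follows from triangulation. The sketch below uses standard linear (DLT) triangulation with hypothetical camera parameters, since the paper's calibration values are not given here:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # solution is the right singular vector
    X = Vt[-1]                    # of the smallest singular value
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical setup: identical intrinsics, second view translated 100 mm in x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([50.0, -20.0, 600.0])                       # point in mm
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 3))  # recovers [50., -20., 600.]
```

With noise-free synthetic correspondences the DLT solution is exact up to numerical precision; with real detections, the reprojection error of the recovered points is what an RMSE figure such as the 0.36 mm above summarizes.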