Catadioptric Omnidirectional Stereo Imaging
Abstract
Conventional optical imaging systems have a small field of view and record only the light intensity of a scene, so the images they produce lack both immersion and a sense of depth. In recent years, with the development of robot navigation and virtual reality technology, the need for stereoscopic perception and reproduction of scenes over a large (even full) field of view has grown steadily, and, driven by progress in optoelectronics, computer vision and computer graphics, a number of omnidirectional stereo imaging techniques and methods have emerged.
    This dissertation studies the theory, methods and techniques of catadioptric omnidirectional imaging and binocular stereo imaging in order to acquire three-dimensional information about the scene over a hemispherical field of view and thereby realize omnidirectional stereo imaging. The work consists of three parts: the first part (Chapters 2 to 4) studies the theory and methods of catadioptric omnidirectional imaging; the second part (Chapters 5 to 8) studies the fundamental theory and methods of binocular stereo vision; the last part (Chapter 9) combines catadioptric omnidirectional imaging with binocular stereo vision to obtain panoramic stereo information about the scene. The main results are as follows:
    1. A systematic design method for single-viewpoint catadioptric omnidirectional imaging systems was established. Hyperboloidal and paraboloidal catadioptric omnidirectional imaging systems were designed and built with this method; the inverse-projection formulas of both types of system were derived, and methods were established for generating perspective images and cylindrical panoramic images from the original omnidirectional image.
    2. A systematic design method for non-single-viewpoint catadioptric omnidirectional imaging systems was established. Catadioptric omnidirectional systems that image a horizontal scene without distortion and systems that image a cylindrical scene without distortion were designed and built with this method; the effects of lens distortion and system misalignment on the horizontal-scene distortion-free system were analyzed, and a mirror design method that eliminates lens distortion was proposed.
    3. Stereo-pair matching based on SUSAN (smallest univalue segment assimilating nucleus) corner features was studied. Two feature matching techniques, one using the correlation score alone as the confidence measure and one using a confidence measure that combines correlation and continuity, were compared in terms of computational cost and false-match probability, providing a fast and accurate way of obtaining the matched point sets needed for epipolar geometry estimation and stereo-pair rectification.
    4. The relationship between the fundamental matrix and the epipolar transformation was studied, and transition formulas were established for passing from linear to nonlinear methods of estimating the epipolar geometry of a stereo pair.
    5. A planar projective (homography-based) rectification method for stereo pairs that requires no camera calibration was proposed. The method uses the fundamental matrix for guidance while letting the coordinates of the corresponding points play the decisive role; it effectively avoids local minima in the nonlinear optimization and does not depend excessively on the computation of the fundamental matrix, giving a fast, high-accuracy rectification method.
    6. A method for rapidly obtaining dense disparity maps with normalized-covariance (zero-mean normalized cross-correlation) area correlation was studied. Pyramid image matching, box filtering and a constrained disparity search range are used to accelerate matching, and the ambiguity among candidate matches is removed by bidirectional matching, laying the theoretical and methodological foundation for a real-time binocular stereo vision system.
    7. A method for realizing panoramic stereo vision with hyperboloidal catadioptric omnidirectional imaging systems was proposed; the range measurement formula was derived, an experimental setup was built, panoramic stereo image pairs were acquired, and dense disparity maps reflecting the depth of the scene were extracted from them.
The conventional optical imaging system has a small field of view and records only light intensity information, so the images it produces lack both immersion and a sense of depth. In recent years, with the development of robot navigation and virtual reality techniques, the demand for stereoscopic perception and reproduction of scenes over a large field of view has been growing steadily. Driven by advances in optoelectronics, computer vision and computer graphics, a number of omnidirectional stereo imaging techniques and methods have appeared.
    In this dissertation, we study the theory, methods and techniques of catadioptric omnidirectional imaging and stereo imaging in order to obtain 3D information about the scene over a hemispherical field of view and to realize omnidirectional stereo imaging. The dissertation is composed of three parts: the first part comprises Chapters 2 to 4, in which the methods and techniques of catadioptric omnidirectional imaging are studied. The second part comprises Chapters 5 to 8, in which the fundamental theory and methods of binocular stereo vision are studied. The last part is Chapter 9, in which catadioptric omnidirectional imaging is combined with stereo imaging to obtain panoramic stereo information about the scene. The main results can be summarized as follows:
    1. Design methods for single-viewpoint catadioptric omnidirectional imaging systems are systematically established. Using these methods, hyperboloidal and paraboloidal catadioptric omnidirectional imaging systems have been designed and set up. The re-projection (inverse projection) formulas for these two kinds of systems are derived and used to obtain perspective images and cylindrical panoramic images from the original omnidirectional image (the single-viewpoint geometry and the cylindrical re-projection are sketched after the abstract).
    2. Design methods for non-single-viewpoint catadioptric omnidirectional imaging systems are systematically established. Using these methods, distortion-free catadioptric omnidirectional imaging systems for horizontal scenes and for cylindrical scenes have been designed and set up. The effects of lens distortion and system misalignment on the distortion-free system for horizontal scenes have been analyzed, and an improved mirror design method that eliminates the effect of lens distortion has been put forward (a numerical sketch of the prescribed-projection design idea follows the abstract).
    3. Stereo image matching based on SUSAN corner features has been studied. Two feature matching techniques, one measuring match support by the correlation score alone and one combining the correlation score with a continuity term, are compared with respect to computation cost and false-match probability, providing a fast and accurate way of obtaining the matched feature set needed for determining the epipolar geometry and rectifying stereo pairs (a correlation-matching sketch follows the abstract).
    4. The connection between the fundamental matrix and the epipolar geometry has been studied, and transition formulas have been established for moving from linear to nonlinear methods of determining the epipolar geometry (the standard epipolar relations are recalled after the abstract).
    5. A method is presented for rectifying stereo pairs without camera calibration. The method computes initial values of the projective, rotation and translation transformation matrices from the fundamental matrix and then optimizes these matrices using the coordinates of the corresponding points. It effectively avoids local minima in the optimization and does not depend excessively on the accuracy of the fundamental matrix, so it is a fast, high-precision rectification method (the rectification criterion is summarized after the abstract).
    6. A fast method for extracting dense disparity maps based on zero-mean normalized cross-correlation has been studied. A pyramid image structure, box filtering and a limited disparity search range are used to speed up the matching process, and bidirectional matching is adopted to remove ambiguous candidate matches, laying the theoretical and methodological foundation for a real-time binocular stereo vision system (a minimal implementation sketch follows the abstract).
    7. A method for realizing panoramic stereo vision with hyperboloidal catadioptric omnidirectional imaging systems is proposed. The range measurement formula is derived, an experimental setup has been built, panoramic stereo image pairs have been acquired, and dense disparity maps reflecting the depth of the scene have been extracted from these pairs (the generic triangulation relation is given after the abstract).
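The sketches below are generic illustrations of the techniques summarized in the abstract; they are reconstructions added to this record, not the dissertation's own formulas or code, and every parameter, function name and threshold in them is an assumption made for illustration.

For result 1: in one common parameterization, a two-sheeted hyperboloidal mirror

\[
\frac{z^2}{a^2} - \frac{x^2 + y^2}{b^2} = 1, \qquad c = \sqrt{a^2 + b^2},
\]

has its foci on the axis at z = ±c; placing the camera's center of projection at one focus makes the other focus act as a single effective viewpoint, and a paraboloidal mirror has the analogous property when used with an orthographic (telecentric) camera. Given a single viewpoint, the raw image can be re-projected at will; the minimal sketch below (NumPy only, nearest-neighbour sampling) unwraps an omnidirectional image into a cylindrical panorama by radial resampling, assuming the image center and usable radial range are known. Mapping panorama rows to true elevation angles would additionally use the mirror's projection model.

    import numpy as np

    def unwrap_to_cylinder(omni, cx, cy, r_min, r_max, out_w=1024, out_h=256):
        """Re-project an omnidirectional image onto a cylindrical panorama by
        sampling along rays from the image center (cx, cy); r_min and r_max
        bound the usable annulus and are assumed known from calibration."""
        h, w = omni.shape[:2]
        theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)   # column -> azimuth
        radius = np.linspace(r_max, r_min, out_h)                      # row -> radius (outer ring on top)
        rr, tt = np.meshgrid(radius, theta, indexing="ij")
        src_x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
        src_y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
        return omni[src_y, src_x]

    # Illustrative call with synthetic data; a real frame would be loaded instead.
    omni = (np.random.rand(480, 640) * 255).astype(np.uint8)
    pano = unwrap_to_cylinder(omni, cx=320.0, cy=240.0, r_min=60.0, r_max=230.0)
    print(pano.shape)   # (256, 1024)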
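For result 2, the distortion-free (non-single-viewpoint) mirrors are designed so that a chosen scene surface maps onto the image at constant scale. A minimal numerical sketch of that general prescribed-projection idea follows: each camera ray is required to land on a horizontal plane at a ground radius proportional to its image radius, and the mirror profile realizing this mapping is integrated from the law of reflection (the surface normal must bisect the incident and reflected rays). All numbers here (plane depth, scale factor, starting point) are assumed for illustration; the dissertation's actual design procedure and parameters differ.

    import numpy as np

    def design_mirror_profile(k=20.0, plane_z=-100.0, r0=2.0, z0=10.0,
                              r_end=12.0, steps=2000):
        """Integrate a rotationally symmetric mirror profile z(r), camera pinhole
        at the origin and optical axis along z, such that the camera ray through
        the mirror point (r, z) is reflected to the plane z = plane_z at radius
        R = k * (r / z), i.e. proportional to the image radius, so the plane is
        imaged at constant magnification.  Forward Euler integration."""
        rs = np.linspace(r0, r_end, steps)
        zs = np.empty_like(rs)
        zs[0] = z0
        for i in range(steps - 1):
            r, z = rs[i], zs[i]
            d_in = np.array([r, z]) / np.hypot(r, z)          # camera -> mirror ray
            R = k * r / z                                     # prescribed ground radius
            d_out = np.array([R - r, plane_z - z])
            d_out /= np.linalg.norm(d_out)                    # mirror -> scene ray
            n = d_in - d_out                                  # reflection law: normal is parallel to d_in - d_out
            n /= np.linalg.norm(n)
            t = np.array([-n[1], n[0]])                       # tangent is perpendicular to the normal
            if t[0] < 0:
                t = -t                                        # integrate towards increasing r
            zs[i + 1] = z + (rs[i + 1] - r) * t[1] / t[0]     # dz/dr = t_z / t_r
        return rs, zs

    rs, zs = design_mirror_profile()
    print(rs[-1], zs[-1])   # outer radius and height of the computed profile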
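For result 3, the core of correlation-based matching of detected corners can be sketched as below; the SUSAN detector itself and the continuity term of the second confidence measure are omitted, the images are grayscale NumPy arrays, and the corner lists, window size and threshold are assumed inputs.

    import numpy as np

    def zncc(a, b, eps=1e-9):
        """Zero-mean normalized cross-correlation of two equal-size float patches."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

    def match_corners(img_l, img_r, corners_l, corners_r, half=7, min_score=0.8):
        """Correlate windows around corner lists [(x, y), ...] of a stereo pair and
        keep only mutually-best pairs whose score exceeds min_score."""
        def patch(img, x, y):
            h, w = img.shape
            if half <= x < w - half and half <= y < h - half:
                return img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            return None

        if not corners_l or not corners_r:
            return []
        score = np.full((len(corners_l), len(corners_r)), -1.0)
        for i, (xl, yl) in enumerate(corners_l):
            pl = patch(img_l, xl, yl)
            if pl is None:
                continue
            for j, (xr, yr) in enumerate(corners_r):
                pr = patch(img_r, xr, yr)
                if pr is not None:
                    score[i, j] = zncc(pl, pr)

        matches = []
        for i in range(len(corners_l)):
            j = int(np.argmax(score[i]))
            if score[i, j] >= min_score and int(np.argmax(score[:, j])) == i:
                matches.append((corners_l[i], corners_r[j], float(score[i, j])))
        return matches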
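Result 4 builds on the standard epipolar relations, recalled here for completeness (these are textbook facts, not results of the dissertation):

\[
\mathbf{x}'^{\top} F\, \mathbf{x} = 0, \qquad
F = K'^{-\top} [\mathbf{t}]_{\times} R\, K^{-1}, \qquad
F\mathbf{e} = 0, \quad F^{\top}\mathbf{e}' = 0,
\]

where \(\mathbf{x} \leftrightarrow \mathbf{x}'\) are corresponding points in homogeneous coordinates, \(K, K'\) are the camera intrinsic matrices, \((R, \mathbf{t})\) is the relative motion, and \(\mathbf{e}, \mathbf{e}'\) are the epipoles. \(F\) has rank 2 and seven degrees of freedom, and \(\ell' = F\mathbf{x}\) is the epipolar line of \(\mathbf{x}\) in the second image; linear methods estimate \(F\) from the first relation, while nonlinear methods re-parameterize it (for example through the epipolar transformation) and minimize geometric distances to the epipolar lines.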
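For result 5, a standard way to state what rectification must achieve is: find homographies \(H, H'\) applied to the two images so that, for rectified points \(\hat{\mathbf{x}} = H\mathbf{x}\) and \(\hat{\mathbf{x}}' = H'\mathbf{x}'\),

\[
H'^{-\top} F\, H^{-1} \;\sim\; [\,(1,0,0)^{\top}]_{\times} =
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix},
\]

which is equivalent to corresponding points having equal rows, \(\hat{y} = \hat{y}'\). In the spirit of the method described in the abstract, the fundamental matrix supplies initial homographies (it fixes where the epipoles must be sent, namely to infinity along the row direction), and the remaining degrees of freedom are then refined by minimizing \(\sum_i (\hat{y}_i - \hat{y}'_i)^2\) over the matched points. This is one plausible reading of the abstract, not a verbatim statement of the dissertation's algorithm.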
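For result 6, the sketch below covers the box-filtered zero-mean normalized cross-correlation and the bidirectional (left-right) consistency check at a single pyramid level; window size, disparity range and tolerance are illustrative, and scipy's uniform_filter plays the role of the running box sums.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def zncc_disparity(left, right, max_disp=32, win=9, eps=1e-6):
        """Winner-take-all disparity from box-filtered ZNCC on a rectified pair;
        left[y, x] is assumed to correspond to right[y, x - d]."""
        left = np.asarray(left, np.float32)
        right = np.asarray(right, np.float32)
        box = lambda img: uniform_filter(img, size=win)          # local window means
        mu_l = box(left)
        var_l = box(left * left) - mu_l * mu_l
        h, w = left.shape
        cost = np.zeros((max_disp + 1, h, w), dtype=np.float32)
        for d in range(max_disp + 1):
            r_shift = np.zeros_like(right)
            r_shift[:, d:] = right[:, :w - d]                    # shift the right image by d pixels
            mu_r = box(r_shift)
            var_r = box(r_shift * r_shift) - mu_r * mu_r
            cov = box(left * r_shift) - mu_l * mu_r
            cost[d] = cov / np.sqrt(np.maximum(var_l * var_r, eps))
        return np.argmax(cost, axis=0).astype(np.int32)

    def left_right_check(disp_l, disp_r, tol=1):
        """Invalidate (set to -1) pixels whose left and right disparities disagree."""
        h, w = disp_l.shape
        xs = np.tile(np.arange(w), (h, 1))
        xr = np.clip(xs - disp_l, 0, w - 1)
        ok = np.abs(disp_l - disp_r[np.arange(h)[:, None], xr]) <= tol
        return np.where(ok, disp_l, -1)

    # Usage (illustrative): the right-referenced map is obtained by mirroring.
    # disp_l = zncc_disparity(left, right)
    # disp_r = zncc_disparity(right[:, ::-1], left[:, ::-1])[:, ::-1]
    # disp   = left_right_check(disp_l, disp_r)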
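For result 7, if the two panoramic images are taken with their effective viewpoints on a common vertical axis, separated by a baseline \(b\), the generic triangulation behind the range measurement is

\[
\tan\theta_1 = \frac{h}{\rho}, \qquad \tan\theta_2 = \frac{h - b}{\rho}
\;\;\Longrightarrow\;\;
\rho = \frac{b}{\tan\theta_1 - \tan\theta_2}, \qquad h = \rho \tan\theta_1,
\]

where \(\theta_1\) and \(\theta_2\) are the elevation angles at which the same scene point is seen from the lower and the upper viewpoint, and \(\rho\) and \(h\) are the point's horizontal range and height relative to the lower viewpoint. In such a coaxial arrangement the azimuth of a point is the same in both panoramas, so correspondence search reduces to a one-dimensional search along panorama columns, which is why the dense matching of result 6 carries over directly. The dissertation's own range formula additionally folds in the hyperboloidal mirror geometry; the relation above is only the underlying triangulation.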
