Research on Weathered Appearance Simulation and Illumination Consistency in Mixed Reality
Abstract
With the rapid development of computing technology, the question of how to seamlessly blend the real world with computer-simulated virtual worlds has attracted wide attention. After years of research, important progress has been made on geometric consistency in mixed reality: camera parameters and even the 3D geometry of a scene can now be recovered in real time, so that virtual objects can be correctly registered into captured video. Current research therefore focuses on illumination consistency and on consistent interaction between virtual and real objects, with the goals of further improving the visual consistency of mixed reality and of providing more convenient and efficient interaction interfaces.
Against this background, this thesis studies weathered appearance simulation and illumination consistency in mixed reality. Solutions are proposed for the weathering simulation model, interactive editing of weathered materials, and the lighting model and its registration, which together yield high-quality virtual-real fusion. Our main contributions are as follows:
· We present a general weathering simulation model that produces a wide variety of weathering effects on virtual objects. The technique introduces the concept of aging-inducing γ-ton particles: the user specifies simple weathering transport rules, and γ-tons are then traced over the virtual object to produce weathering effects tailored to both the object's geometry and the user's intent. In addition, the method captures effects that were beyond previous techniques, such as stain bleeding, multiple interacting weathering processes, and large-scale geometric change, greatly extending the range of weathering phenomena that can be simulated.
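To make the γ-ton transport process concrete, the following is a minimal illustrative sketch in Python/NumPy under simplified assumptions: the surface is a set of surfels, a single settle-or-bounce rule stands in for the user-defined transport rules, and names such as Surfel and trace_gamma_tons are hypothetical rather than the thesis implementation.

# Minimal gamma-ton tracing sketch (illustrative only; the surfel radius,
# settle probability and carrier types are assumptions, not the thesis code).
import numpy as np

rng = np.random.default_rng(0)

class Surfel:
    """A surface sample that accumulates aging carriers (e.g. water, dirt)."""
    def __init__(self, position, normal):
        self.position = np.asarray(position, dtype=float)
        self.normal = np.asarray(normal, dtype=float)
        self.carriers = {"water": 0.0, "dirt": 0.0}      # one entry of the weathering map

def nearest_hit(origin, direction, surfels):
    """Crude ray query: closest surfel (treated as a small sphere) in front of the ray."""
    best, best_t = None, np.inf
    for s in surfels:
        t = np.dot(s.position - origin, direction)
        if t <= 0:
            continue
        if np.linalg.norm(origin + t * direction - s.position) < 0.05 and t < best_t:
            best, best_t = s, t
    return best

def trace_gamma_tons(surfels, source, n_tons=2000, settle_prob=0.6, max_bounces=4):
    """Emit gamma-tons from `source`; at each hit either settle (deposit carriers)
    or bounce on in a perturbed mirror direction, per a simple transport rule."""
    for _ in range(n_tons):
        origin = np.asarray(source, dtype=float)
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        for _ in range(max_bounces):
            hit = nearest_hit(origin, direction, surfels)
            if hit is None:
                break
            if rng.random() < settle_prob:               # the ton settles and ages the surface
                hit.carriers["water"] += 1.0
                hit.carriers["dirt"] += 0.3
                break
            direction = direction - 2 * np.dot(direction, hit.normal) * hit.normal
            direction += 0.3 * rng.normal(size=3)        # roughen the bounce direction
            direction /= np.linalg.norm(direction)
            origin = hit.position + 1e-3 * direction
    return surfels   # accumulated carrier amounts drive weathered-texture blending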
· We further propose an interactive weathering framework for captured video, which lets users edit video objects and add a variety of realistic weathering effects to them. After the depth maps of the input video are recovered, a new depth-based interactive video object segmentation method extracts the object of interest while preserving temporal coherence. A new point cloud sampling algorithm based on accurate depth estimation then samples the object's point cloud from the recovered depth maps, rejecting outliers and removing redundant points, and the object's 3D geometry is reconstructed from it. γ-ton tracing is performed on the reconstructed geometry to obtain a weathering map that modifies the appearance of the video object. To remedy the lack of interactivity in the original γ-ton tracing technique, two interactive weathering tools let the user guide where weathering occurs and tune its speed, making weathering editing more flexible and less laborious.
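As an illustration of the point-sampling step, the sketch below lifts masked, confidently estimated depth pixels of each frame into world space and keeps at most one point per voxel to control redundancy; the confidence test, voxel size and function names are assumptions, not the thesis algorithm.

# Sampling a video object's point cloud from per-frame depth maps (illustrative).
import numpy as np

def backproject(depth, mask, K, cam_to_world):
    """Lift masked pixels of one depth map to world-space 3D points."""
    ys, xs = np.nonzero(mask & (depth > 0))
    z = depth[ys, xs]
    x = (xs - K[0, 2]) * z / K[0, 0]
    y = (ys - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (pts_cam @ cam_to_world.T)[:, :3]

def sample_point_cloud(frames, voxel=0.01, min_views=2):
    """frames: list of dicts with 'depth', 'mask', 'conf', 'K', 'cam_to_world'.
    Pixels with low depth confidence are dropped (outlier rejection); at most one
    point is kept per voxel (redundancy control)."""
    voxels = {}
    for f in frames:
        reliable = f["mask"] & (f["conf"] >= min_views)   # e.g. number of frames agreeing on the depth
        pts = backproject(f["depth"], reliable, f["K"], f["cam_to_world"])
        for p in pts:
            key = tuple(np.floor(p / voxel).astype(int))
            voxels.setdefault(key, p)                     # first sample wins inside a voxel
    return np.array(list(voxels.values()))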
· We propose an environment map alignment method based on automatic feature matching. Affine-SIFT matches features between the environment map and the captured scene, and RANSAC removes outlier matches; the 3D positions of the matched pairs are then recovered by a structure-from-motion camera calibration algorithm, which yields the correspondence between the environment map and the real scene and registers the full lighting information into the scene, guaranteeing illumination consistency. On top of this, we build a high-quality real-time mixed reality system that uses keyframe-based camera tracking to register and edit virtual objects in real time. Importance sampling makes effective use of the aligned environment lighting, and shadow mapping renders shadows, achieving high-quality virtual-real fusion in real time.
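The matching-and-outlier-removal idea behind the alignment can be sketched as follows; plain SIFT from OpenCV stands in for Affine-SIFT, and the ratio-test and RANSAC thresholds are assumed values, so this is only an illustration, not the system's code.

# Feature matching between environment map and scene frame, with RANSAC filtering.
import cv2
import numpy as np

def match_env_map_to_frame(env_img, frame_img, ratio=0.75):
    """Match features between the environment map and a scene frame, then keep
    only RANSAC inliers; the survivors are later triangulated by structure-from-motion."""
    sift = cv2.SIFT_create()                       # stand-in for Affine-SIFT
    k1, d1 = sift.detectAndCompute(env_img, None)
    k2, d2 = sift.detectAndCompute(frame_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):     # Lowe's ratio test
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix discards outlier correspondences
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0, 0.99)
    if F is None:
        return p1[:0], p2[:0]
    inliers = mask.ravel().astype(bool)
    return p1[inliers], p2[inliers]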
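During the rendering pass, the system draws light samples from the aligned environment lighting. A minimal sketch of luminance-based importance sampling over a latitude-longitude HDR map follows; the y-up convention, sample count and function name are assumptions rather than the thesis renderer.

# Importance-sampling directional lights from an HDR environment map (illustrative).
import numpy as np

def sample_env_lights(env, n_samples=64, rng=None):
    """env: HxWx3 linear-radiance array. Returns directions, radiance and pdf values;
    the renderer folds the pdf and per-pixel solid angle into its Monte Carlo weights."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = env.shape
    lum = 0.2126 * env[..., 0] + 0.7152 * env[..., 1] + 0.0722 * env[..., 2]
    theta = (np.arange(h) + 0.5) / h * np.pi                  # polar angle of each row
    pdf = lum * np.sin(theta)[:, None]                        # weight by pixel solid angle
    pdf = pdf.ravel() / pdf.sum()
    idx = rng.choice(pdf.size, size=n_samples, p=pdf)         # bright regions are sampled more often
    rows, cols = np.unravel_index(idx, (h, w))
    th = (rows + 0.5) / h * np.pi
    ph = (cols + 0.5) / w * 2.0 * np.pi
    dirs = np.stack([np.sin(th) * np.cos(ph), np.cos(th), np.sin(th) * np.sin(ph)], axis=1)
    return dirs, env[rows, cols], pdf[idx]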
