Research on Digital Modeling and Rendering Techniques for Facial Makeup in Peking Opera
Abstract
As countries around the world attach growing importance to safeguarding traditional intangible cultural heritage, the digital modeling of such heritage has drawn increasing attention from scholars at home and abroad. Peking Opera, honored as the quintessence of Chinese culture, is an important representative of Chinese tradition, and its distinctive facial makeup art is a gem of China's cultural treasury, with high aesthetic, cultural, research, and application value. This thesis takes traditional Peking Opera facial makeup as its subject. By analyzing and summarizing the patterns, conventions, colors, and artistic features of facial makeup art, it models and processes facial makeups digitally and uses computer graphics techniques to synthesize makeup images and expression animations. Research on the digital modeling and rendering of Peking Opera facial makeup helps us protect and pass on this national treasure more effectively, and to some extent also advances computer technology.
     This thesis studies digital preservation techniques for Peking Opera facial makeup. Rather than simply recording and presenting makeup information digitally, it focuses on modeling facial makeups with computer techniques and on designing and implementing a computer-aided makeup design and presentation system to support their preservation and study. The main problems investigated are: how to build a makeup pattern database; how to synthesize vectorized facial makeups and expressive makeups; how to view makeup expression animation from multiple angles; and how to render 3D facial makeups photorealistically. The main contributions of this thesis are as follows:
     1. A facial makeup pattern bank and a computer-aided makeup synthesis system are established. We first analyze the drawing process of Peking Opera facial makeup and the shape characteristics of its basic patterns, then build a vectorized pattern database organized in layers. During synthesis, the user selects the desired patterns layer by layer in drawing order and combines them according to creative needs to obtain a complete makeup design. The system also provides a set of deformation tools for editing local patterns, so that more varied makeup designs can be generated.
     2. A layer-driven method for generating makeup expressions from vector objects is proposed. Makeup patterns and expression motions are handled separately: we first build the vectorized pattern database, then decompose makeup expressions into 40 action units following the Facial Action Coding System (FACS). During synthesis, the user selects patterns layer by layer in drawing order to obtain a makeup design; expressions are then produced by deforming the patterns with the free-form deformation (FFD) technique.
     3. A hybrid-driven 2.5D method for makeup expression animation is proposed. Using FACS, makeup motion is decomposed into four parts: basic expressions, eye motion, mouth shapes, and head rotation, each driven by an animation technique suited to its own motion. To generate an animation, the user only needs to choose a makeup design and a soundtrack and set the expressions at keyframes; the system then produces a vivid makeup animation.
     4. Two photorealistic rendering methods suited to Peking Opera facial makeup are proposed: offline and real-time. After comparing current techniques for photorealistic rendering of human facial skin, we identify offline and real-time approaches appropriate for facial makeup. In the offline method, a high-resolution normal map first enhances skin detail, and subsurface scattering computed with a BSSRDF lighting model is combined with diffuse and specular reflection to render a realistic makeup offline. In the real-time method, d'Eon's real-time skin rendering technique is adopted, with the weights of its Gaussian mixture approximating scattering modified to reproduce the look of real Peking Opera makeup.
As countries around the world attach increasing value to the protection of traditional intangible cultural heritage, its digital modeling has drawn the attention of researchers at home and abroad. Peking Opera is an important representative of traditional Chinese culture, and its unique facial makeup art is the essence of our cultural treasures, with very high aesthetic, cultural, research, and application value. This thesis focuses on traditional facial makeups in Peking Opera. We first analyze the patterns, colors, and artistic features of traditional facial makeups and model and process them on the computer; we then use graphics techniques to synthesize digital makeup images and animations and to render photorealistic 3D facial makeups. Digital modeling of Peking Opera facial makeup can effectively protect and pass on this priceless national treasure and, to a certain extent, also promotes the development of digital modeling technologies.
     This thesis studies digital modeling and rendering technologies for facial makeup in Peking Opera. The problems discussed include: how to build a facial makeup synthesis system; how to control a facial makeup's expression; how to observe expression animation from multiple viewpoints; and how to render photorealistic 3D facial makeups. The main contributions of this thesis are as follows:
     1. A computer-aided design system for facial makeup synthesis in Peking Opera is established. We first analyzed the drawing process of facial makeups and the characteristics of the patterns used in them, then constructed a pattern bank organized in layers corresponding to the drawing process. During synthesis, users pick patterns from the bank and compose them into new facial makeups. The system also provides a series of tools for editing and modifying local patterns.
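The layered compositing described above can be sketched in a few lines. This is a minimal illustration only: the `Pattern` and `FacialMakeup` names, the three-layer example, and the list-of-points outline representation are assumptions for the sketch, not the thesis's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    layer: int      # drawing-order layer (0 = base face colour)
    outline: list   # vector outline as (x, y) control points

@dataclass
class FacialMakeup:
    layers: dict = field(default_factory=dict)

    def add(self, pattern):
        # patterns are grouped by the layer they belong to
        self.layers.setdefault(pattern.layer, []).append(pattern)

    def render_order(self):
        # painter's algorithm: lower layers are drawn first,
        # mirroring the traditional drawing order of the makeup
        ordered = []
        for layer in sorted(self.layers):
            ordered.extend(p.name for p in self.layers[layer])
        return ordered

makeup = FacialMakeup()
makeup.add(Pattern("eye_pattern", 2, [(0, 0)]))
makeup.add(Pattern("base_face", 0, [(0, 0)]))
makeup.add(Pattern("brow_pattern", 1, [(0, 0)]))
print(makeup.render_order())  # ['base_face', 'brow_pattern', 'eye_pattern']
```

Selecting patterns "layer by layer in drawing order" then amounts to filling such a structure from the pattern bank before rasterizing each layer in turn.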
     2. A layer-driven model of facial makeup expression based on vector objects is proposed. Makeup patterns and expression actions are handled separately. We first constructed a vectorized pattern database, then decomposed facial expression into 40 local action units following the Facial Action Coding System (FACS). During synthesis, the user obtains a facial makeup simply by selecting the desired patterns in layer drawing order; expressions are then achieved by applying the free-form deformation (FFD) technique to the makeup patterns. Users can also control local expressions through a set of parameters.
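The FFD step can be illustrated with a minimal 2D Bézier-lattice deformer in the spirit of Sederberg and Parry's formulation: control points of a lattice are moved, and every pattern point inside the lattice follows via a Bernstein-weighted blend. The degree-2 lattice and the `ffd_2d` helper below are simplifying assumptions for the sketch, not the thesis's implementation.

```python
from math import comb

def bernstein(n, i, t):
    # i-th Bernstein polynomial of degree n evaluated at t
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd_2d(point, grid):
    """Deform a point in the unit square by a Bezier control lattice.

    grid[i][j] is the (x, y) position of control point (i, j) in an
    (m+1) x (n+1) lattice; the deformed point is the tensor-product
    Bernstein blend of all control points.
    """
    m, n = len(grid) - 1, len(grid[0]) - 1
    s, t = point
    x = y = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            w = bernstein(m, i, s) * bernstein(n, j, t)
            x += w * grid[i][j][0]
            y += w * grid[i][j][1]
    return (x, y)

# identity lattice: 3x3 control points at their parametric positions
grid = [[(i / 2, j / 2) for j in range(3)] for i in range(3)]
assert ffd_2d((0.5, 0.5), grid) == (0.5, 0.5)  # linear precision

# raising the centre control point pulls nearby pattern points upward,
# e.g. lifting the corner of an eyebrow pattern for an expression
grid[1][1] = (0.5, 0.7)
```

An action unit then reduces to a stored displacement of a few lattice control points, scaled by the user's intensity parameter.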
     3. A hybrid-driven 2.5D model for expression animation of facial makeup in Peking Opera is proposed. We first decomposed makeup motion into four parts according to the Facial Action Coding System: basic expressions, eye motion, mouth motion, and head rotation. Each part is controlled by a different animation technique. To synthesize an animation, users only need to select the desired makeup and soundtrack and set the expressions at keyframes; the system then produces a vivid makeup animation.
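Between the user-set keyframes, in-between frames can be produced by interpolating action-unit intensities over time. The sketch below assumes a simple linear blend and a dictionary of named intensities; the AU name and the `interpolate_aus` helper are illustrative, not the thesis's actual interface.

```python
def interpolate_aus(keyframes, t):
    """Linearly interpolate action-unit intensities between keyframes.

    keyframes: time-sorted list of (time, {au_name: intensity}) pairs.
    Returns the blended intensity dictionary at time t, clamping to
    the first/last keyframe outside the animated range.
    """
    if t <= keyframes[0][0]:
        return dict(keyframes[0][1])
    if t >= keyframes[-1][0]:
        return dict(keyframes[-1][1])
    for (t0, a), (t1, b) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            names = set(a) | set(b)
            # AUs absent from a keyframe default to zero intensity
            return {n: (1 - u) * a.get(n, 0.0) + u * b.get(n, 0.0)
                    for n in names}

keys = [(0.0, {"AU12_smile": 0.0}), (1.0, {"AU12_smile": 1.0})]
frame = interpolate_aus(keys, 0.25)
print(frame)  # {'AU12_smile': 0.25}
```

Each of the four motion parts (expression, eyes, mouth, head) can run its own such track, which is what makes the driving "hybrid": the tracks are blended only at the final deformation stage.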
     4. Two photorealistic rendering methods for facial makeup are proposed. By analyzing current photorealistic rendering methods for human facial skin, we identified offline and real-time methods suitable for facial makeup. In the offline method, we first use a high-resolution normal map to enhance skin detail, then combine a BSSRDF skin model with a physically based highlight model to render realistic facial makeup. In the real-time method, based on d'Eon's real-time skin model, we modify the weights of the Gaussian mixture to realistically render the subsurface scattering of real facial makeup.
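The Gaussian-mixture reweighting idea can be sketched numerically: the radial diffusion profile is approximated as a weighted sum of Gaussians, and shifting weight toward the narrow Gaussians makes light scatter less far, which is the visual difference between bare skin and opaque stage paint. The weights and variances below are illustrative placeholders, not d'Eon's fitted skin parameters nor the thesis's makeup parameters.

```python
from math import exp, pi

def gaussian(variance, r):
    # normalised 2D Gaussian, the basis used to fit diffusion profiles
    return exp(-r * r / (2.0 * variance)) / (2.0 * pi * variance)

def diffusion_profile(r, mixture):
    """Radial scattering profile R(r) as a weighted Gaussian sum.

    mixture: list of (weight, variance) pairs; re-weighting this
    mixture is the knob used to tune the makeup's look.
    """
    return sum(w * gaussian(v, r) for w, v in mixture)

skin = [(0.6, 0.0064), (0.3, 0.2), (0.1, 1.5)]      # broad tail: soft skin glow
makeup = [(0.85, 0.0064), (0.12, 0.2), (0.03, 1.5)]  # narrow: flat, opaque paint

# at 1 unit from the entry point, the makeup profile carries less light,
# so painted regions look flatter and more saturated than bare skin
assert diffusion_profile(1.0, makeup) < diffusion_profile(1.0, skin)
```

In a real-time renderer each Gaussian corresponds to one blur pass over the irradiance texture, so changing only the mixture weights leaves the pass structure (and the frame cost) untouched.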
