Control of a Robotic Dexterous Hand Based on the LAP Method
Abstract
This thesis discusses the use of computer vision to detect human hand gestures and, under a task-based modeling approach, to control the operation of a multi-fingered mechanical dexterous hand.
     Mechanical hands, including dexterous hands, are the most important tools with which a robot manipulates its environment, and their control is the key to a robot's ability to complete a variety of tasks adaptively. A multi-fingered dexterous hand with multiple joints and multiple degrees of freedom can raise a robot's manipulation capability and perform complex, dexterous operations.
     Traditionally, mechanical hands have been controlled by off-line programming or teach-in methods, so the hand performs only work specified in advance; this mode of operation reduces the hand's adaptability to unstructured environments. A robot system equipped with visual sensors captures images of the object to be grasped, and through image processing and analysis obtains the object's position, orientation and shape parameters, so that the robot can automatically identify the object and determine the pose of the gripper and the grasping method.
     However, at the current level of technology it is impossible to accomplish complex tasks by direct sensor-based control of the hand alone. Even in so-called structured environments, the hand still has to rely on various forms of "learning" to accumulate knowledge for handling the situations that may arise. In unstructured environments, on the other hand, it is unrealistic to expect a robot to complete all kinds of tasks fully autonomously; human-machine cooperation is usually required, with a human performing the necessary monitoring and decision-making and intervening in the hand's motion through human-machine interaction. The mouse and keyboard are traditionally the most common human-machine interfaces.
     The appearance of a new human-machine interface device, the data glove, has greatly increased the adaptability of mechanical hands, especially multi-fingered, multi-joint dexterous hands. However, most data gloves use command gestures as the control mode: the operator must be trained before using the glove, the glove must be calibrated, and the number of command gestures is limited. Moreover, the data glove is a contact sensor; it is easily damaged, uncomfortable, and relatively expensive. In addition, a glove wired to the computer restricts the operator's movement to some extent, and once the glove starts working it continuously senses the hand's motion, interfering with the operator's interaction with other people or devices.
     Over the past few years, computer vision has opened up an application area known as "looking at people" (LAP), in which computer vision serves as the human-machine interface: by observing (monitoring) and analyzing the expressions or postures of different parts of the human body, useful control information is extracted. Because the sensing is contactless, it feels free and natural to the operator while providing the machine with rich information, and it has a broad range of applications. Within LAP, hand-gesture analysis has mainly studied the interpretation of symbolic languages expressed by hand gestures (such as sign-language characters).
     The research in this thesis applies the LAP approach to the analysis of human hand gestures and uses the result to control a multi-fingered, multi-joint mechanical dexterous hand. This is a cross-disciplinary study involving computer stereo vision, image processing and robotics.
     A distinguishing feature of the system is that control is based on task modeling rather than on predefined command gestures. The operator observes the surface shape and orientation of the object to be manipulated and decides on an action (the grasping gesture); cameras "observe" the operator's hand from different angles, and through image analysis and information fusion the hand's posture in 3D space is obtained. The structural similarities and differences between the human hand and the mechanical hand are then exploited to convert the human hand posture into an executable posture for the mechanical hand.
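As a rough, hypothetical illustration of this conversion step (the abstract does not give the actual mapping), the sketch below maps measured human fingertip positions into the gripper's workspace with a uniform scale and a rigid transform. The function name, the scale factor and the shared wrist frame are all assumptions, not the thesis's actual code.

```python
import numpy as np

def map_fingertips(human_tips, scale, R, t):
    """Map human fingertip positions (N x 3, wrist frame) to gripper
    fingertip targets: a uniform scale followed by a rigid transform.
    scale, R (3x3 rotation) and t (3-vector) stand in for the hand/gripper
    structural comparison described above; they are assumptions."""
    tips = np.asarray(human_tips, dtype=float)
    return (scale * tips) @ R.T + t

# Example: identity orientation, 1.4x size ratio between gripper and hand.
human_tips = np.array([[0.05, 0.02, 0.10],    # thumb tip (metres)
                       [0.02, 0.00, 0.15],    # index tip
                       [-0.01, 0.00, 0.14]])  # middle tip
print(map_fingertips(human_tips, 1.4, np.eye(3), np.zeros(3)))
```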
     The main work completed in the research is as follows:
     (1) Based on the physiological structure of the human hand, a search-and-ordering method was designed for feature points on a non-rigid frame undergoing small-amplitude motion. It can quickly and accurately re-order the joint points (including the wrist and fingertips) in a hand-gesture image captured by a camera, with the hand in an arbitrary spatial pose, according to the finger each point belongs to; these points, during image acquisition, …
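A minimal sketch of the search-and-reorder idea in (1), under assumptions the abstract does not state (2D points, finger-root points already identified, a fixed number of joints per finger): each finger's chain is grown by greedy nearest-neighbor selection, so unlabeled joints end up grouped by their host finger and ordered base-to-tip. All names are hypothetical.

```python
import numpy as np

def order_points_by_finger(finger_roots, joints, joints_per_finger=2):
    """Group unlabeled joint points by finger and order them base-to-tip
    by greedy nearest-neighbor chaining from each finger-root point.
    Assumes roots are known and fingers do not cross -- simplifications
    relative to the thesis's method, which handles arbitrary hand poses."""
    remaining = [np.asarray(p, dtype=float) for p in joints]
    fingers = []
    for root in finger_roots:
        chain = [np.asarray(root, dtype=float)]
        for _ in range(joints_per_finger):
            if not remaining:
                break
            dists = [np.linalg.norm(p - chain[-1]) for p in remaining]
            chain.append(remaining.pop(int(np.argmin(dists))))
        fingers.append(chain)
    return fingers

# Toy 2D example: two finger roots and four unlabeled joint points.
roots = [(1.0, 2.0), (-1.0, 2.0)]
joints = [(1.2, 4.0), (-1.1, 3.0), (1.1, 3.0), (-1.2, 4.0)]
for i, chain in enumerate(order_points_by_finger(roots, joints)):
    print(f"finger {i}:", [tuple(p) for p in chain])
```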
Using free-hand gestures for the remote control of objects is an effective way of interacting, because hand gestures are a natural form of communication and are easy to learn. A single gesture can specify both a command and its parameters through the positions and movements of the hand and fingers, which gives it high expressive potential, and using the bare hand as an input device eliminates the need for intermediate transducers.
    Approaches to hand-gesture recognition can be classified as sensor-based or vision-based. Sensor-based approaches recognize the gesture with sensors physically attached to the hand; one such device is the data glove. A mechanical data glove wired to the computer spoils freedom of movement and thus reduces the operator's autonomy; in addition, the glove is expensive and easily damaged, and the number of command gestures is limited. Vision-based approaches use optical sensors to observe the hand and combine image processing and analysis technologies to perform the recognition. Studies of hand-gesture analysis in vision-based systems have focused mainly on the interpretation of sign language. The present research deals with the recognition of steady postures rather than dynamic gestures, because a system that interprets gestures and translates them into a sequence of commands must segment the continuous stream of captured motion into discrete lexical entities, a process that is somewhat artificial and necessarily approximate.
    The research introduced in this thesis applies "looking at people" (LAP), a recent application area of computer vision, to the action control of a dexterous gripper with 3 fingers and 9 DOF. Instead of a data glove, the system uses a pair of CCD cameras together with image processing and analysis to detect the operator's hand gesture. The camera pair observes the operator's hand from different directions and captures a pair of gesture images; image processing and analysis segment the key points on the hand and establish the correspondence between them, and stereo-vision theory is used to calculate the coordinates of the key points in 3D space. The physiological structure of the human hand was compared with the mechanical structure of the dexterous gripper to find a suitable way of transforming the hand-gesture information into executable parameters for the gripper.
    This is a cross-disciplinary study involving robotics, computer stereo vision, and image processing and analysis. The main work done in the research is the following.
    A set of robust algorithms was designed for detecting the positions of the key points on the hand and their correspondences. It suits a non-rigid structure with small movements, such as the human hand, and can find all the key points on a hand in an arbitrary orientation and rearrange them according to their host finger and their position on the finger.
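For reference, here is a minimal sketch of the textbook alternative to the Lumigraph route described below: linear (DLT) two-view triangulation, which does assume known 3x4 projection matrices. The thesis's own contribution is precisely to avoid that calibration step, so this is only the standard baseline, not the thesis's method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coords (u, v).
    Note: the thesis's Lumigraph-based method avoids computing projection
    matrices; this is the conventional approach shown for comparison."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the 3D point
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean

# Toy example: two axis-aligned cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 m on x
X = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, x1, x2))    # ~ [0. 0. 5.]
```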
    On the basis of the Lumigraph, a recent theory in computer vision, an algorithm was developed for calculating the 3D coordinates of the key points above. Solid-geometry calculations can thus yield the 3D information without calibrating the cameras or computing the projection matrix, a complex matrix-calculation process; connecting the space points reconstructs the hand gesture in 3D.
    The research uses task-based modeling, rather than command gestures, to control the dexterous manipulator. The operator observes the position and orientation of the object, designs the grasping gesture for the mechanical gripper, and demonstrates it with his own hand. As described above, the system computes the gesture information, the operator's fingertip positions are mapped to fingertip positions of the dexterous gripper, and the inverse kinematic equations derived from the motion equations are used to calculate the joint angles of the 3 fingers.
    The vision-based grasping control proposed here gives the dexterous gripper a direct grasping gesture instead of the command sequences of a data-glove system; the fingertip-mapping algorithm provides precise fingertip positions during the grasping action and can control grasps of objects with uncommon surfaces. Doing away with a glove wired to the computer leaves the operator freer and more comfortable, and any hand motion outside the area monitored by the CCD cameras does not affect the controlled device. The number of cameras in the system is not limited, and the cameras can be grouped into pairs arbitrarily, without regard to their relative positions; this makes the method well suited to multi-camera systems and able to resolve the occlusion problems that occur in human-machine communication. The method is also expected to serve other work, such as modeling a virtual hand in a virtual environment, tracking the motion of a human limb, and fast 3D reconstruction of an object.
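The inverse kinematics itself is not given in the abstract. Below is a minimal sketch under a simplifying assumption: a planar 2-link reduction of one finger (each finger of a BH-3-style gripper actually has 3 joints), solved in closed form from a mapped fingertip target.

```python
import math

def ik_planar_2link(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a planar 2-link finger:
    given a fingertip target (x, y) in the finger's base frame and link
    lengths l1, l2, return joint angles (theta1, theta2) in radians.
    The 2-link planar model is an assumption; the real finger geometry
    is not given in this abstract."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    s2 = math.sqrt(1.0 - c2 * c2)
    if not elbow_up:
        s2 = -s2
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

# Check by forward kinematics: the tip should land back on the target.
t1, t2 = ik_planar_2link(0.05, 0.03, 0.04, 0.03)
fx = 0.04 * math.cos(t1) + 0.03 * math.cos(t1 + t2)
fy = 0.04 * math.sin(t1) + 0.03 * math.sin(t1 + t2)
print(round(fx, 4), round(fy, 4))  # -> 0.05 0.03
```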
