Research on Biological Vision-Guided Movement Mechanisms and Robotic Hand-Eye Coordination
Abstract
The key difficulty in controlling a robot's motion from visual information lies in the nonlinear relationship between the vision space and the robot's motion space. Vision-based robot control systems designed to date cannot match humans in precision, stability, or robustness. Humans (and primates in general) decouple this nonlinear visual-to-motor relationship very effectively through internal motion-control mechanisms, and do so with excellent robustness. Many researchers have therefore proposed biologically inspired robotic hand-eye coordination systems and carried out pioneering work in this direction.
First, research on vision-guided motion-control mechanisms in neurobiology and human kinesiology is analyzed, and internal mechanisms that robotics can borrow are distilled from it. Based on these mechanisms, a biologically inspired robotic hand-eye coordination control system is constructed. The system accounts for both the invariant and the variable characteristics of biological vision-guided movement, introducing internal-model-learning-based visual feedforward control and binocular-disparity-based visual feedback control into the robot control system.
For real-time tracking and perception of the robot's own body, an improved CONDENSATION tracking algorithm based on geometric shape-feature templates is proposed. For target objects in the external world, their geometric shapes are described with B-spline curves, and a corresponding CONDENSATION tracking algorithm is proposed. Together, these two recognition and tracking strategies allow the robot to perceive its own body in real time while distinguishing it from the external environment.
Imitating the internal visual-motor mapping learning mechanism of biological visual feedforward control, a learning algorithm combining a BP neural network with Sarsa(λ) reinforcement learning is proposed and applied to the robot's visual-motor mapping model. The robot's posture deviation angle and visual image features serve as the learning inputs, and the joint angular velocity serves as the output; this angular velocity can be interpreted as the robot's visual feedforward control command. Simulation experiments show that this algorithm based on a BP neural network and reinforcement learning effectively approximates the visual-motor mapping.
To address the degree-of-freedom and prior-knowledge limitations of traditional binocular-disparity-to-zero control, a visual feedback control method based on global contour features and disparity-to-zero control is proposed. An error function composed of the binocular disparity angle error and the approach direction angle error is constructed, from which an extended disparity-to-zero visual feedback control is derived. This extended controller largely resolves the degree-of-freedom limitation of the traditional approach. Target-following and approach experiments show that it drives the binocular disparity stably to zero, achieving accurate alignment and positioning.
Finally, the autonomous hand-eye coordination robot is applied to remote teleoperation, forming a supervisory-control teleoperation system and a dynamic-autonomy teleoperation system with an adjustable level of autonomy. For efficient, fluent interaction with the autonomous robot, an interaction scheme combining a natural-image interface with a tele-autonomy programming module is proposed. It links 'human-computer-autonomous robot': the natural-image interface provides the channel for target setting and definition, while the tele-autonomy programming module provides the channel for 'computer-autonomous robot' interaction. Experiments show that this teleoperation system both exploits the human operator's experience in complex work environments and makes full use of the robot's autonomy.
The key difficulty of vision-based robot control lies in the nonlinear relationship between the vision space and the robot's motion space. Vision-based robot control systems designed to date cannot match human beings in precision, stability, or robustness, whereas the internal control mechanisms of humans (primates) decouple the nonlinear visual-to-motor relationship with high performance and excellent robustness. Many researchers have therefore carried out pioneering work and proposed a variety of biologically inspired robotic hand-eye coordination systems.
Firstly, the research on vision-guided motion control in neurobiology and human movement science is analyzed, and mechanisms suitable for robotic control are abstracted from it. A framework for biologically inspired robotic hand-eye coordination is proposed on the basis of these mechanisms; the framework considers both the invariant and the variable features inherent in the biological internal model, and integrates internal-model-learning-based visual feedforward control with binocular-disparity-based visual feedback control.
For the problem of real-time tracking and perception of the robot's own body, an improved CONDENSATION tracking algorithm based on geometric shape templates is proposed. For external objects, the object shape is described by B-spline curves, and a corresponding improved CONDENSATION tracking algorithm is proposed. With these two recognition and tracking methods, the robot gains the ability to perceive its own body in real time and to distinguish it from the external environment.
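The core CONDENSATION idea behind both trackers — propagating a conditional density by factored sampling (resample, predict with diffusion, re-weight by the observation likelihood) — can be illustrated with a minimal one-dimensional particle filter. The random-walk motion model, Gaussian likelihood, and all numeric values below are illustrative assumptions, not the thesis's shape-template formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, measure, motion_std=0.5):
    """One CONDENSATION iteration: resample, predict with diffusion,
    then re-weight each hypothesis by the measurement likelihood."""
    n = len(particles)
    # Factored sampling: resample in proportion to the previous weights
    particles = particles[rng.choice(n, size=n, p=weights)]
    # Predict: drift-free dynamics plus Gaussian diffusion
    particles = particles + rng.normal(0.0, motion_std, size=n)
    # Measure: likelihood of each hypothesis under the observation model
    weights = measure(particles)
    weights /= weights.sum()
    return particles, weights

# Hypothetical observation model: the target's true 1-D state is 3.0 and
# the likelihood falls off with distance from the observed template match.
observation = 3.0
measure = lambda x: np.exp(-0.5 * ((x - observation) / 0.8) ** 2) + 1e-12

particles = rng.uniform(-10.0, 10.0, size=500)
weights = np.full(500, 1.0 / 500)
for _ in range(10):
    particles, weights = condensation_step(particles, weights, measure)

estimate = float(np.sum(particles * weights))  # posterior mean state
```

After a few iterations the weighted particle cloud concentrates around the observed state; the thesis's versions replace the scalar state with B-spline shape-template parameters and the Gaussian likelihood with an image-based contour measurement.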
A learning algorithm combining a BP neural network with reinforcement learning is proposed, mimicking biological visual feedforward control as realized by internal-model learning of the visual-motor mapping. The algorithm is applied to learning the robot's visual-motor model: the inputs are the robot's posture deviation angle and visual image features, and the output is the joint angular velocity, which can be interpreted as the feedforward control command. Simulation results show that the algorithm based on a BP neural network and reinforcement learning effectively approximates the visual-motor mapping.
Since traditional binocular-disparity-to-zero control is limited by degrees of freedom and by prior knowledge, a new visual feedback control based on global contour features and binocular disparity is proposed. An error function is formed from the binocular disparity angle error and the approach direction angle error, and an extended disparity-to-zero visual feedback control is derived from it. The extended controller resolves the degree-of-freedom limitation inherent in traditional binocular disparity control. Furthermore, object-tracking and reaching experiments show that it drives the binocular disparity stably to zero, with accurate alignment and positioning.
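The structure of such a feedback loop can be sketched with an idealized discrete-time model: stack the two error terms into one vector and apply a proportional gain until both components vanish. The gain matrix, the perfectly responsive plant, and the initial errors below are all illustrative assumptions, not the extended controller derived in the thesis:

```python
import numpy as np

# Stacked error vector: (binocular disparity angle error,
# approach-direction angle error), both in radians.
K = np.diag([0.4, 0.3])            # proportional feedback gains (assumed)

def control_step(error):
    """One control cycle: command u = -K e; the idealized plant reduces
    the error by exactly the commanded amount, so e_{k+1} = (I - K) e_k."""
    u = -K @ error
    return error + u

error = np.array([0.20, -0.15])    # initial disparity / direction errors
for _ in range(30):
    error = control_step(error)
```

Because every eigenvalue of (I - K) lies inside the unit circle, both error components decay geometrically to zero — the "disparity stably to zero" behavior that, in the real system, yields accurate alignment and positioning.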
Finally, the hand-eye coordination robot is applied to teleoperation: a supervisory-control teleoperation system and a dynamic-autonomy teleoperation system with an adjustable level of autonomy are constructed using the autonomous hand-eye coordination robot as the telerobot. To interact with the autonomous telerobot efficiently, a new interaction scheme composed of a natural-image interface and a tele-autonomy programming module is created. It connects 'human operator-computer-autonomous robot': the natural-image interface provides a channel for object definition and setting, and the tele-autonomy programming module provides the tool for 'computer-autonomous robot' interaction. Experiments show that the teleoperation system not only exploits the human operator's experience in complex work environments but also makes full use of the robot's autonomy.
