A Reinforcement Learning Method for Gliding Control of Underwater Gliding Snake-like Robot
  • Authors: ZHANG Xiaolu; LI Bin; CHANG Jian; TANG Jingge
  • Affiliations: College of Information Science and Engineering, Northeastern University; The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Keywords: reinforcement learning; underwater gliding snake-like robot; Markov decision process; recurrent neural network
  • Journal: Robot (机器人); journal code: JQRR
  • Publication date: 2019-03-26 18:20
  • Year: 2019
  • Volume: 41
  • Issue: 03
  • Pages: 48-56 (9 pages)
  • Article ID: JQRR201903006
  • CN: 21-1137/TP
  • Funding: National Key R&D Program of China (2017YFB1300101); National Natural Science Foundation of China Young Scientists Fund (61803365)
  • Language: Chinese
Abstract
A reinforcement learning algorithm for gliding control of an underwater gliding snake-like robot is studied. Because the hydrodynamic environment is difficult to model, a reinforcement learning method is adopted so that the underwater gliding snake-like robot can adapt to the complex water environment and automatically learn to control its gliding motion by adjusting buoyancy alone. A Monte Carlo policy gradient algorithm based on a recurrent neural network is proposed to ease training when the robot's state cannot be fully observed, and the basic gliding action control problem of the underwater gliding snake-like robot is approximated as a Markov decision process (MDP), from which an effective gliding control policy is obtained. Simulation and experimental results demonstrate the effectiveness of the proposed method.
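Note on the method: the abstract describes a REINFORCE-style Monte Carlo policy gradient whose policy is a recurrent neural network, so the recurrent hidden state summarizes the observation history of a partially observed process. The estimator being sampled is
$\nabla_\theta J(\theta) = \mathbb{E}\big[\sum_t G_t \, \nabla_\theta \log \pi_\theta(a_t \mid h_t)\big]$, with return $G_t = \sum_{k \ge t} \gamma^{k-t} r_k$ and $h_t$ the recurrent summary of observations up to step $t$.
The sketch below illustrates this idea in PyTorch; it is not the paper's implementation. The environment interface (reset()/step()), the discretized buoyancy action set, the network sizes, and the hyperparameters are all illustrative assumptions.
```python
# Minimal sketch of a Monte Carlo policy gradient (REINFORCE) with an LSTM
# policy for a partially observed control task. Hypothetical interface: the
# environment's reset() returns an observation vector, and step(action)
# returns (observation, reward, done). This illustrates the technique named
# in the abstract, not the authors' implementation.
import torch
import torch.nn as nn


class RecurrentPolicy(nn.Module):
    """LSTM policy: maps the observation history to action logits."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, state=None):
        # obs: (batch, time, obs_dim). The carried LSTM state stands in for
        # the part of the robot state that cannot be observed directly.
        out, state = self.lstm(obs, state)
        return self.head(out), state


def reinforce_episode(env, policy, optimizer, gamma=0.99):
    """Roll out one episode, then apply one Monte Carlo policy gradient step."""
    obs, state, done = env.reset(), None, False
    log_probs, rewards = [], []
    while not done:
        x = torch.as_tensor(obs, dtype=torch.float32).view(1, 1, -1)
        logits, state = policy(x, state)  # recurrent state threads through time
        dist = torch.distributions.Categorical(logits=logits[0, -1])
        action = dist.sample()            # e.g. a discretized buoyancy command
        obs, reward, done = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Monte Carlo returns: G_t = r_t + gamma * G_{t+1}
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(list(reversed(returns)))
    returns = returns - returns.mean()    # mean baseline for variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()                       # backpropagates through the whole episode
    optimizer.step()
    return sum(rewards)


# Example wiring (dimensions and environment are hypothetical):
# policy = RecurrentPolicy(obs_dim=6, act_dim=3)
# optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
# for _ in range(2000):
#     reinforce_episode(env, policy, optimizer)
```
Repeated calls to reinforce_episode train the policy; because the LSTM state threads through the whole episode, the policy can act on temporal context that a single partial observation lacks, which matches the abstract's motivation for using a recurrent network when the state is not fully observable.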
