On-line reinforcement learning for trajectory following with unknown faults

Files
aics_27.pdf (1.21 MB)
Published Version
Date
2018-12
Authors
Sohége, Yves
Provan, Gregory
Publisher
CEUR-WS.org
Abstract
Reinforcement learning (RL) is a key method for providing robots with appropriate control algorithms. Controller blending is a technique for combining the control outputs of several controllers. In this article we use on-line RL to learn an optimal blending of controllers for novel faults. Since one cannot anticipate all possible fault states, whose number is exponential in the number of possible faults, we instead learn over the effects that faults have on the system. We use a quadcopter path-following simulation in the presence of unknown rotor actuator faults for which the system has not been tuned. We empirically demonstrate the effectiveness of our novel on-line learning framework on a quadcopter trajectory-following task with unknown faults, even after a small number of learning cycles. The authors are not aware of any other use of on-line RL for fault-tolerant control under unknown faults.
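To illustrate the general idea of learning a controller blend on-line, the sketch below is a minimal, hypothetical example (not the authors' method): two fixed-gain P controllers track a setpoint on a toy first-order plant with an unknown actuator fault (40% effectiveness), and an epsilon-greedy bandit learns which convex blend of the two controllers minimizes tracking error. The plant model, gains, and reward are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical fixed-gain P controllers (assumed, not from the paper).
aggressive = lambda e: 3.0 * e
conservative = lambda e: 0.8 * e

# Candidate blend weights (bandit arms): u = w*aggressive(e) + (1-w)*conservative(e)
arms = np.linspace(0.0, 1.0, 5)
q = np.zeros(len(arms))   # value estimate per blend weight
n = np.zeros(len(arms))   # pull count per arm
eps = 0.1                 # exploration rate

def run_episode(w, effectiveness=0.4, steps=50):
    """Track a unit setpoint under an actuator fault; return negative accumulated cost."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        u = w * aggressive(e) + (1 - w) * conservative(e)
        x += effectiveness * u * 0.1  # toy first-order plant, dt = 0.1, faulty actuator
        cost += e * e
    return -cost

for episode in range(200):
    # Epsilon-greedy arm selection over blend weights.
    a = int(rng.integers(len(arms))) if rng.random() < eps else int(np.argmax(q))
    r = run_episode(arms[a])
    n[a] += 1
    q[a] += (r - q[a]) / n[a]  # incremental mean update of the arm's value

best = float(arms[int(np.argmax(q))])
```

In this toy fault scenario the higher-gain blend compensates the lost actuator effectiveness, so the bandit converges to the aggressive controller; with a different fault, the learned blend would shift without any model of the fault itself, which mirrors the paper's point about learning on fault effects rather than fault states.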
Keywords
Reinforcement learning, Fault-tolerant control, Quadcopter control
Citation
Sohége, Y. and Provan, G. (2018) 'On-line reinforcement learning for trajectory following with unknown faults', Proceedings of the 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2018), Dublin, Ireland, 6-7 December, pp. 1-12. Available at: http://ceur-ws.org/Vol-2259/aics_27.pdf (Accessed: 28 January 2019)
Copyright
© 2018, the Authors. Copying permitted for private and academic purposes.