Deep reinforcement learning and randomized blending for control under novel disturbances

Files
ifac20_final_updated.pdf (289.94 KB)
Accepted version
Date
2020-07
Authors
Sohége, Yves
Provan, Gregory
Quinones-Grueiro, Marcos
Biswas, Gautam
Abstract
Enabling autonomous vehicles to maneuver in novel scenarios is a key unsolved problem. A well-known approach, Weighted Multiple Model Adaptive Control (WMMAC), uses a set of pre-tuned controllers and combines their control actions using a weight vector. Although WMMAC improves on traditional switched control by smoothing control oscillations, it depends on accurate fault isolation and cannot deal with unknown disturbances. A recent approach avoids state estimation by randomly assigning the controller weight vector; however, it samples the control weights from a uniform distribution, which is sub-optimal compared to state-estimation methods. In this article, we propose a framework that uses deep reinforcement learning (DRL) to learn control-weight distributions that optimise the performance of the randomized approach for both known and unknown disturbances. We show that DRL-based randomized blending dominates pure randomized blending, a switched fault detection and isolation (FDI) based architecture, and pre-tuned controllers on a quadcopter trajectory optimisation task in which we penalise deviations in both position and attitude.
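The abstract's core mechanism, blending the actions of pre-tuned controllers with a randomly sampled weight vector, can be illustrated with a short sketch. This is not the authors' implementation: the Dirichlet parameterisation, the toy proportional controllers, and all names below are assumptions used only to show how uniform random blending generalises to a learnable weight distribution.

```python
# Minimal sketch of randomized controller blending (not the authors' code):
# pre-tuned controllers are combined via a convex weight vector sampled from
# a Dirichlet distribution. In the paper's framework a DRL agent would learn
# the weight distribution rather than fixing the parameters by hand.
import numpy as np


def blended_control(controllers, state, concentration, rng):
    """Blend controller outputs with weights drawn from a Dirichlet distribution.

    controllers   -- list of callables mapping state -> control action
    concentration -- Dirichlet parameters; all-ones gives uniform random
                     blending, while a learned policy would output these
    """
    weights = rng.dirichlet(concentration)          # w_i >= 0, sum(w_i) = 1
    actions = np.array([c(state) for c in controllers])
    return weights @ actions                        # convex combination of actions


# Toy usage with two proportional controllers of different gains (illustrative)
rng = np.random.default_rng(seed=0)
controllers = [lambda x: -1.0 * x, lambda x: -3.0 * x]
u = blended_control(controllers, state=np.array([0.2]),
                    concentration=[1.0, 1.0], rng=rng)
```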
Keywords
Design of fault tolerant/reliable systems, Fault accommodation and Reconfiguration strategies, Methods based on neural networks and/or fuzzy logic for FDI
Citation
Sohege, Y., Provan, G., Quinones-Grueiro, M. and Biswas, G. (2020) 'Deep reinforcement learning and randomized blending for control under novel disturbances', IFAC World Congress 2020, Germany, 11-15 July.