3D UAV trajectory and data collection optimisation via deep reinforcement learning
dc.contributor.author | Nguyen, Khoi Khac | |
dc.contributor.author | Duong, Trung Q. | |
dc.contributor.author | Do-Duy, Tan | |
dc.contributor.author | Claussen, Holger | |
dc.contributor.author | Hanzo, Lajos | |
dc.contributor.funder | Royal Academy of Engineering | en |
dc.contributor.funder | European Research Council | en |
dc.contributor.funder | Engineering and Physical Sciences Research Council | en |
dc.date.accessioned | 2022-04-27T15:10:42Z | |
dc.date.available | 2022-04-27T15:10:42Z | |
dc.date.issued | 2022-04 | |
dc.date.updated | 2022-04-27T15:00:06Z | |
dc.description.abstract | Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication. However, due to the limitation of their on-board power and flight time, it is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT). In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices. Then, a deep reinforcement learning-based technique is conceived for finding the optimal trajectory and throughput in a specific coverage area. After training, the UAV has the ability to autonomously collect all the data from user nodes at a significant total sum-rate improvement while minimising the associated resources used. Numerical results are provided to highlight how our techniques strike a balance between the throughput attained, trajectory, and the time spent. More explicitly, we characterise the attainable performance in terms of the UAV trajectory, the expected reward and the total sum-rate. | en |
dc.description.sponsorship | U.K. Royal Academy of Engineering (RAEng Research Chair and Senior Research Fellowship scheme Grant RCSRF2021\11\41); Engineering and Physical Sciences Research Council (projects EP/P034284/1 and EP/P003990/1 (COALESCE)); European Research Council (ERC Advanced Fellow Grant Quant-Com (Grant No. 789028)) | en |
dc.description.status | Peer reviewed | en |
dc.description.version | Accepted Version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.citation | Nguyen, K. K., Duong, T. Q., Do-Duy, T., Claussen, H. and Hanzo, L. (2022) '3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning', IEEE Transactions On Communications, 70 (4), pp. 2358-2371. doi: 10.1109/TCOMM.2022.3148364 | en |
dc.identifier.doi | 10.1109/TCOMM.2022.3148364 | en |
dc.identifier.endpage | 2371 | en |
dc.identifier.issn | 0090-6778 | |
dc.identifier.issued | 4 | en |
dc.identifier.journaltitle | IEEE Transactions On Communications | en |
dc.identifier.startpage | 2358 | en |
dc.identifier.uri | https://hdl.handle.net/10468/13127 | |
dc.identifier.volume | 70 | en |
dc.language.iso | en | en |
dc.publisher | IEEE | en |
dc.relation.project | info:eu-repo/grantAgreement/EC/H2020::ERC::ERC-ADG/789028/EU/Ubiquitous Quantum Communications/QuantCom | en |
dc.relation.uri | https://ieeexplore.ieee.org/document/9701330 | |
dc.rights | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works | en |
dc.subject | Data collection | en |
dc.subject | Deep reinforcement learning | en |
dc.subject | Optimization | en |
dc.subject | Resource management | en |
dc.subject | Three-dimensional displays | en |
dc.subject | Throughput | en |
dc.subject | Trajectory | en |
dc.subject | UAV-assisted wireless network | en |
dc.subject | Wireless networks | en |
dc.title | 3D UAV trajectory and data collection optimisation via deep reinforcement learning | en |
dc.type | Article (peer-reviewed) | en |