Citation Export
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Seungmin | - |
| dc.contributor.author | Ban, Tae Won | - |
| dc.contributor.author | Lee, Howon | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.issn | 2327-4662 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/38456 | - |
| dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85216326600&origin=inward | - |
| dc.description.abstract | In uncrewed aerial vehicle (UAV)-aided Internet of Things (IoT) networks, providing seamless and reliable wireless connectivity to ground devices (GDs) is difficult owing to the short battery lifetimes of UAVs. Hence, we consider a deep reinforcement learning (DRL)-based UAV base station (UAV-BS) control method to maximize the network-wide energy efficiency of UAV-aided IoT networks featuring continuously moving GDs. First, we introduce two centralized DRL approaches: round-robin deep Q-learning (RR-DQL) and selective-k deep Q-learning (SK-DQL), in which all UAV-BSs are controlled by a ground control station that collects the status information of the UAV-BSs and determines their actions. However, these centralized approaches can incur significant signaling overhead and undesired processing latency. Hence, we herein propose a quasi-distributed DQL-based UAV-BS control (QD-DQL) method that determines the actions of each agent based on its local information. Through intensive simulations, we verify the algorithmic robustness and superior performance of the proposed QD-DQL method in comparison with several benchmark methods (i.e., RR-DQL, SK-DQL, multiagent Q-learning, and the exhaustive search method) while considering the mobility of GDs and increasing numbers of UAV-BSs. | - |
| dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korea Government (MSIT) under Grant 2022R1A2C1010602; in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) through Development of 3-D Spatial Mobile Communication Technology under Grant 2021-0-00794, through the Development of 3D-NET Core Technology for High-Mobility Vehicular Service under Grant 2022-0-00704, and through the Development of Ground Station Core Technology for Low Earth Orbit Cluster Satellite Communications under Grant RS-2024-00359235; and in part by the Korea Research Institute for Defense Technology Planning and Advancement (KRIT) Grant funded by the Korea Government (DAPA, Defense Acquisition Program Administration) under Grant KRIT-CT-22-047 (Space-Layer Intelligent Communication Network Laboratory, 2022). | - |
| dc.language.iso | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.subject.mesh | Aerial vehicle | - |
| dc.subject.mesh | Efficiency maximization | - |
| dc.subject.mesh | Energy | - |
| dc.subject.mesh | Multi agent | - |
| dc.subject.mesh | Multi-agent deep reinforcement learning | - |
| dc.subject.mesh | Network-wide energy efficiency maximization | - |
| dc.subject.mesh | Reinforcement learnings | - |
| dc.subject.mesh | Unmanned aerial vehicle control | - |
| dc.subject.mesh | Unmanned aerial vehicle-aided internet of thing network | - |
| dc.subject.mesh | Unmanned aerial vehicle-base station | - |
| dc.subject.mesh | Vehicle Control | - |
| dc.title | Network-Wide Energy-Efficiency Maximization in UAV-Aided IoT Networks: Quasi-Distributed Deep Reinforcement Learning Approach | - |
| dc.type | Article | - |
| dc.citation.endPage | 15414 | - |
| dc.citation.number | 11 | - |
| dc.citation.startPage | 15404 | - |
| dc.citation.title | IEEE Internet of Things Journal | - |
| dc.citation.volume | 12 | - |
| dc.identifier.bibliographicCitation | IEEE Internet of Things Journal, Vol.12 No.11, pp.15404-15414 | - |
| dc.identifier.doi | 10.1109/jiot.2025.3532477 | - |
| dc.identifier.scopusid | 2-s2.0-85216326600 | - |
| dc.identifier.url | http://ieeexplore.ieee.org/servlet/opac?punumber=6488907 | - |
| dc.subject.keyword | Multiagent deep reinforcement learning (DRL) | - |
| dc.subject.keyword | network-wide energy efficiency maximization | - |
| dc.subject.keyword | UAV Control | - |
| dc.subject.keyword | UAV-base station (BS) | - |
| dc.subject.keyword | uncrewed aerial vehicle (UAV)-aided Internet of Things (IoT) network | - |
| dc.type.other | Article | - |
| dc.identifier.pissn | 23274662 | - |
| dc.subject.subarea | Signal Processing | - |
| dc.subject.subarea | Information Systems | - |
| dc.subject.subarea | Hardware and Architecture | - |
| dc.subject.subarea | Computer Science Applications | - |
| dc.subject.subarea | Computer Networks and Communications | - |
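The abstract above describes a quasi-distributed scheme in which each UAV-BS agent selects actions from its own local information rather than from a central ground control station. The record does not include the paper's algorithmic details, so the following is only a minimal, hypothetical sketch of that general idea: independent per-agent Q-learning with local observations and an energy-efficiency-style reward. The tabular Q-table stands in for the deep Q-network of the paper, and the state, action, and reward definitions in `toy_step` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)


class LocalQLearningAgent:
    """Per-UAV-BS agent that learns and acts from local observations only.

    Hypothetical simplification: a tabular Q-table replaces the deep
    Q-network described in the abstract; states and actions are small
    discrete sets chosen purely for illustration.
    """

    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        # Epsilon-greedy over local Q-values; no coordination with a
        # ground control station is required to pick an action.
        if rng.random() < self.eps:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, s, a, reward, s_next):
        # One-step Q-learning update using only the locally observed
        # transition and reward.
        td_target = reward + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.lr * (td_target - self.q[s, a])


def toy_step(state, action, n_states):
    """Placeholder local dynamics and reward (assumed, not from the paper).

    The reward mimics an energy-efficiency objective: served traffic
    divided by the energy spent to serve it.
    """
    next_state = (state + action) % n_states
    served_bits = 1.0 + 0.1 * action   # hypothetical throughput term
    energy_used = 1.0 + 0.05 * action  # hypothetical energy term
    return next_state, served_bits / energy_used


n_states, n_actions, n_uav_bs = 16, 4, 3
agents = [LocalQLearningAgent(n_states, n_actions) for _ in range(n_uav_bs)]
states = [int(rng.integers(n_states)) for _ in range(n_uav_bs)]

for _ in range(1000):
    for i, agent in enumerate(agents):  # each UAV-BS decides independently
        a = agent.act(states[i])
        s_next, r = toy_step(states[i], a, n_states)
        agent.update(states[i], a, r, s_next)
        states[i] = s_next
```

The sketch only conveys the structural contrast with the centralized RR-DQL and SK-DQL baselines: each agent's `act` and `update` calls touch nothing but that agent's own state, so no status collection or action dispatch by a central controller appears in the loop.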
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.