Ajou University repository

Network-Wide Energy-Efficiency Maximization in UAV-Aided IoT Networks: Quasi-Distributed Deep Reinforcement Learning Approach
Citations (SCOPUS)
2


Publication Year
2025-01-01
Journal
IEEE Internet of Things Journal
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
IEEE Internet of Things Journal, Vol.12 No.11, pp.15404-15414
Keyword
Multiagent deep reinforcement learning (DRL); network-wide energy efficiency maximization; UAV control; UAV-base station (BS); uncrewed aerial vehicle (UAV)-aided Internet of Things (IoT) network
Mesh Keyword
Aerial vehicle; Efficiency maximization; Energy; Multi agent; Multi-agent deep reinforcement learning; Network-wide energy efficiency maximization; Reinforcement learnings; Unmanned aerial vehicle control; Unmanned aerial vehicle-aided internet of thing network; Unmanned aerial vehicle-base station; Vehicle Control
All Science Classification Codes (ASJC)
Signal Processing; Information Systems; Hardware and Architecture; Computer Science Applications; Computer Networks and Communications
Abstract
In uncrewed aerial vehicle (UAV)-aided Internet of Things (IoT) networks, providing seamless and reliable wireless connectivity to ground devices (GDs) is difficult owing to the short battery lifetimes of UAVs. Hence, we consider a deep reinforcement learning (DRL)-based UAV base station (UAV-BS) control method to maximize the network-wide energy efficiency of UAV-aided IoT networks featuring continuously moving GDs. First, we introduce two centralized DRL approaches: round-robin deep Q-learning (RR-DQL) and selective-k deep Q-learning (SK-DQL), where all UAV-BSs are controlled by a ground control station that collects the status information of UAV-BSs and determines their actions. However, significant signaling overhead and undesired processing latency can occur in these centralized approaches. Hence, we herein propose a quasi-distributed DQL-based UAV-BS control (QD-DQL) method that determines the actions of each agent based on its local information. By performing intensive simulations, we verify the algorithmic robustness and superior performance of the proposed QD-DQL method through comparison with several benchmark methods (i.e., RR-DQL, SK-DQL, multiagent Q-learning, and exhaustive search) while considering the mobility of GDs and the increase in the number of UAV-BSs.
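The quasi-distributed idea described in the abstract, where each UAV-BS agent selects actions from its own local observations rather than waiting on a central controller, can be illustrated with a minimal sketch. This is not the paper's actual model: the state encoding, movement actions, learning rates, and reward used below are hypothetical placeholders, and a tabular Q-learning agent stands in for the paper's deep Q-network.

```python
import random

# Illustrative sketch only: a tabular Q-learning agent acting on local state,
# in the spirit of quasi-distributed UAV-BS control. All names and values
# here (ACTIONS, state labels, hyperparameters) are assumptions, not the
# paper's design.
ACTIONS = ["north", "south", "east", "west", "hover"]

class LocalQAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}             # (state, action) -> estimated value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def act(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick the
        # highest-valued action for the locally observed state.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the bootstrapped target.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Each UAV-BS would run its own agent on local information only.
agent = LocalQAgent(epsilon=0.0)  # greedy, for a deterministic demo
agent.update("cell_3", "east", reward=1.0, next_state="cell_4")
print(agent.act("cell_3"))  # prints "east": it now has the highest Q-value
```

In the full method, the per-agent reward would reflect network-wide energy efficiency, and the tabular lookup would be replaced by a neural network over a continuous local state.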
ISSN
2327-4662
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/38456
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85216326600&origin=inward
DOI
https://doi.org/10.1109/jiot.2025.3532477
Journal URL
http://ieeexplore.ieee.org/servlet/opac?punumber=6488907
Type
Article
Funding
This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korea Government (MSIT) under Grant 2022R1A2C1010602; in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) through the Development of 3-D Spatial Mobile Communication Technology under Grant 2021-0-00794, through the Development of 3D-NET Core Technology for High-Mobility Vehicular Service under Grant 2022-0-00704, and through the Development of Ground Station Core Technology for Low Earth Orbit Cluster Satellite Communications under Grant RS-2024-00359235; and in part by the Korea Research Institute for Defense Technology Planning and Advancement (KRIT) Grant funded by the Korea Government (DAPA, Defense Acquisition Program Administration) (KRIT-CT-22-047, Space-Layer Intelligent Communication Network Laboratory, 2022).

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Howon (이호원)
Department of Electrical and Computer Engineering


File Download

  • There are no files associated with this item.