Ajou University repository

Multi-Agent Reinforcement Learning-Based Resource Allocation Scheme for UAV-Assisted Internet of Remote Things Systems
  • Lee, Donggu
  • Sun, Young Ghyu
  • Kim, Soo Hyun
  • Kim, Jae Hyun
  • Shin, Yoan
  • Kim, Dong In
  • Kim, Jin Young
Citations (SCOPUS): 4

DC Field / Value
dc.contributor.author: Lee, Donggu
dc.contributor.author: Sun, Young Ghyu
dc.contributor.author: Kim, Soo Hyun
dc.contributor.author: Kim, Jae Hyun
dc.contributor.author: Shin, Yoan
dc.contributor.author: Kim, Dong In
dc.contributor.author: Kim, Jin Young
dc.date.issued: 2023-01-01
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/33450
dc.identifier.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85161043533&origin=inward
dc.description.abstract: Multi-layered communication networks including satellites and unmanned aerial vehicles (UAVs) with remote sensing capability are expected to be an essential part of next-generation wireless communication systems. Deep reinforcement learning algorithms have been reported to improve performance in various practical wireless communication environments. However, computational complexity is anticipated to become a critical issue as the number of devices in the network increases significantly. To resolve this problem, in this paper we propose a multi-agent reinforcement learning (MARL)-based resource allocation scheme for UAV-assisted Internet of remote things (IoRT) systems. The UAV and the IoRT sensors act as MARL agents that are independently trained to minimize the energy consumption cost of communication by controlling transmit power and bandwidth. It is shown that the UAV agent reduces energy consumption by 70.9195 kJ and the IoRT sensor agents by 20.5756 kJ, corresponding to reductions of 65.4% and 71.97%, respectively, compared to the initial state of each agent. Moreover, the effects of the hyperparameters of the neural episodic control (NEC) baseline algorithm on power consumption are investigated. (A minimal illustrative sketch of this independent-agent setup is given after this metadata listing.)
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject.mesh: Deep learning
dc.subject.mesh: Internet of remote thing
dc.subject.mesh: Low earth orbit satellites
dc.subject.mesh: Multi-agent reinforcement learning
dc.subject.mesh: Neural episodic control
dc.subject.mesh: Reinforcement learnings
dc.subject.mesh: Resource management
dc.subject.mesh: Resources allocation
dc.subject.mesh: Sensor systems
dc.subject.mesh: Wireless communications
dc.title: Multi-Agent Reinforcement Learning-Based Resource Allocation Scheme for UAV-Assisted Internet of Remote Things Systems
dc.type: Article
dc.citation.endPage: 53164
dc.citation.startPage: 53155
dc.citation.title: IEEE Access
dc.citation.volume: 11
dc.identifier.bibliographicCitation: IEEE Access, Vol.11, pp.53155-53164
dc.identifier.doi: 10.1109/access.2023.3279401
dc.identifier.scopusid: 2-s2.0-85161043533
dc.identifier.url: http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639
dc.subject.keyword: Deep learning
dc.subject.keyword: IoRT
dc.subject.keyword: multi-agent reinforcement learning
dc.subject.keyword: neural episodic control
dc.subject.keyword: resource allocation
dc.type.other: Article
dc.description.isoa: true
dc.subject.subarea: Computer Science (all)
dc.subject.subarea: Materials Science (all)
dc.subject.subarea: Engineering (all)
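
The following is a minimal, illustrative Python sketch of the independent-learner setup described in the abstract: one UAV agent and several IoRT sensor agents each select a (transmit power, bandwidth) pair and are rewarded with the negative of an energy-consumption cost. The toy energy model, the discrete action grid, the agent counts, and the use of a simple epsilon-greedy tabular learner in place of the paper's neural episodic control are all assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: independent agents choosing (power, bandwidth)
# to minimize an assumed energy cost. Not the paper's NEC-based method.
import itertools
import random
from collections import defaultdict

POWERS = [0.1, 0.5, 1.0, 2.0]      # transmit power levels (W), assumed
BANDWIDTHS = [0.5e6, 1e6, 2e6]     # bandwidths (Hz), assumed
ACTIONS = list(itertools.product(POWERS, BANDWIDTHS))

def energy_cost(power, bandwidth, payload_bits=1e6):
    """Toy energy model: transmit time = payload / rate, energy = power * time.
    Assumes a fixed spectral efficiency of 2 b/s/Hz; purely illustrative."""
    rate = bandwidth * 2.0
    return power * payload_bits / rate  # joules

class IndependentAgent:
    """Epsilon-greedy tabular learner over the discrete action grid."""
    def __init__(self, eps=0.1, lr=0.2):
        self.q = defaultdict(float)
        self.eps, self.lr = eps, lr

    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Running-average update toward the observed reward.
        self.q[action] += self.lr * (reward - self.q[action])

# One UAV agent and four IoRT sensor agents, trained independently.
agents = {"uav": IndependentAgent(),
          **{f"sensor_{i}": IndependentAgent() for i in range(4)}}

for episode in range(2000):
    for name, agent in agents.items():
        a = agent.act()
        power, bw = ACTIONS[a]
        reward = -energy_cost(power, bw)  # minimizing energy = maximizing reward
        agent.update(a, reward)

for name, agent in agents.items():
    best = max(range(len(ACTIONS)), key=lambda a: agent.q[a])
    p, bw = ACTIONS[best]
    print(f"{name}: learned power={p} W, bandwidth={bw/1e6:.1f} MHz")
```

Under these toy assumptions every agent converges to the lowest-energy action (lowest power, widest bandwidth); in the actual system the trade-offs arise from channel, rate, and coverage constraints that are not modeled here.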

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Jae-Hyun (김재현)
Department of Electrical and Computer Engineering

File Download

  • There are no files associated with this item.