Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Joohyun | - |
dc.contributor.author | Hong, Seohee | - |
dc.contributor.author | Hong, Sengphil | - |
dc.contributor.author | Kim, Jaehoon | - |
dc.date.issued | 2021-08-10 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/31236 | - |
dc.description.abstract | Reinforcement learning (RL) is utilized in a wide range of real-world applications. Typical applications use single-agent RL; however, many practical tasks require multiple agents for cooperative control. Multi-agent RL demands a complicated design, and numerous design possibilities must be considered for it to be practically useful. We propose two RL implementations for a message-queuing telemetry transport (MQTT) protocol system that improve the communication efficiency of MQTT: (i) a single-broker-agent implementation and (ii) a multiple-publisher-agents implementation. Each implementation focuses on different message priorities in a dynamic environment. The proposed implementations improve communication efficiency by adjusting the loop cycle time of the broker or by learning message importance. The proposed MQTT control scheme improves the battery efficiency of Internet-of-Things (IoT) devices with relatively limited battery power. | - |
dc.description.sponsorship | This work was supported by the Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. 2016-0-00160, Versatile Network System Architecture for Multi-Dimensional Diversity). This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2017R1A2B1009709). | - |
dc.language.iso | eng | - |
dc.publisher | John Wiley and Sons Ltd | - |
dc.subject.mesh | Agent implementation | - |
dc.subject.mesh | Co-operative control | - |
dc.subject.mesh | Communication efficiency | - |
dc.subject.mesh | Internet of Things (IoT) | - |
dc.subject.mesh | IoT system | - |
dc.subject.mesh | MQTT | - |
dc.subject.mesh | Multi agent | - |
dc.subject.mesh | Single-agent | - |
dc.title | Context-aware pub/sub control method using reinforcement learning | - |
dc.type | Conference Paper | - |
dc.citation.title | Concurrency and Computation: Practice and Experience | - |
dc.citation.volume | 33 | - |
dc.identifier.bibliographicCitation | Concurrency and Computation: Practice and Experience, Vol.33 | - |
dc.identifier.doi | 10.1002/cpe.5727 | - |
dc.identifier.scopusid | 2-s2.0-85082756459 | - |
dc.identifier.url | http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)1532-0634 | - |
dc.subject.keyword | IoT system | - |
dc.subject.keyword | MQTT | - |
dc.subject.keyword | multi-agent | - |
dc.subject.keyword | reinforcement learning | - |
dc.subject.keyword | single-agent | - |
dc.description.isoa | false | - |
dc.subject.subarea | Software | - |
dc.subject.subarea | Theoretical Computer Science | - |
dc.subject.subarea | Computer Science Applications | - |
dc.subject.subarea | Computer Networks and Communications | - |
dc.subject.subarea | Computational Theory and Mathematics | - |
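The abstract above describes a broker agent that learns to adjust its loop cycle time according to message priority: short cycles serve urgent traffic quickly, while long cycles save the battery of IoT devices. As a rough illustration only (none of this code is from the paper; the candidate cycle times, the reward function, and the traffic model are all assumptions), a minimal tabular bandit-style Q-learning sketch of that trade-off might look like:

```python
import random

# Hypothetical candidate loop cycle times for the broker, in seconds.
ACTIONS = [0.1, 0.5, 1.0]

def reward(cycle, high_priority_pending):
    # Assumed reward shape: favor short cycles only when urgent messages
    # are waiting; otherwise reward battery-friendly, longer idle cycles.
    if high_priority_pending:
        return 1.0 - cycle
    return cycle - 0.5

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # State: whether high-priority messages are currently queued.
    q = {(s, a): 0.0 for s in (False, True) for a in ACTIONS}
    for _ in range(episodes):
        state = rng.random() < 0.3  # assume 30% of steps carry urgent traffic
        if rng.random() < epsilon:  # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        r = reward(action, state)
        # One-step (bandit) Q update; no state transition is modeled here.
        q[(state, action)] += alpha * (r - q[(state, action)])
    return q

q = train()
# Greedy policy after training: cycle chosen in each traffic state.
busy_cycle = max(ACTIONS, key=lambda a: q[(True, a)])
idle_cycle = max(ACTIONS, key=lambda a: q[(False, a)])
```

Under these assumed rewards, the learned policy should select the shortest cycle when urgent messages are pending and the longest cycle otherwise; the paper's actual implementations additionally cover a multiple-publisher-agents variant that learns message importance on the publisher side.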
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.