Ajou University repository

Effective Controller Placement in Software-Defined Internet-of-Things Leveraging Deep Q-Learning (DQL)
Citations (SCOPUS)
1

Publication Year
2024-01-01
Publisher
Tech Science Press
Citation
Computers, Materials and Continua, Vol.81, pp.4015-4032
Keyword
controller placement; deep Q-learning; quality of service; software-defined networking
Mesh Keyword
Controller placements; Deep Q-learning; Next-generation networks; Performance; Placement strategy; Programmability; Propagation delays; Q-learning; Quality-of-service; Software-defined networkings
All Science Classification Codes (ASJC)
Biomaterials; Modeling and Simulation; Mechanics of Materials; Computer Science Applications; Electrical and Electronic Engineering
Abstract
The controller is a main component in the Software-Defined Networking (SDN) framework, which plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as control traffic between switches and the controller must traverse more hops or incur greater propagation delay. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, which measures the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is represented as a “state,” while “actions” correspond to potential decisions regarding the placement of controllers or the reassignment of switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, which is defined as the negative of the E2E delay. Essentially, the lower the delay, the higher the reward the system receives, enabling it to continuously improve its controller placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay when compared to traditional benchmark placement strategies. By dynamically learning from the network’s real-time conditions, the proposed method ensures that controller placement remains efficient and responsive, reducing communication delays and enhancing overall network performance.
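
Illustrative sketch
The abstract describes a concrete formulation: the state is the current controller placement, an action places or reassigns a controller, the reward is the negative E2E delay (a combination of hop count and propagation delay), and a DQN approximates the Q-function. The Python sketch below illustrates that formulation only and is not the authors' implementation; the toy topology, the delay weights ALPHA and BETA, the "move the farthest controller" action semantics, and all identifiers (e2e_delay, QNet, state_vec) are hypothetical choices made for this example.

# Minimal sketch under the assumptions stated above; not the paper's code.
import random

import networkx as nx
import torch
import torch.nn as nn

ALPHA, BETA = 1.0, 1.0        # assumed weights: hop count vs. propagation delay
K = 2                         # assumed number of controllers
GAMMA, EPS, LR = 0.9, 0.1, 1e-3

def e2e_delay(g, controllers):
    """Mean switch-to-nearest-controller delay: weighted hop count plus
    propagation delay stored on the edge attribute 'prop'."""
    total = 0.0
    for sw in g.nodes:
        total += min(
            ALPHA * nx.shortest_path_length(g, sw, c)                  # hop count
            + BETA * nx.shortest_path_length(g, sw, c, weight="prop")  # propagation
            for c in controllers
        )
    return total / g.number_of_nodes()

class QNet(nn.Module):
    """Q-function approximator: input is the binary placement vector (state),
    output is one Q-value per node (candidate placement action)."""
    def __init__(self, n):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n))

    def forward(self, s):
        return self.f(s)

def state_vec(n, controllers):
    s = torch.zeros(n)
    s[list(controllers)] = 1.0
    return s

# Toy always-connected topology with synthetic propagation delays.
g = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)
for u, v in g.edges:
    g.edges[u, v]["prop"] = random.uniform(0.1, 1.0)

n = g.number_of_nodes()
q = QNet(n)
opt = torch.optim.Adam(q.parameters(), lr=LR)
controllers = set(random.sample(list(g.nodes), K))

for step in range(200):
    s = state_vec(n, controllers)
    # Epsilon-greedy action: pick a node at which to host a controller.
    a = random.randrange(n) if random.random() < EPS else int(q(s).argmax())
    if a in controllers:
        nxt = set(controllers)            # no-op: node already hosts a controller
    else:                                 # move the farthest controller to node a
        victim = max(controllers, key=lambda c: nx.shortest_path_length(g, c, a))
        nxt = (controllers - {victim}) | {a}
    reward = -e2e_delay(g, nxt)           # reward = negative E2E delay
    with torch.no_grad():
        target = reward + GAMMA * q(state_vec(n, nxt)).max()
    loss = (q(s)[a] - target) ** 2        # squared temporal-difference error
    opt.zero_grad()
    loss.backward()
    opt.step()
    controllers = nxt

print("placement:", sorted(controllers), "E2E delay:", round(e2e_delay(g, controllers), 3))

A full DQN as described in the abstract would add experience replay and a target network on top of this single-transition Bellman update; the sketch keeps only the parts needed to show how a reward equal to the negative E2E delay drives the placement decision.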
Language
eng
URI
https://dspace.ajou.ac.kr/dev/handle/2018.oak/34678
DOI
https://doi.org/10.32604/cmc.2024.058480
Fulltext

Type
Article
Funding
This study was supported by the Researcher Supporting Project number (RSPD2024R582), King Saud University, Riyadh, Saudi Arabia; the authors extend their appreciation for this support.

Related Researcher
ALI JEHAD
Department of Software and Computer Engineering

File Download
  • There are no files associated with this item.