Mohammad Esmaeil Esmaeili, Ahmad Khonsari, Mahdi Dolati,
Dynamic distance-based load balancing in mobile edge computing with deep reinforcement learning,
Computer Communications,
Volume 244,
2025,
108337,
ISSN 0140-3664,
https://doi.org/10.1016/j.comcom.2025.108337.
(https://www.sciencedirect.com/science/article/pii/S0140366425002944)
Abstract: Edge computing reduces latency by bringing computation closer to end devices, but the growing scale and heterogeneity of edge networks make resource management increasingly complex. Load balancing is essential for efficient resource use and low response times, yet static approaches struggle in dynamic environments. This calls for adaptable, data-driven load balancing methods that can continuously respond to changing conditions and optimize performance. This paper addresses the problem of load balancing in edge computing, where the distance between servers plays a critical role in performance. We propose two deep reinforcement learning (DRL)-based algorithms, one built on Deep Q-Learning (DQL) and one on Long Short-Term Memory (LSTM) networks, that dynamically adjust the neighbor radius for load distribution in response to environmental changes. Unlike static approaches, our methods learn the radius online in a data-driven manner without requiring global coordination. Simulation results demonstrate that both algorithms adapt effectively to dynamic conditions. In scenarios with 80–100 edge servers and 500–1000 requests per second, DQL achieves up to 18% higher throughput, 21% lower average response time, and 23% lower blocking rate compared to recent methods, while LSTM remains competitive under stable workloads.
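To make the abstract's core mechanism concrete, the sketch below shows how a reinforcement-learning agent might pick a neighbor radius from observed load conditions. This is an illustrative stand-in only: the paper uses a deep Q-network, whereas this sketch uses a tiny tabular Q-learner, and all names here (`RADII`, `STATES`, `reward_for`, the load-level discretization) are hypothetical assumptions, not taken from the paper.

```python
import random

# Hypothetical discrete action/state spaces (not from the paper):
RADII = [1, 2, 3, 4]                # candidate neighbor radii, in hops
STATES = ["low", "medium", "high"]  # coarse observed load level

def choose_radius(q, state, epsilon=0.1, rng=random):
    """Epsilon-greedy selection of a neighbor radius for the given load state."""
    if rng.random() < epsilon:
        return rng.choice(RADII)
    return max(RADII, key=lambda r: q[(state, r)])

def update_q(q, state, radius, reward, next_state, alpha=0.5, gamma=0.9):
    """One-step Q-learning update toward the observed reward."""
    best_next = max(q[(next_state, r)] for r in RADII)
    q[(state, radius)] += alpha * (reward + gamma * best_next - q[(state, radius)])

def reward_for(state, radius):
    """Toy reward (assumed): each load level has an ideal radius; wider search
    helps under high load, while a small radius suffices under light load."""
    target = {"low": 1, "medium": 2, "high": 4}[state]
    return -abs(radius - target)

rng = random.Random(0)
q = {(s, r): 0.0 for s in STATES for r in RADII}
for _ in range(2000):
    state = rng.choice(STATES)
    radius = choose_radius(q, state, rng=rng)
    update_q(q, state, radius, reward_for(state, radius), rng.choice(STATES))

# After training, the greedy policy maps each load level to a radius:
print({s: choose_radius(q, s, epsilon=0.0) for s in STATES})
```

The point of the sketch is the control loop, not the learner: the radius is re-selected online as load observations arrive, with no global coordination, which is the property the abstract attributes to both the DQL and LSTM variants.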
Keywords: Edge computing; Load balancing; Deep reinforcement learning; Deep Q-Learning (DQL); Long Short-Term Memory (LSTM); Resource allocation; Mobile edge networks