Advances in Edge Computing and Network Optimization

The field of edge computing and network optimization is evolving rapidly, with a focus on solutions that address latency, congestion, and resource allocation. Recent research explores decentralized AI, federated learning, and edge computing to improve network performance and efficiency. Notably, new frameworks and algorithms, including diffusion-based solvers and empirical RAT evaluation, show significant promise in reducing congestion and latency. The integration of edge computing with emerging technologies such as 5G and IoT is also opening new opportunities for intelligent access, mobility, and routing strategies.
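To make the federated learning theme concrete, the sketch below shows a minimal weighted model-aggregation step of the kind an edge server might run over client updates. It is an illustrative assumption, not the method of any cited paper: the function name, the per-client dataset-size weighting, and the parameter layout are all hypothetical.

```python
# Minimal sketch of federated averaging for model aggregation at the
# network edge. Names and the weighting scheme are illustrative
# assumptions, not drawn from the cited papers.
from typing import Dict, List
import numpy as np

def federated_average(client_updates: List[Dict[str, np.ndarray]],
                      client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Aggregate client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    aggregated: Dict[str, np.ndarray] = {}
    for name in client_updates[0]:
        aggregated[name] = sum(
            (size / total) * update[name]
            for update, size in zip(client_updates, client_sizes)
        )
    return aggregated

# Example: three edge clients contribute updates for one weight tensor.
updates = [{"w": np.array([1.0, 2.0])},
           {"w": np.array([3.0, 4.0])},
           {"w": np.array([5.0, 6.0])}]
sizes = [100, 200, 700]
print(federated_average(updates, sizes))  # {'w': array([4.2, 5.2])}
```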

Several noteworthy papers stand out. Tetris proposes an SLA-aware application placement strategy for the edge-cloud continuum that reduces SLA violations by approximately 76%. COHERE introduces a congestion-aware offloading and handover framework, driven by empirical RAT evaluation, that reduces the load on congested RATs by up to 32% and improves link delay by up to 166%. The Diffusion-Based Solver presents a theoretical framework for CNF placement on the cloud continuum based on Denoising Diffusion Probabilistic Models, achieving orders-of-magnitude faster inference than MINLP solvers. A toy placement heuristic illustrating the general SLA-aware placement problem appears below.
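The following sketch is a hypothetical greedy heuristic for SLA-aware placement in the edge-cloud continuum; it is not the Tetris algorithm. The node and application fields, the latency-only SLA check, and the bin-packing-style tie-breaking are assumptions made for illustration.

```python
# Illustrative greedy SLA-aware placement heuristic (hypothetical sketch,
# not the Tetris algorithm): node/app fields and the latency-based SLA
# check are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    free_cpu: float        # available CPU cores
    latency_ms: float      # latency from the user to this node

@dataclass
class App:
    name: str
    cpu: float             # required CPU cores
    sla_latency_ms: float  # maximum tolerated latency

def place(app: App, nodes: List[Node]) -> Optional[Node]:
    """Prefer a node that satisfies the SLA with the least spare capacity
    (bin-packing style); violate the SLA only if no node can meet it."""
    feasible = [n for n in nodes if n.free_cpu >= app.cpu]
    if not feasible:
        return None
    meets_sla = [n for n in feasible if n.latency_ms <= app.sla_latency_ms]
    pool = meets_sla or feasible
    chosen = min(pool, key=lambda n: (n.free_cpu, n.latency_ms))
    chosen.free_cpu -= app.cpu
    return chosen

nodes = [Node("edge-1", 4.0, 5.0), Node("cloud", 64.0, 40.0)]
app = App("video-analytics", cpu=2.0, sla_latency_ms=10.0)
print(place(app, nodes).name)  # edge-1
```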

Sources

Asynchronous Risk-Aware Multi-Agent Packet Routing for Ultra-Dense LEO Satellite Networks

Toward Hybrid COTS-based LiFi/WiFi Networks with QoS Requirements in Mobile Environments

Tetris: An SLA-aware Application Placement Strategy in the Edge-Cloud Continuum

COHERE - Congestion-aware Offloading and Handover via Empirical RAT Evaluation for Multi-RAT Networks

Diffusion-Based Solver for CNF Placement on the Cloud-Continuum

Towards Efficient Federated Learning of Networked Mixture-of-Experts for Mobile Edge Computing

Decentralized AI Service Placement, Selection and Routing in Mobile Networks

Federated Attention: A Distributed Paradigm for Collaborative LLM Inference over Edge Networks

On the Optimization of Model Aggregation for Federated Learning at the Network Edge

TT-Prune: Joint Model Pruning and Resource Allocation for Communication-efficient Time-triggered Federated Learning
