Edge Computing Advancements for Real-Time Applications

Research in edge computing is moving toward more efficient and scalable solutions for real-time applications. Current work explores ways to optimize model performance, reduce latency, and improve resource allocation. Notably, there is growing focus on collaborative inference, in which edge devices and cloud servers share a workload to achieve better accuracy and faster response times. Another key direction is the development of optimization techniques, such as Bayesian optimization and multi-objective optimization, that balance competing demands like accuracy, latency, and energy consumption. Advances in hardware-software co-design are also enabling more efficient deployment of machine learning models on edge devices. Together, these developments pave the way for wider adoption of edge computing in applications such as autonomous driving, smart cities, and the IoT. A simple sketch of the multi-objective trade-off underlying several of these works appears after this paragraph.

Noteworthy papers include the following. EdgeSync introduces an efficient edge-model updating approach that enhances sample filtering and dynamic training management to improve accuracy and reduce update delays. Bayes-Split-Edge proposes a Bayesian optimization framework for collaborative inference in wireless edge networks, achieving significant reductions in evaluation cost and improved performance under tight constraints. MMEdge presents a pipelined sensing and encoding framework for on-device multimodal inference, enabling incremental computation and fine-grained cross-modal optimization to reduce latency while maintaining accuracy. Pareto-Optimal Sampling and Resource Allocation develops a graph-based algorithm that finds the complete set of Pareto optima for timely communication in shared-spectrum low-altitude networks, achieving significant reductions in energy consumption and resource utilization.
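To make the multi-objective trade-off concrete, the sketch below shows how a set of candidate edge deployments (edge-only, cloud-only, or split at some layer) can be filtered down to its Pareto-optimal subset over latency, energy, and accuracy. This is a minimal, generic illustration assuming hypothetical candidate names and objective values; it is not the algorithm from any of the cited papers, which use more sophisticated search (e.g., Bayesian or graph-based optimization) rather than brute-force enumeration.

```python
# Minimal sketch: Pareto-optimal selection among candidate edge deployments.
# Candidate names and objective values are illustrative placeholders, not
# taken from any of the cited papers.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class Candidate:
    name: str          # hypothetical deployment option
    latency_ms: float  # lower is better
    energy_mj: float   # lower is better
    accuracy: float    # higher is better


def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is no worse than `b` on every objective and strictly
    better on at least one."""
    no_worse = (a.latency_ms <= b.latency_ms
                and a.energy_mj <= b.energy_mj
                and a.accuracy >= b.accuracy)
    strictly_better = (a.latency_ms < b.latency_ms
                       or a.energy_mj < b.energy_mj
                       or a.accuracy > b.accuracy)
    return no_worse and strictly_better


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Brute-force O(n^2) filter; adequate for the small configuration
    spaces typical of edge deployment choices."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other is not c)]


if __name__ == "__main__":
    options = [
        Candidate("edge-only, quantized", 18.0, 40.0, 0.89),
        Candidate("edge-only, full precision", 35.0, 95.0, 0.93),
        Candidate("split at layer 12", 24.0, 55.0, 0.94),
        Candidate("cloud-only", 60.0, 30.0, 0.95),
        Candidate("split at layer 4", 40.0, 70.0, 0.92),
    ]
    for c in pareto_front(options):
        print(c)
```

Running the example keeps only the non-dominated options (here the quantized edge-only, split-at-layer-12, and cloud-only candidates); a scheduler or optimizer would then pick from this front according to application constraints such as a latency deadline or an energy budget.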

Sources

A Confidence-Constrained Cloud-Edge Collaborative Framework for Autism Spectrum Disorder Diagnosis

EdgeSync: Accelerating Edge-Model Updates for Data Drift through Adaptive Continuous Learning

Edge Collaborative Gaussian Splatting with Integrated Rendering and Communication

Rethinking Inference Placement for Deep Learning across Edge and Cloud Platforms: A Multi-Objective Optimization Perspective and Future Directions

Bayes-Split-Edge: Bayesian Optimization for Constrained Collaborative Inference in Wireless Edge Systems

Resource-Efficient and Robust Inference of Deep and Bayesian Neural Networks on Embedded and Analog Computing Platforms

MMEdge: Accelerating On-device Multimodal Inference via Pipelined Sensing and Encoding

Pareto-Optimal Sampling and Resource Allocation for Timely Communication in Shared-Spectrum Low-Altitude Networks
