Edge Computing and Cloud Resource Optimization

Research in cloud and edge computing is increasingly focused on optimizing resource allocation and utilization. Researchers are exploring new policies and frameworks for dynamic server allocation, containerized service delivery, and automated algorithm selection to improve performance and reduce costs. A key focus is the development of stability-aware, adaptive systems that can handle dynamic workloads and volatile conditions. Another important trend is the integration of machine learning and artificial intelligence to predict performance, optimize resource usage, and improve decision-making. Noteworthy papers in this area include:

  • Accelerating Containerized Service Delivery at the Network Edge, which presents a decentralized, peer-to-peer (P2P) system for optimizing container image distribution in edge environments; a minimal sketch of the peer-assisted pull idea appears after this list.
  • SAM: A Stability-Aware Cache Manager for Multi-Tenant Embedded Databases, which introduces an autonomic cache manager powered by a dual-factor model that sustains high performance through strategic stability and robustness; a hedged sketch of one possible dual-factor allocation scheme follows the P2P example below.
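
To make the peer-assisted delivery idea concrete, here is a minimal sketch in which edge nodes check neighboring peers for cached image layers before falling back to a central registry. The names (EdgePeer, pull_layer, REGISTRY) and the in-memory data structures are illustrative assumptions, not the protocol or implementation described in the paper.

```python
# Hedged sketch of peer-assisted container image pulls at the edge.
# EdgePeer, pull_layer, and REGISTRY are hypothetical names for illustration.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EdgePeer:
    """An edge node that caches image layers and serves them to neighbors."""
    name: str
    layers: dict[str, bytes] = field(default_factory=dict)   # digest -> blob
    neighbors: list["EdgePeer"] = field(default_factory=list)

    def pull_layer(self, digest: str, registry: dict[str, bytes]) -> bytes:
        # 1. Local cache hit: nothing to transfer.
        if digest in self.layers:
            return self.layers[digest]
        # 2. Ask nearby peers first; a local copy avoids the WAN trip.
        for peer in self.neighbors:
            if digest in peer.layers:
                blob = peer.layers[digest]
                self.layers[digest] = blob  # cache for future neighbors
                return blob
        # 3. Fall back to the central registry only when no peer has the layer.
        blob = registry[digest]
        self.layers[digest] = blob
        return blob

# Usage: two edge nodes; the second pull is served peer-to-peer.
REGISTRY = {"sha256:abc": b"layer-bytes"}
a, b = EdgePeer("edge-a"), EdgePeer("edge-b")
a.neighbors, b.neighbors = [b], [a]
a.pull_layer("sha256:abc", REGISTRY)   # fetched from the registry
b.pull_layer("sha256:abc", REGISTRY)   # served by peer edge-a
```

The point of the sketch is the ordering of the lookup: layers already present on a nearby node are copied over the local network instead of the backhaul link to the registry.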
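
Similarly, the dual-factor idea behind SAM can be illustrated with a small allocation routine that blends a performance factor with a stability factor and damps changes between cycles. The specific factors used here (hit-rate pressure and demand volatility), the damping rule, and all names are assumptions for illustration only; SAM's actual model is defined in the paper.

```python
# Hedged sketch of a dual-factor cache partitioner for multi-tenant databases.
# The two factors and the damping heuristic are assumptions, not SAM's model.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TenantStats:
    name: str
    hit_rate: float           # recent cache hit rate in [0, 1]
    demand_volatility: float  # normalized variance of recent working-set size

def allocate_cache(tenants: list[TenantStats],
                   current: dict[str, int],
                   total_pages: int,
                   stability_weight: float = 0.5) -> dict[str, int]:
    """Split total_pages across tenants by a blended score, then damp the
    change relative to the current allocation to avoid thrashing."""
    # Factor 1: performance pressure -- tenants missing the cache need more.
    # Factor 2: stability -- volatile tenants get a discounted share so the
    # partition does not chase short-lived spikes.
    scores = {
        t.name: (1.0 - t.hit_rate) * (1.0 - stability_weight * t.demand_volatility)
        for t in tenants
    }
    norm = sum(scores.values()) or 1.0
    target = {name: int(total_pages * s / norm) for name, s in scores.items()}
    # Damp reallocation: move only halfway toward the target each cycle.
    return {
        name: current.get(name, 0) + (target[name] - current.get(name, 0)) // 2
        for name in target
    }

# Usage: a stable tenant keeps most of its pages; a volatile one grows slowly.
stats = [TenantStats("orders", hit_rate=0.6, demand_volatility=0.1),
         TenantStats("logs",   hit_rate=0.3, demand_volatility=0.8)]
print(allocate_cache(stats, current={"orders": 800, "logs": 200},
                     total_pages=1000))
```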

Sources

  • Quantifying the Performance Gap for Simple Versus Optimal Dynamic Server Allocation Policies
  • Accelerating Containerized Service Delivery at the Network Edge
  • SLA-Centric Automated Algorithm Selection Framework for Cloud Environments
  • SAM: A Stability-Aware Cache Manager for Multi-Tenant Embedded Databases
  • DSPE: Profit Maximization in Edge-Cloud Storage System using Dynamic Space Partitioning with Erasure Code
