Optimizing Resource Utilization in Cloud-Native Environments

The field of cloud-native computing is moving toward better resource utilization and energy efficiency. Researchers are exploring scheduling strategies that account for the distributedness of resources such as CPU, memory, network, and storage to improve overall efficiency. Another direction is layer-aware, resource-adaptive container scheduling for edge computing, which aims to reduce deployment costs and container startup times. There is also growing interest in transferring concepts from other fields, such as CPU architecture, to the layout design of warehouses and similar systems. Energy-optimized scheduling is becoming increasingly important as well, particularly for AIoT workloads, with researchers proposing new schedulers that balance sustainability and performance.

Noteworthy papers include:

LRScheduler, which proposes a layer-aware and resource-adaptive container scheduler for edge computing.

GreenPod, which presents a TOPSIS-based scheduler for energy-optimized scheduling of AIoT workloads.
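TOPSIS, the decision method GreenPod builds on, ranks candidates by their closeness to an ideal solution across weighted criteria. The sketch below shows the standard TOPSIS steps applied to a scheduling-style choice among nodes; the node data, criteria, and weights are invented for illustration and are not taken from GreenPod itself.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Score alternatives (rows) by standard TOPSIS.

    matrix  : list of rows, one per candidate node; columns are criteria
    weights : criterion weights (assumed to sum to 1)
    benefit : per-criterion flag, True if higher values are better
    """
    n_crit = len(weights)
    # 1. Vector-normalize each criterion column.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # 2. Ideal best and worst value per criterion.
    cols = list(zip(*v))
    best = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(n_crit)]
    worst = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(n_crit)]
    # 3. Closeness coefficient: distance to worst / (dist to best + dist to worst).
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical example: three edge nodes rated on
# [estimated power draw in W (cost), free CPU cores (benefit), free RAM in GB (benefit)].
nodes = [[40, 4, 8], [25, 2, 4], [60, 8, 16]]
scores = topsis_rank(nodes, weights=[0.5, 0.25, 0.25], benefit=[False, True, True])
print(scores)
```

A scheduler would then place the workload on the node with the highest closeness score; shifting weight toward the power criterion biases placement toward sustainability, toward the resource criteria toward performance.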

Sources

Distributedness based scheduling

LRScheduler: A Layer-aware and Resource-adaptive Container Scheduler in Edge Computing

CPU-Based Layout Design for Picker-to-Parts Pallet Warehouses

Energy-Optimized Scheduling for AIoT Workloads Using TOPSIS
