Research in real-time systems and edge computing increasingly focuses on optimizing energy efficiency, reducing latency, and improving overall system performance. Recent work highlights the integration of machine learning and reinforcement learning techniques for adaptive decision-making and real-time optimization. In particular, researchers are applying these techniques to vehicular services, opportunistic networks, and edge computing to improve routing protocols, task computation, and data aggregation.
Noteworthy papers include:

- RI-PIENO: a revised and improved framework for petrol-filling itinerary estimation and optimization, achieving significant cost savings and more efficient routing.
- Q-Learning-Based Time-Critical Data Aggregation Scheduling in IoT: a Q-learning framework for time-critical data aggregation in IoT networks, demonstrating up to 10.87% lower latency than state-of-the-art heuristic algorithms.
- Energy-Efficient Task Computation at the Edge for Vehicular Services: an optimization formulation for task computation and offloading in multi-access edge computing, significantly reducing user dissatisfaction and task interruptions.
- Energy-Efficient Routing Protocol in Vehicular Opportunistic Networks: a dynamic cluster-based routing approach using deep reinforcement learning that extends node lifetimes and reduces energy use.
- Model-Based Learning of Whittle Indices: a model-based algorithm for learning Whittle indices that outperforms existing Q-learning approaches in sample efficiency and computational cost.
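Several of the papers above rely on tabular Q-learning as the underlying decision mechanism. The sketch below illustrates the generic Q-learning update on a toy scheduling-style problem; it is not the formulation from any cited paper, and the environment (states as counts of pending aggregation slots, actions as which node transmits) is an assumption chosen purely for illustration.

```python
import random

# Toy tabular Q-learning sketch (illustrative only; not the cited
# papers' formulation). State = number of pending aggregation slots,
# action = hypothetical choice of which node transmits next.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
N_STATES, N_ACTIONS = 5, 3

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: only one action per state drains a pending slot."""
    drained = 1 if action == state % N_ACTIONS else 0
    next_state = max(state - drained, 0)
    reward = 1.0 if next_state < state else -0.1  # faster draining is rewarded
    return next_state, reward

def choose_action(state):
    """Epsilon-greedy action selection over the current Q-table."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

random.seed(0)
for _ in range(2000):
    s = N_STATES - 1
    while s > 0:                         # episode ends when nothing is pending
        a = choose_action(s)
        s2, r = step(s, a)
        # Standard Q-learning update: move Q[s][a] toward the bootstrapped target
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy in each state selects the one action that drains a slot, which is the same kind of latency-reducing schedule the Q-learning aggregation paper optimizes for, albeit in a far richer network model.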