Advancements in Vehicle-to-Everything Communication and Resource Allocation

The field of Vehicle-to-Everything (V2X) communication and resource allocation is advancing rapidly, driven by growing demand for efficient and reliable communication systems. Researchers are developing solutions to the challenges posed by complex wireless environments, high vehicle mobility, and latency-sensitive applications. A key focus is the use of analytical models and reinforcement learning (RL) algorithms to optimize resource allocation and improve the reliability of V2X transmission. In particular, Deep Q-Networks (DQN) and related machine learning techniques are showing promising results for V2X performance (an illustrative sketch follows the list below). Multi-agent methods and adaptive transmission designs are also being investigated to support ultra-reliable low-latency communication (URLLC) in scenarios such as smart factories and two-hop cooperative relaying.

Noteworthy papers include:

Analytical Model of NR-V2X Mode 2 with Re-Evaluation Mechanism, which establishes an analytical model for evaluating the performance of NR-V2X Mode 2 with its re-evaluation mechanism.

Reinforcement Learning for Resource Allocation in Vehicular Multi-Fog Computing, which investigates RL algorithms for adaptive task allocation in multi-fog computing environments.

Power Control Based on Multi-Agent Deep Q Network for D2D Communication, which proposes a multi-agent DQN algorithm for adaptive power control in device-to-device communication.

Deep Q-Network for Optimizing NOMA-Aided Resource Allocation in Smart Factories with URLLC Constraints, which presents a DQN-based algorithm for resource allocation in smart factories under URLLC constraints.

Adaptive Cooperative Transmission Design for Ultra-Reliable Low-Latency Communications via Deep Reinforcement Learning, which develops an adaptive transmission design for two-hop cooperative communication systems.
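To make the DQN-based approach concrete, the sketch below shows a minimal deep Q-learning loop for choosing among a small set of resource blocks. It is illustrative only and not taken from any of the cited papers: the toy environment, the state features, the reward (which favours lightly loaded resource blocks), and all hyperparameters are assumptions chosen for readability.

```python
# Minimal DQN sketch for discrete resource-block selection (illustrative only).
# The toy environment, state features, and reward below are hypothetical and
# merely stand in for the V2X resource-allocation problems discussed above.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_RBS = 8          # hypothetical number of selectable resource blocks
STATE_DIM = 4      # hypothetical features, e.g. sensed load, SINR, speed, queue delay

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def toy_step(state, action):
    """Hypothetical environment: reward favours lightly loaded resource blocks."""
    load = torch.rand(N_RBS)                 # pretend per-RB sensed load
    reward = float(1.0 - load[action])       # less congested RB -> higher reward
    next_state = torch.rand(STATE_DIM)       # placeholder next observation
    return reward, next_state

def train_dqn(steps=200, gamma=0.95, eps=0.1, batch_size=32):
    qnet = QNet(STATE_DIM, N_RBS)
    target = QNet(STATE_DIM, N_RBS)
    target.load_state_dict(qnet.state_dict())
    opt = optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)
    loss_fn = nn.MSELoss()

    state = torch.rand(STATE_DIM)
    for t in range(steps):
        # Epsilon-greedy resource-block selection.
        if random.random() < eps:
            action = random.randrange(N_RBS)
        else:
            with torch.no_grad():
                action = int(qnet(state).argmax())

        reward, next_state = toy_step(state, action)
        buffer.append((state, action, reward, next_state))
        state = next_state

        if len(buffer) >= batch_size:
            batch = random.sample(buffer, batch_size)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])

            q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                q_target = r + gamma * target(s2).max(dim=1).values
            loss = loss_fn(q, q_target)
            opt.zero_grad()
            loss.backward()
            opt.step()

        if t % 50 == 0:
            target.load_state_dict(qnet.state_dict())  # periodic target sync

    return qnet

if __name__ == "__main__":
    trained = train_dqn()
```

The same pattern extends naturally to the multi-agent settings mentioned above by giving each device its own Q-network (or a shared one) over its local observations.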

Sources

Analytical Model of NR-V2X Mode 2 with Re-Evaluation Mechanism

Reinforcement Learning for Resource Allocation in Vehicular Multi-Fog Computing

Power Control Based on Multi-Agent Deep Q Network for D2D Communication

Deep Q-Network for Optimizing NOMA-Aided Resource Allocation in Smart Factories with URLLC Constraints

Adaptive Cooperative Transmission Design for Ultra-Reliable Low-Latency Communications via Deep Reinforcement Learning
