Advancements in Power Systems and Graph-Based Methods

The field of power systems and graph-based methods is seeing significant developments, driven by the integration of artificial intelligence, machine learning, and reinforcement learning techniques. Researchers are exploring approaches to optimize power grid control, demand response, and fault diagnosis, leveraging graph neural networks, distributed reinforcement learning, and related methods. Notably, combining graph-based models with reinforcement learning is enabling more efficient and resilient power systems, as well as improved solutions to combinatorial problems such as crew dispatch and vehicle routing. The application of these techniques to real-world settings, including post-disaster road assessment and power grid restoration, shows substantial potential for practical impact. Noteworthy papers include 'Power Grid Control with Graph-Based Distributed Reinforcement Learning', which proposes a framework for real-time grid management using graph neural networks and distributed reinforcement learning (see the sketch below), and 'Deep Reinforcement Learning for Real-Time Drone Routing in Post-Disaster Road Assessment Without Domain Knowledge', which presents an attention-based encoder-decoder model for real-time drone routing decisions in post-disaster road damage assessment.
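To make the graph-based control idea concrete, the following is a minimal sketch, written in plain PyTorch, of how a policy network might map per-substation node features to per-node actions via simple GCN-style message passing. It is an illustrative assumption of the general approach, not the architecture from 'Power Grid Control with Graph-Based Distributed Reinforcement Learning'; all class names, feature dimensions, and action counts are hypothetical.

```python
import torch
import torch.nn as nn


class GraphGridPolicy(nn.Module):
    """Hypothetical GCN-style policy: per-node (per-substation) action logits."""

    def __init__(self, in_dim: int, hidden_dim: int, n_actions: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (n_nodes, in_dim) node features (e.g. voltages, line loadings)
        # adj: (n_nodes, n_nodes) row-normalised adjacency with self-loops
        h = torch.relu(adj @ self.lin1(x))   # first round of neighbour averaging
        h = torch.relu(adj @ self.lin2(h))   # second message-passing round
        return self.head(h)                  # per-node action logits


# Toy usage: 4 substations, 6 features each, 3 discrete topology actions.
n = 4
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])
adj = torch.eye(n)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj[edges[:, 1], edges[:, 0]] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)    # row-normalise

policy = GraphGridPolicy(in_dim=6, hidden_dim=32, n_actions=3)
logits = policy(torch.randn(n, 6), adj)     # shape (4, 3)
actions = torch.distributions.Categorical(logits=logits).sample()  # one action per substation
```

Sampling one action per node is what makes the scheme naturally distributed: each substation agent acts on its own logits while the shared message-passing layers let decisions reflect neighbouring grid state.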

Sources

Revisiting Deep AC-OPF

Adaptation of Parameters in Heterogeneous Multi-agent Systems

Passivity Compensation: A Distributed Approach for Consensus Analysis in Heterogeneous Networks

Semi-on-Demand Transit Feeders with Shared Autonomous Vehicles and Reinforcement-Learning-Based Zonal Dispatching Control

Deep Reinforcement Learning for Real-Time Drone Routing in Post-Disaster Road Assessment Without Domain Knowledge

Selection of Optimal Number and Location of PMUs for CNN Based Fault Location and Identification

Generative Sequential Notification Optimization via Multi-Objective Decision Transformers

Exploring Variational Graph Autoencoders for Distribution Grid Data Generation

Power Grid Control with Graph-Based Distributed Reinforcement Learning

Deep Reinforcement Learning-Based Decision-Making Strategy Considering User Satisfaction Feedback in Demand Response Program

AutoGrid AI: Deep Reinforcement Learning Framework for Autonomous Microgrid Management

Drawing Trees and Cacti with Integer Edge Lengths on a Polynomial-Size Grid

Learning Optimal Crew Dispatch for Grid Restoration Following an Earthquake

Using Reinforcement Learning to Optimize the Global and Local Crossing Number

Degree Realization by Bipartite Cactus Graphs

Distributed Automatic Generation Control subject to Ramp-Rate-Limits: Anytime Feasibility and Uniform Network-Connectivity

TrajAware: Graph Cross-Attention and Trajectory-Aware for Generalisable VANETs under Partial Observations

VariSAC: V2X Assured Connectivity in RIS-Aided ISAC via GNN-Augmented Reinforcement Learning

Resilient Global Practical Fixed-Time Cooperative Output Regulation of Uncertain Nonlinear Multi-Agent Systems Subject to Denial-of-Service Attacks

Universal Graph Learning for Power System Reconfigurations: Transfer Across Topology Variations

Distributed Unknown Input Observer Design with Relaxed Conditions: Theory and Application to Vehicle Platooning
