Advances in Autonomous Systems and Cybersecurity

The field of autonomous systems and cybersecurity is evolving rapidly, with an emphasis on building more robust and resilient systems. Recent work applies large language models (LLMs) and multi-agent systems to applications such as power grid control, network monitoring, and incident response. A central challenge is ensuring the safety and reliability of these systems in the face of attacks or failures. To address it, researchers have proposed risk analysis techniques, threat modeling guidance, and defense mechanisms such as BlindGuard and Cowpox; a simple illustration of this style of risk analysis appears in the sketch below. There is also growing interest in more autonomous and adaptive designs, including self-evolving AI agents and agentic AI frameworks that learn and improve over time. Noteworthy papers include 'Risk Analysis Techniques for Governed LLM-based Multi-Agent Systems' and 'Towards Effective Offensive Security LLM Agents: Hyperparameter Tuning, LLM as a Judge, and a Lightweight CTF Benchmark', which present innovative approaches to risk analysis and offensive security, respectively.
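To make the idea of risk analysis and threat modeling for LLM-based multi-agent systems concrete, the following is a minimal, hypothetical sketch of a risk register: it enumerates per-agent threats and ranks them with a simple likelihood-times-impact score. The agent names, threat categories, and scores are illustrative assumptions and are not drawn from any of the cited papers.

```python
from dataclasses import dataclass

# Illustrative only: a toy risk register for an LLM-based multi-agent system.
# Agent names, threat categories, and scores are assumptions for this sketch.

@dataclass
class Threat:
    agent: str        # which agent the threat applies to
    category: str     # e.g. prompt injection, tool misuse, poisoned telemetry
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        # Classic likelihood x impact scoring
        return self.likelihood * self.impact


threats = [
    Threat("planner", "prompt injection via retrieved documents", 4, 4),
    Threat("executor", "unsafe tool invocation (shell, network)", 3, 5),
    Threat("monitor", "poisoned telemetry leading to bad decisions", 2, 5),
]

# Rank threats so governance effort goes to the highest-risk items first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.agent:10s} {t.category:45s} risk={t.risk}")
```

A real governance pipeline would of course go further (mitigation tracking, runtime monitoring, re-scoring over time), but the same enumerate-score-rank structure underlies most threat modeling exercises.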

Sources

Comparison of Information Retrieval Techniques Applied to IT Support Tickets

Risk Analysis Techniques for Governed LLM-based Multi-Agent Systems

Towards Effective Offensive Security LLM Agents: Hyperparameter Tuning, LLM as a Judge, and a Lightweight CTF Benchmark

Semantic Reasoning Meets Numerical Precision: An LLM-Powered Multi-Agent System for Power Grid Control

From Imperfect Signals to Trustworthy Structure: Confidence-Aware Inference from Heterogeneous and Reliability-Varying Utility Data

Safety of Embodied Navigation: A Survey

When AIOps Become "AI Oops": Subverting LLM-driven IT Operations via Telemetry Manipulation

Dual-Head Physics-Informed Graph Decision Transformer for Distribution System Restoration

Methodology for Business Intelligence Solutions in Internet Banking Companies

Integrating Neurosymbolic AI in Advanced Air Mobility: A Comprehensive Survey

A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems

Pentest-R1: Towards Autonomous Penetration Testing Reasoning Optimized via Two-Stage Reinforcement Learning

A Multi-Model Probabilistic Framework for Seismic Risk Assessment and Retrofit Planning of Electric Power Networks

A Survey on Agentic Service Ecosystems: Measurement, Analysis, and Optimization

BlindGuard: Safeguarding LLM-based Multi-Agent Systems under Unknown Attacks

Deep Reinforcement Learning with Local Interpretability for Transparent Microgrid Resilience Energy Management

Cowpox: Towards the Immunity of VLM-based Multi-Agent Systems

Enhance the machine learning algorithm performance in phishing detection with keyword features

Extending the OWASP Multi-Agentic System Threat Modeling Guide: Insights from Multi-Agent Security Research

AWorld: Dynamic Multi-Agent System with Stable Maneuvering for Robust GAIA Problem Solving

Securing Agentic AI: Threat Modeling and Risk Analysis for Network Monitoring Agentic AI System

NetMoniAI: An Agentic AI Framework for Network Security & Monitoring

Agentic AI Frameworks: Architectures, Protocols, and Design Challenges

Quantifying the Value of Seismic Structural Health Monitoring for post-earthquake recovery of electric power system in terms of resilience enhancement

Advancing Autonomous Incident Response: Leveraging LLMs and Cyber Threat Intelligence

REFN: A Reinforcement-Learning-From-Network Framework against 1-day/n-day Exploitations
