Advances in Software Engineering and Security

The field of software engineering and security is evolving rapidly, with a growing focus on leveraging large language models (LLMs) and other artificial intelligence (AI) techniques to improve software development, testing, and security. Recent research highlights the potential of LLMs to strengthen code evaluation metrics, vulnerability discovery, and security testing. Integrating LLMs with traditional software engineering techniques, such as static analysis, has shown promise for improving both the accuracy and the efficiency of development tasks, and new frameworks for automated attack tree-based security test generation and semantic-aware fuzzing demonstrate substantial progress in automating security testing (both patterns are sketched below). Taken together, the field is moving toward a more integrated, AI-driven approach aimed at improving the reliability, efficiency, and security of software systems.

Noteworthy papers include: LoCaL, which proposes a new benchmark for evaluating code evaluation metrics and highlights the need for metrics that are robust to surface bias; Synergizing Static Analysis with Large Language Models for Vulnerability Discovery and beyond, which demonstrates the potential of combining static analysis with LLMs to improve vulnerability discovery; and STAF, which automates security test case generation using LLMs within a four-step, self-corrective Retrieval-Augmented Generation (RAG) framework.
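
A minimal sketch of the static-analysis-plus-LLM pattern follows. This is not the pipeline from Synergizing Static Analysis with Large Language Models; the `Finding` structure and the `llm_complete` callable are hypothetical stand-ins for a real analyzer's output format and an LLM completion API.

```python
# Sketch: use an LLM to triage static-analysis findings, keeping only
# those the model confirms as likely true positives. All names here
# are illustrative assumptions, not any paper's actual interface.
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    line: int
    rule: str      # e.g. "CWE-89: SQL injection"
    snippet: str   # code excerpt flagged by the analyzer


def build_triage_prompt(finding: Finding) -> str:
    """Turn one static-analysis finding into an LLM triage prompt."""
    return (
        "You are reviewing a static-analysis finding.\n"
        f"Rule: {finding.rule}\n"
        f"Location: {finding.file}:{finding.line}\n"
        f"Code:\n{finding.snippet}\n\n"
        "Answer TRUE_POSITIVE or FALSE_POSITIVE, then justify briefly."
    )


def triage(findings, llm_complete):
    """llm_complete: any callable str -> str wrapping an LLM backend."""
    confirmed = []
    for finding in findings:
        verdict = llm_complete(build_triage_prompt(finding))
        if verdict.strip().upper().startswith("TRUE_POSITIVE"):
            confirmed.append((finding, verdict))
    return confirmed
```

One appeal of this division of labor is that the deterministic analyzer remains the sole source of candidate findings, while the LLM is used only as a filter, which bounds the damage a hallucinated answer can do.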
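
In the same hedged spirit, the sketch below illustrates LLM-guided, reasoning-driven input mutation of the kind the semantic-aware fuzzing work studies: the model is asked to mutate a seed while preserving its format so the mutant survives the target's parser. The format description, mutation hints, and `run_target` harness are illustrative assumptions, not the framework's actual interface.

```python
# Sketch: semantic-aware mutation for fuzzing via an LLM. The prompt
# constrains mutations to stay syntactically valid for the input format.
import random

MUTATION_HINTS = [
    "push a numeric field to its boundary value",
    "swap two structurally valid sub-elements",
    "duplicate an optional section",
]


def mutate_seed(seed: str, format_description: str, llm_complete) -> str:
    """Ask the LLM for one format-preserving mutation of the seed."""
    hint = random.choice(MUTATION_HINTS)
    prompt = (
        f"The following input conforms to this format: {format_description}\n"
        f"Input:\n{seed}\n\n"
        f"Produce one mutated variant that stays syntactically valid "
        f"but {hint}. Output only the mutated input."
    )
    return llm_complete(prompt)


def fuzz_once(seed: str, fmt: str, llm_complete, run_target):
    """run_target: str -> bool, True if the target crashed (hypothetical)."""
    mutant = mutate_seed(seed, fmt, llm_complete)
    return mutant, run_target(mutant)
```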

Sources

LoCaL: Countering Surface Bias in Code Evaluation Metrics

Synergizing Static Analysis with Large Language Models for Vulnerability Discovery and beyond

How Far Are We? An Empirical Analysis of Current Vulnerability Localization Approaches

LeakageDetector 2.0: Analyzing Data Leakage in Jupyter-Driven Machine Learning Pipelines

When Bugs Linger: A Study of Anomalous Resolution Time Outliers and Their Themes

Reading Between the Lines: Scalable User Feedback via Implicit Sentiment in Developer Prompts

Security smells in infrastructure as code: a taxonomy update beyond the seven sins

Detection of security smells in IaC scripts through semantics-aware code and language processing

SynapFlow: A Modular Framework Towards Large-Scale Analysis of Dendritic Spines

LLM-based Vulnerability Discovery through the Lens of Code Metrics

Semantic-Aware Fuzzing: An Empirical Framework for LLM-Guided, Reasoning-Driven Input Mutation

Intuition to Evidence: Measuring AI's True Impact on Developer Productivity

Demystifying the Evolution of Neural Networks with BOM Analysis: Insights from a Large-Scale Study of 55,997 GitHub Repositories

STAF: Leveraging LLMs for Automated Attack Tree-Based Security Test Generation

Investigating Security Implications of Automatically Generated Code on the Software Supply Chain

Developer Productivity With and Without GitHub Copilot: A Longitudinal Mixed-Methods Case Study
