Advances in Software Engineering and Large Language Models

Software engineering research is seeing significant developments as large language models (LLMs) are increasingly adopted. Recent studies have highlighted the importance of reproducibility in LLM-based research, focusing on mitigating reproducibility smells and introducing reproducibility maturity models. LLMs are also being applied to software engineering tasks such as vulnerability detection and automated patch backporting, though their reliability and limitations in these settings remain under investigation. Noteworthy papers in this area include ng-reactive-lint, a tool for detecting high-impact anti-patterns in Angular applications, and BackportBench, a comprehensive benchmark suite for patch backporting problems. Further research on the prevalence of LLM-assisted text in scholarly writing and on the economies of open intelligence in the model ecosystem is shedding light on the growing impact of LLMs on the research landscape.

Sources

ng-reactive-lint: Smarter Linting for Angular Apps

Large Language Models for Software Engineering: A Reproducibility Crisis

Large Language Models Cannot Reliably Detect Vulnerabilities in JavaScript: The First Systematic Benchmark and Evaluation

BackportBench: A Multilingual Benchmark for Automated Backporting of Patches

Estimating the prevalence of LLM-assisted text in scholarly writing

Towards Observation Lakehouses: Living, Interactive Archives of Software Behavior

Economies of Open Intelligence: Tracing Power & Participation in the Model Ecosystem

Quantitative Analysis of Technical Debt and Pattern Violation in Large Language Model Architectures

Has ACL Lost Its Crown? A Decade-Long Quantitative Analysis of Scale and Impact Across Leading AI Conferences
