Research in predictive maintenance and software security is increasingly centered on large language models (LLMs) and LLM-based agents. Recent work has applied LLMs to cleaning maintenance logs, detecting security vulnerabilities and security patches, and improving the reliability of smart contracts. LLMs have shown particular promise on generic log-cleaning tasks and in security patch detection, where some studies report significant reductions in false positive rates. Researchers have also probed the transferable vulnerability of source code models, proposing victim-agnostic approaches to generating practical adversarial samples. Other work has produced tools and frameworks for creating and analyzing smart contracts, such as Blockly2Hooks and QLCoder, while new taxonomies and detection methods target specific weaknesses, including signature replay vulnerabilities in smart contracts. Together, these advances stand to improve the reliability and security of software systems. Noteworthy papers include:
- A paper on using LLM agents to clean maintenance logs, which demonstrated their effectiveness on generic cleaning tasks.
- A comparative evaluation of LLMs and LLM-based agents for security patch detection, in which the Data-Aug LLM achieved the best overall performance.
- A paper exploring the transferable vulnerability of source code models, which proposed a victim-agnostic approach to generating practical adversarial samples.
- A study demystifying and detecting signature replay vulnerabilities in smart contracts, which introduced LASiR, a tool that detects these vulnerabilities automatically.