The field of large language models (LLMs) is advancing rapidly, with notable gains in their applications to software development and usability evaluation. Recent studies have demonstrated the potential of LLMs to improve the reliability of visual complexity assessment, generate descriptive names for REST API tests, and identify usability flaws at the development stage. The use of diagnostic prompting, warmup and dropout schedules, and dual-encoder rerankers has shown promising results in enhancing LLM performance. Notably, the alignment of LLM judgments with human cognitive walkthroughs and the identification of UX flaws directly in code highlight the feasibility of automated usability testing with LLMs. Challenges remain, including inconsistent severity judgments and the continued need for human oversight, but these advances are paving the way for more efficient and effective software development and usability evaluation.

Noteworthy papers:
- Neural Variable Name Repair: achieved a 43.1 percent exact match when generating descriptive replacement names for identifiers.
- Generating REST API Tests With Descriptive Names: showed that a rule-based approach can achieve high clarity ratings and perform on par with state-of-the-art LLM-based models.