Advances in Code Completion and Human-LLM Collaboration

Research on code completion and human-LLM collaboration is moving toward a deeper understanding of how Large Language Models (LLMs) can better assist human developers. Recent work explores new techniques for evaluating and enhancing LLM performance, such as measuring model confidence during code completion and training models to mimic human visual attention. These techniques have the potential to increase developer productivity and improve code quality. Noteworthy papers in this area include:

  • EyeMulator, which presents a technique for training CodeLLMs to mimic human visual attention, yielding improved performance on several software development tasks.
  • How do Humans and LLMs Process Confusing Code, which finds that spikes in LLM perplexity correlate with human neurophysiological responses indicating confusion, suggesting that LLMs and humans are confused by the same kinds of code (see the sketch after this list).
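As a rough illustration of the perplexity-based signal mentioned above, the sketch below computes per-token surprisal for a code snippet with an off-the-shelf causal code LLM and flags unusually "surprising" tokens. The model name, example snippet, and spike threshold are illustrative assumptions, not the exact setup used in the cited papers.

    # Minimal sketch: per-token surprisal as a proxy for model "confusion".
    # Assumptions: any HuggingFace causal code LLM; 2-sigma cutoff for spikes.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "bigcode/starcoderbase-1b"  # assumed model choice

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    code = "int x = (a > b) ? b : a;  // intended as max, but computes min"

    inputs = tokenizer(code, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab)

    # Surprisal of token t is -log p(token_t | tokens_<t); shift logits by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = inputs["input_ids"][0, 1:]
    surprisal = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)

    # Flag "perplexity spikes": tokens whose surprisal is far above the mean.
    threshold = surprisal.mean() + 2 * surprisal.std()  # illustrative cutoff
    for tok_id, s in zip(targets, surprisal):
        marker = " <-- spike" if s > threshold else ""
        print(f"{tokenizer.decode(int(tok_id))!r:>12}  {s.item():6.2f}{marker}")

Token positions flagged this way could then be compared against human signals (e.g., eye-tracking fixations or EEG responses) over the same code, which is the kind of correlation the confusing-code paper investigates.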

Sources

The Fools are Certain; the Wise are Doubtful: Exploring LLM Confidence in Code Completion

EyeMulator: Improving Code Language Models by Mimicking Human Visual Attention

How do Humans and LLMs Process Confusing Code?

VisiTrail: A Cognitive Visualization Tool for Time-Series Analysis of Eye Tracking Data from Attention Game
