The field of neural networks is moving toward more efficient and expressive models, with a focus on optimizing and adapting existing architectures. Recent work has explored tensor decompositions, low-rank adaptations, and subspace-based methods to improve the performance and scalability of neural networks, with promising results in reducing parameter redundancy, improving generalization, and enabling more efficient fine-tuning of large models. Notably, frameworks such as NeuronSeek and TensorGuide demonstrate enhanced stability and expressivity, methods like Subspace Boosting and PLoP improve model merging and adapter placement, and techniques like MoRA and ScalaBL address challenges in continual learning and uncertainty quantification. Together, these developments are enabling more efficient and effective models across a wide range of applications.
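As a concrete reference point for the low-rank adaptation techniques these papers build on, the sketch below shows a standard LoRA-style update in PyTorch: a frozen linear layer augmented with a trainable rank-r correction scaled by alpha/r. This is a minimal illustration under common assumptions; the class name, rank, scaling, and initialization are illustrative choices, not details taken from any of the papers above.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection, zero-initialized
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))  # only A and B receive gradients during fine-tuning
```

Because B is initialized to zero, the adapted layer starts out identical to the frozen base layer, which is the usual LoRA initialization and keeps early training stable.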
Noteworthy papers include:

- NeuronSeek, which replaces symbolic regression with tensor decomposition to discover optimal neuronal formulations, offering enhanced stability and faster convergence.
- TensorGuide, which generates two correlated low-rank LoRA matrices through a unified tensor-train (TT) structure, enhancing expressivity, generalization, and parameter efficiency.
- Subspace Boosting, which mitigates rank collapse in model merging by operating on the singular-value-decomposed task vector space.
- MoRA, which decomposes each rank-r update into r rank-1 components, enabling fine-grained utilization of a mixture of rank-1 experts while mitigating interference and redundancy (see the sketch after this list).
- ScalaBL, which performs Bayesian inference in an r-dimensional subspace, allowing scalable and efficient uncertainty quantification for large language models.
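To make MoRA's rank-1 decomposition more concrete, the sketch below is a minimal PyTorch illustration that parameterizes a rank-r update as a gated sum of r rank-1 components. The softmax gate, initialization, and dimensions are illustrative assumptions of this sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RankOneMixtureAdapter(nn.Module):
    """Illustrative sketch: a rank-r update expressed as r rank-1 components b_i a_i^T,
    combined through learnable gates so individual components can be weighted independently."""

    def __init__(self, in_features: int, out_features: int, r: int = 8):
        super().__init__()
        self.a = nn.Parameter(torch.randn(r, in_features) * 0.01)  # rank-1 input directions
        self.b = nn.Parameter(torch.zeros(r, out_features))        # rank-1 output directions
        self.gate = nn.Parameter(torch.zeros(r))                   # per-component mixture logits

    def forward(self, x):
        weights = torch.softmax(self.gate, dim=0)   # soft selection over the r rank-1 components
        coeffs = (x @ self.a.T) * weights           # (batch, r): gated projections onto each a_i
        return coeffs @ self.b                      # (batch, out_features): sum_i w_i (x . a_i) b_i

adapter = RankOneMixtureAdapter(768, 768, r=8)
delta = adapter(torch.randn(4, 768))  # low-rank update added to the frozen base layer's output
```

In this form each rank-1 component can be weighted, shared, or pruned on its own, which is the kind of fine-grained expert utilization the paper's description points to.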