The field of neural architecture search and deep learning is advancing rapidly, with a focus on improving the expressiveness and parameter efficiency of deep convolutional networks. Recent work integrates product units into residual blocks, enabling multiplicative feature interactions and potentially offering a more powerful representation of complex patterns than purely additive layers. In parallel, new neural architecture search methods use auxiliary evaluation metrics and multi-objective genetic algorithms to optimize network structures. Both lines of work report competitive performance on benchmark datasets, with gains in accuracy, efficiency, and robustness.

Noteworthy papers include "A Neural Architecture Search Method using Auxiliary Evaluation Metric based on ResNet Architecture", which demonstrates the effectiveness of using the loss value on the validation set as a secondary optimization objective alongside accuracy, and "Deep residual learning with product units", which proposes a deep product-unit residual neural network that achieves state-of-the-art performance on several datasets while using fewer parameters and less computation. Sketches of both ideas follow.
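To make the product-unit idea concrete, here is a minimal PyTorch sketch. It is an illustration, not the paper's implementation: a product unit computes a weighted product of its inputs, which can be evaluated stably in log space, and the residual block wraps it with a skip connection. The class names, the shift-by-epsilon positivity handling, and the use of fully connected rather than convolutional layers are all assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductUnit(nn.Module):
    """Product unit: each output is a weighted product of the inputs,
    output_j = prod_i x_i ** w_ji, computed in log space as exp(W @ log x)."""

    def __init__(self, in_features, out_features, eps=1e-6):
        super().__init__()
        # Small initial weights keep the initial products close to 1.
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.eps = eps

    def forward(self, x):
        # Shift inputs away from zero so the logarithm is defined
        # (an assumption; the paper may handle signs and zeros differently).
        log_x = torch.log(torch.abs(x) + self.eps)
        return torch.exp(F.linear(log_x, self.weight))

class ProductUnitResidualBlock(nn.Module):
    """Residual block with a multiplicative (product-unit) transform on the
    residual path and the usual identity skip connection."""

    def __init__(self, features):
        super().__init__()
        self.linear = nn.Linear(features, features)
        self.product = ProductUnit(features, features)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.linear(x))
        return x + self.product(h)  # y = x + F(x), with F multiplicative

block = ProductUnitResidualBlock(16)
y = block(torch.randn(4, 16))  # (batch, features) in, same shape out
```

The log-space formulation turns the product into a linear map followed by an exponential, so standard autograd and optimizers apply unchanged.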
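On the search side, here is a toy sketch of multi-objective selection with validation loss as the secondary objective, in the spirit of the NAS paper. The architecture encoding, the evaluate stub (which in the real method would train and validate each candidate), and the mutation scheme are all hypothetical; only the Pareto-dominance selection over (accuracy, validation loss) reflects the described idea.

```python
import random

# Hypothetical encoding: an architecture is a list of per-stage block choices.
BLOCK_CHOICES = ["basic", "bottleneck", "wide"]

def random_architecture(depth=8):
    return [random.choice(BLOCK_CHOICES) for _ in range(depth)]

def evaluate(arch):
    """Stand-in for training the candidate and measuring it on held-out data.
    The real method would train the network and return measured values."""
    acc = random.uniform(0.70, 0.95)     # placeholder accuracy
    val_loss = random.uniform(0.2, 1.0)  # placeholder validation loss
    return acc, val_loss

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and strictly better
    on at least one (maximize accuracy, minimize validation loss)."""
    acc_a, loss_a = a
    acc_b, loss_b = b
    no_worse = acc_a >= acc_b and loss_a <= loss_b
    better = acc_a > acc_b or loss_a < loss_b
    return no_worse and better

def pareto_front(population, scores):
    """Keep the candidates not dominated by any other candidate."""
    return [population[i] for i, s in enumerate(scores)
            if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

def mutate(arch, rate=0.2):
    return [random.choice(BLOCK_CHOICES) if random.random() < rate else b
            for b in arch]

# Toy search loop: evaluate, keep the Pareto front, refill by mutation.
population = [random_architecture() for _ in range(12)]
for generation in range(5):
    scores = [evaluate(a) for a in population]
    survivors = pareto_front(population, scores)
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(len(population) - len(survivors))]
```

Treating validation loss as a second objective rather than folding it into a single score lets the genetic algorithm retain architectures that trade a little accuracy for better-calibrated, lower-loss behavior.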