Advances in Neural Network Robustness and Efficiency

Research on neural networks is increasingly focused on the joint pursuit of robustness and efficiency. One active line of work analyzes how compressibility, through pruning, quantization, and related forms of compression, interacts with adversarial robustness; another develops dimensionality-reduction methods to make models more reliable at lower cost. There is also growing interest in the fundamental trade-offs governing low-capacity networks, in particular the relationship between capacity, sparsity, and robustness. On the methods side, notable contributions include entropy-based feature extraction for classical image-classification pipelines and exact reformulations that allow direct optimization of evaluation metrics in binary imbalanced classification. Two papers stand out: Loss-Complexity Landscape and Model Structure Functions, which establishes a mathematical analogy between information-theoretic constructs and statistical mechanics, and On the Interaction of Compressibility and Adversarial Robustness, which analyzes how different forms of compressibility affect a model's resistance to adversarial perturbations.
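
The compression-robustness interaction is straightforward to probe empirically. The sketch below is a minimal illustration of that experimental pattern, not the methodology of any paper listed here: it measures accuracy under a one-step FGSM attack, before and after magnitude pruning. The model, data loader, and epsilon value in the commented usage are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def fgsm(model, loss_fn, x, y, eps):
    """One-step FGSM: perturb inputs along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    # Assumes inputs live in [0, 1]; clamp keeps perturbed pixels valid.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_accuracy(model, loader, eps):
    """Classification accuracy on FGSM-perturbed inputs."""
    loss_fn = nn.CrossEntropyLoss()
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm(model, loss_fn, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage: `model` and `test_loader` are assumed to exist.
# Prune 50% of each linear layer's weights by magnitude, then re-evaluate
# to see how sparsity shifts robust accuracy.
# for module in model.modules():
#     if isinstance(module, nn.Linear):
#         prune.l1_unstructured(module, name="weight", amount=0.5)
# print(adversarial_accuracy(model, test_loader, eps=8 / 255))
```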

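Entropy-based feature extraction of the kind revived by the feature-fusion paper under Sources can be sketched with standard scikit-image primitives. The parameter choices below (HOG cell and block sizes, LBP radius, histogram bins) are illustrative assumptions, not that paper's configuration.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.measure import shannon_entropy

def fused_features(gray, P=8, R=1):
    """Concatenate entropy, HOG, and LBP-histogram features for one
    grayscale image (float array, values in [0, 1])."""
    # Global Shannon entropy: a single scalar summarizing intensity complexity.
    ent = np.array([shannon_entropy(gray)])

    # Histogram of Oriented Gradients: an edge/shape descriptor.
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)

    # Uniform LBP codes take values in {0, ..., P + 1}; summarize them
    # as a normalized histogram (a texture descriptor).
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

    return np.concatenate([ent, hog_vec, lbp_hist])

# Hypothetical usage: stack features for a dataset of grayscale images,
# then fit any classical model (e.g. an SVM) on the resulting matrix.
# X = np.stack([fused_features(img) for img in images])
```
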
Sources

Loss-Complexity Landscape and Model Structure Functions

Tight Bounds for Answering Adaptively Chosen Concentrated Queries

Feature Engineering is Not Dead: Reviving Classical Machine Learning with Entropy, HOG, and LBP Feature Fusion for Image Classification

Glitches in Decision Tree Ensemble Models

Exact Reformulation and Optimization for Direct Metric Optimization in Binary Imbalanced Classification

Exploring Superposition and Interference in State-of-the-Art Low-Parameter Vision Models

An open dataset of neural networks for hypernetwork research

Disrupting Semantic and Abstract Features for Better Adversarial Transferability

A Lower Bound for the Number of Linear Regions of Ternary ReLU Regression Neural Networks

Understanding Generalization, Robustness, and Interpretability in Low-Capacity Neural Networks

The Cost of Compression: Tight Quadratic Black-Box Attacks on Sketches for $\ell_2$ Norm Estimation

Evaluating Ensemble and Deep Learning Models for Static Malware Detection with Dimensionality Reduction Using the EMBER Dataset

On the Interaction of Compressibility and Adversarial Robustness

Large Learning Rates Simultaneously Achieve Robustness to Spurious Correlations and Compressibility

Self-similarity Analysis in Deep Neural Networks

SETOL: A Semi-Empirical Theory of (Deep) Learning

Information Entropy-Based Framework for Quantifying Tortuosity in Meibomian Gland Uneven Atrophy

Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods

Built with on top of