Advances in Robust Machine Learning and Inverse Problems

The field of machine learning and inverse problems is moving toward more robust and efficient methods for handling complex data and noise models. Recent research has focused on improving the performance of machine learning algorithms under heavy-tailed noise and oblivious contamination, and there is growing interest in uncertainty estimation and distribution-shift detection for inverse problems. Noteworthy papers in this area include:

- Understanding Robust Machine Learning for Nonparametric Regression with Heavy-Tailed Noise, which introduces a probabilistic effective hypothesis space for robust nonparametric regression.
- Information-Computation Tradeoffs for Noiseless Linear Regression with Oblivious Contamination, which provides formal evidence that the quadratic dependence on 1/α is inherent for efficient algorithms.
- Towards Distribution-Shift Uncertainty Estimation for Inverse Problems with Generative Priors, which proposes an instance-level, calibration-free uncertainty indicator that is sensitive to distribution shift.
- Recovery of Integer Images from Limited DFT Measurements with Lattice Methods, which develops theoretical and algorithmic foundations for recovering integer-valued images from limited DFT coefficients.
- Why the noise model matters: A performance gap in learned regularization, which analyzes the performance gap between learned variational regularization and the optimal affine reconstruction.
- Zero-Shot CFC: Fast Real-World Image Denoising based on Cross-Frequency Consistency, which proposes an efficient and effective method for real-world denoising.
- L2-Regularized Empirical Risk Minimization Guarantees Small Smooth Calibration Error, which gives the first theoretical proof that L2-regularized empirical risk minimization directly controls the smooth calibration error (see the illustrative sketch after this list).
- Distributional Consistency Loss: Beyond Pointwise Data Terms in Inverse Problems, which introduces a distributional consistency loss, a data-fidelity objective that replaces pointwise matching with distribution-level calibration.
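To make the L2-regularized ERM item concrete, the following is a minimal sketch of the standard objective it refers to: minimizing an empirical loss plus a squared-norm penalty, here for logistic regression by plain gradient descent, followed by a kernel-smoothed residual statistic used as a rough calibration indicator. The function names, the Gaussian kernel, and the bandwidth are illustrative assumptions; they are not the construction or the exact smooth calibration error analyzed in the cited paper.

```python
# Illustrative sketch only: L2-regularized ERM for logistic regression plus a
# kernel-smoothed calibration proxy. Details (kernel, bandwidth, names) are
# assumptions, not the cited paper's definitions.
import numpy as np

def fit_l2_erm(X, y, lam=0.1, lr=0.1, steps=2000):
    """Minimize (1/n) * sum log-loss(w; x_i, y_i) + lam * ||w||^2 by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grad = X.T @ (p - y) / n + 2.0 * lam * w   # logistic-loss gradient + L2 penalty
        w -= lr * grad
    return w

def smooth_calibration_proxy(p, y, bandwidth=0.1):
    """Locally averaged residual (y - p) over the score axis, as a rough calibration indicator."""
    diffs = p[:, None] - p[None, :]
    K = np.exp(-0.5 * (diffs / bandwidth) ** 2)    # Gaussian kernel between predicted scores
    K /= K.sum(axis=1, keepdims=True)              # row-normalize to get local averages
    smoothed_residual = K @ (y - p)
    return np.mean(np.abs(smoothed_residual))

# Synthetic usage example
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

w_hat = fit_l2_erm(X, y, lam=0.05)
p_hat = 1.0 / (1.0 + np.exp(-X @ w_hat))
print("calibration proxy:", smooth_calibration_proxy(p_hat, y))
```

The intuition this is meant to convey: the L2 penalty constrains the fitted predictor, and the paper's result is that this regularization alone already bounds a smoothed measure of miscalibration, without a separate recalibration step.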

Sources

Understanding Robust Machine Learning for Nonparametric Regression with Heavy-Tailed Noise

Information-Computation Tradeoffs for Noiseless Linear Regression with Oblivious Contamination

Towards Distribution-Shift Uncertainty Estimation for Inverse Problems with Generative Priors

Recovery of Integer Images from Limited DFT Measurements with Lattice Methods

Why the noise model matters: A performance gap in learned regularization

Zero-Shot CFC: Fast Real-World Image Denoising based on Cross-Frequency Consistency

$L_2$-Regularized Empirical Risk Minimization Guarantees Small Smooth Calibration Error

Distributional Consistency Loss: Beyond Pointwise Data Terms in Inverse Problems
