Interpretable Modeling in Remote Sensing and Neural Networks

The field of remote sensing and neural networks is moving toward more interpretable modeling approaches. Researchers are exploring ways to represent neural networks in more transparent and explainable forms, such as regional, lattice, and logical representations, driven by the need to understand and trust the decisions made by complex models. In remote sensing specifically, there is growing interest in methods that derive physically interpretable expressions from multi-spectral imagery, combining vision transformers with physics-guided constraints to ensure physical consistency and interpretability. Noteworthy papers include SatelliteFormula, which proposes a symbolic regression framework that couples a Vision Transformer-based encoder with physics-guided constraints, and Sparse Interpretable Deep Learning with LIES Networks, which introduces a fixed neural network architecture with interpretable primitive activations, optimized to model symbolic expressions.
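
To make the idea of physics-guided constraints concrete, the sketch below shows one common way such constraints are imposed during training: a data-fit term plus a soft penalty on physically implausible predictions. This is a minimal illustration, not the SatelliteFormula implementation; the specific constraint used here (predicted surface reflectance must lie in [0, 1]) and the weighting parameter are illustrative assumptions.

```python
# Minimal sketch of a physics-guided loss: data fit plus a soft penalty
# that discourages physically implausible outputs (here, values outside
# the assumed valid reflectance range [0, 1]).
import torch

def physics_guided_loss(pred, target, lam=0.1):
    """Mean squared error plus a penalty for out-of-range predictions."""
    data_fit = torch.mean((pred - target) ** 2)
    below = torch.clamp(-pred, min=0.0)        # amount below 0
    above = torch.clamp(pred - 1.0, min=0.0)   # amount above 1
    physics_penalty = torch.mean(below ** 2 + above ** 2)
    return data_fit + lam * physics_penalty

# The penalty is zero for in-range predictions and grows with the violation.
pred = torch.tensor([0.2, 0.8, 1.3, -0.1])
target = torch.tensor([0.25, 0.75, 0.9, 0.05])
print(physics_guided_loss(pred, target))
```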

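The second ingredient, interpretable primitive activations, can be sketched in a similarly reduced form. The layer below applies a small bank of primitives to linear combinations of the inputs; the particular choice of primitives (identity, sine, exponential, safeguarded logarithm), the layer sizes, and the sparsity penalty are illustrative assumptions rather than the LIES paper's design. The point is only that, once the weights are driven to be sparse, the surviving connections can be read off as a symbolic expression.

```python
# Minimal sketch of a layer with interpretable primitive activations.
import torch
import torch.nn as nn

class PrimitiveLayer(nn.Module):
    def __init__(self, in_dim, units_per_primitive=2):
        super().__init__()
        self.primitives = [
            lambda x: x,                               # identity
            torch.sin,                                 # sine
            torch.exp,                                 # exponential
            lambda x: torch.log(torch.abs(x) + 1e-6),  # safeguarded log
        ]
        self.units_per_primitive = units_per_primitive
        self.linear = nn.Linear(in_dim, units_per_primitive * len(self.primitives))

    def forward(self, x):
        z = self.linear(x)
        chunks = torch.split(z, self.units_per_primitive, dim=-1)
        return torch.cat([f(c) for f, c in zip(self.primitives, chunks)], dim=-1)

model = nn.Sequential(PrimitiveLayer(3), nn.Linear(8, 1))
x = torch.randn(4, 3)
print(model(x).shape)  # torch.Size([4, 1])

# An L1 penalty on the weights encourages sparsity, keeping the recovered
# expression compact and readable.
l1 = sum(p.abs().sum() for p in model.parameters())
```
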
Sources

Regional, Lattice and Logical Representations of Neural Networks

SatelliteFormula: Multi-Modal Symbolic Regression from Remote Sensing Imagery for Physics Discovery

Sparse Interpretable Deep Learning with LIES Networks for Symbolic Regression

Retrieval of Surface Solar Radiation through Implicit Albedo Recovery from Temporal Context
