Research at the intersection of remote sensing and neural networks is moving toward more interpretable modeling. One line of work seeks to represent neural networks in more transparent, explainable forms, such as regional, lattice, and logical representations, driven by the need to understand and trust the decisions of complex models. Within remote sensing specifically, there is growing interest in methods that derive physically interpretable expressions from multi-spectral imagery, typically by combining Vision Transformer encoders with physics-guided constraints that enforce physical consistency while preserving interpretability.

Noteworthy papers in this area include SatelliteFormula, which proposes a symbolic regression framework that pairs a Vision Transformer-based encoder with physics-guided constraints, and Sparse Interpretable Deep Learning with LIES Networks, which introduces a fixed neural network architecture whose interpretable primitive activations are optimized to model symbolic expressions.
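To make the idea of fixed, interpretable primitive activations more concrete, below is a minimal PyTorch sketch of a LIES-style layer. The specific primitive set (identity, sine, exponential, logarithm), the numerical clamping, the class names, and the toy training loop are illustrative assumptions based on the description above, not the published architecture.

```python
import torch
import torch.nn as nn

class PrimitiveLayer(nn.Module):
    """A layer whose units apply fixed, interpretable primitives
    (identity, sin, exp, log -- an assumed set) to learned linear
    combinations of the inputs."""

    def __init__(self, in_features: int, units_per_primitive: int = 4):
        super().__init__()
        self.primitives = [
            lambda x: x,                                   # identity
            torch.sin,                                     # sine
            lambda x: torch.exp(torch.clamp(x, max=5.0)),  # bounded exp
            lambda x: torch.log(torch.abs(x) + 1e-6),      # safe log
        ]
        # One linear map per primitive; L1 sparsity on these weights is
        # what makes the fitted expression readable afterwards.
        self.linears = nn.ModuleList(
            nn.Linear(in_features, units_per_primitive)
            for _ in self.primitives
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat(
            [f(lin(x)) for f, lin in zip(self.primitives, self.linears)],
            dim=-1,
        )

    def l1_penalty(self) -> torch.Tensor:
        return sum(lin.weight.abs().sum() for lin in self.linears)


class SymbolicNet(nn.Module):
    """Two primitive layers followed by a linear readout."""

    def __init__(self, in_features: int):
        super().__init__()
        self.layer1 = PrimitiveLayer(in_features)   # outputs 4 primitives * 4 units = 16
        self.layer2 = PrimitiveLayer(16)
        self.readout = nn.Linear(16, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.readout(self.layer2(self.layer1(x)))

    def l1_penalty(self) -> torch.Tensor:
        return self.layer1.l1_penalty() + self.layer2.l1_penalty()


# Toy example: fit y = sin(2*x0) + 0.5*x1. The L1 term drives most
# weights to zero so the surviving terms can be read off symbolically.
torch.manual_seed(0)
X = torch.rand(512, 2) * 4 - 2
y = torch.sin(2 * X[:, :1]) + 0.5 * X[:, 1:2]

model = SymbolicNet(in_features=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y) + 1e-4 * model.l1_penalty()
    loss.backward()
    opt.step()
```

After training, weights driven to (near) zero by the sparsity penalty can be pruned, and the remaining paths through the primitive activations can be written out as a closed-form expression, which is the sense in which such architectures are "optimized to model symbolic expressions."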