Research on air quality modeling and fairness is moving toward models that are both more accurate and more interpretable. Recent work pursues two themes: improving the transferability of models across engines, sites, seasons, and sensors, and making air pollution forecasters and fairness-aware learners easier to interpret. Contributions include a Bayesian calibration of engine-out NOx models that adapts to new conditions without retraining; RESPIRE, a provably outlier-resistant semi-parametric regression technique for calibrating low-cost air-quality sensors; SX-GeoTree, a self-explaining geospatial regression tree that improves the spatial similarity of feature attributions; a physics-guided spatiotemporal decoupling approach for interpretable air pollution forecasting; evolved sample weights for bias mitigation; and an interpretable, fair clustering framework that builds fairness constraints into the structure of decision trees while handling multiple sensitive attributes.

Noteworthy papers:

- Bayesian calibration of engine-out NOx models: significantly more accurate than conventional non-adaptive GP models, with no retraining required (adaptation idea sketched below).
- RESPIRE: outperforms the baseline calibration methods popular in the literature in cross-site, cross-season, and cross-sensor settings (general idea sketched below).
- SX-GeoTree: maintains competitive predictive accuracy while improving the spatial evenness of residuals and doubling attribution consensus.
- Physics-guided spatiotemporal decoupling: consistently outperforms state-of-the-art baselines across multiple forecasting horizons.
- Evolved sample weights for bias mitigation: produces models with better trade-offs between fairness and predictive performance than alternative weighting methods (sketched below).
- Interpretable and fair clustering: competitive clustering performance with improved fairness, plus interpretability and support for multiple sensitive attributes (sketched below).
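To make the "adaptation without retraining" idea concrete, here is a minimal sketch, not the paper's method: a Gaussian process is fit once to reference-engine NOx data, and a new engine is accommodated by a conjugate Bayesian update of a scalar offset on top of the frozen GP. All data are synthetic and the prior/noise variances are assumed values chosen for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "reference engine" data: engine-out NOx vs. speed and load.
X_ref = rng.uniform([1000.0, 0.1], [3000.0, 1.0], size=(200, 2))
y_ref = 0.05 * X_ref[:, 0] * X_ref[:, 1] + rng.normal(0.0, 2.0, 200)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[500.0, 0.3]) + WhiteKernel(noise_level=4.0),
    normalize_y=True,
).fit(X_ref, y_ref)

# A "new engine" whose NOx runs 8 units higher: adapt via a conjugate
# Gaussian update of a scalar offset instead of refitting the GP.
mu, var = 0.0, 25.0        # prior mean/variance of the offset (assumed)
noise_var = 4.0            # assumed measurement-noise variance
X_new = rng.uniform([1000.0, 0.1], [3000.0, 1.0], size=(20, 2))
y_new = 0.05 * X_new[:, 0] * X_new[:, 1] + 8.0 + rng.normal(0.0, 2.0, 20)

for x, y in zip(X_new, y_new):
    resid = y - gp.predict(x[None, :])[0]            # observed offset sample
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)   # posterior variance
    mu = post_var * (mu / var + resid / noise_var)   # posterior mean
    var = post_var                                   # sequential update

adapted = gp.predict(X_new) + mu                     # calibrated predictions
print(f"learned offset: {mu:.2f} (true offset 8.0)")
```

Because only the two offset statistics change, the adaptation is a constant-time update per observation, which is what makes "no retraining" possible in this toy setup.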
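RESPIRE's actual estimator is not reproduced here; the following sketch only illustrates the generic outlier-resistant semi-parametric recipe the summary alludes to: a spline basis supplies the non-parametric part, and a Huber loss supplies the robustness. The data, the humidity-driven nonlinearity, and the injected gross errors are all synthetic assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(1)

# Columns: raw PM2.5 from the low-cost sensor, temperature, relative humidity.
X = np.column_stack([rng.uniform(5, 80, 500),
                     rng.uniform(0, 40, 500),
                     rng.uniform(20, 95, 500)])
# Reference-grade target with a humidity-driven nonlinearity, plus outliers.
y = 0.8 * X[:, 0] + 0.002 * X[:, 2] ** 2 + rng.normal(0.0, 1.0, 500)
y[rng.choice(500, 25, replace=False)] += rng.normal(0, 30, 25)  # gross errors

model = make_pipeline(
    SplineTransformer(degree=3, n_knots=6),  # non-parametric basis expansion
    StandardScaler(),
    HuberRegressor(epsilon=1.35, max_iter=500),  # robust to injected outliers
).fit(X, y)

print("calibrated readings:", model.predict(X[:3]))
```

The Huber loss caps the influence of the 25 corrupted points, so the spline fit tracks the humidity curve rather than the outliers; a plain least-squares fit on the same features would be pulled toward them.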
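For the evolved-sample-weights result, here is a minimal sketch of the idea rather than the paper's evolutionary operators or fitness function: a (1+1)-style loop mutates per-sample weights and keeps a candidate only if it improves an assumed accuracy-minus-demographic-parity-gap objective on a synthetic biased dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

n = 600
group = rng.integers(0, 2, n)                    # sensitive attribute
X = np.column_stack([rng.normal(group, 1.0), rng.normal(0.0, 1.0, n)])
y = (X[:, 0] + rng.normal(0.0, 1.0, n) > 0.5).astype(int)  # biased labels

def fitness(w):
    clf = LogisticRegression().fit(X, y, sample_weight=w)
    pred = clf.predict(X)
    acc = (pred == y).mean()
    dp_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc - dp_gap                          # assumed trade-off objective

w = np.ones(n)
best = fitness(w)
for _ in range(40):                              # evolutionary loop
    child = np.clip(w * np.exp(rng.normal(0, 0.2, n)), 0.05, 20.0)  # mutate
    f = fitness(child)
    if f > best:                                 # (1+1) elitist selection
        w, best = child, f

print(f"best fitness (accuracy - DP gap): {best:.3f}")
```

The appeal of weighting-based mitigation is visible even in this toy: the downstream learner is untouched, and the fairness/accuracy trade-off is steered entirely through the evolved weight vector.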
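Finally, a rough sketch of what "fairness constraints in the structure of decision trees" can mean for clustering; this is an illustrative greedy construction under assumed penalties, not the framework's algorithm. Leaves of an axis-aligned tree act as clusters, and each split is scored by within-cluster variance plus a penalty (weight LAM, an assumed value) on sensitive-attribute imbalance in the children.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
s = rng.integers(0, 2, 200)          # sensitive attribute (two groups)
LAM = 2.0                            # assumed fairness penalty weight

def cost(idx):
    sse = ((X[idx] - X[idx].mean(0)) ** 2).sum()       # compactness term
    imbalance = abs(s[idx].mean() - s.mean())          # deviation from parity
    return sse + LAM * len(idx) * imbalance

def split(idx, depth):
    if depth == 0 or len(idx) < 20:
        return [idx]                                   # leaf = one cluster
    best = None
    for j in range(X.shape[1]):                        # scan split candidates
        for t in np.quantile(X[idx, j], [0.25, 0.5, 0.75]):
            left, right = idx[X[idx, j] <= t], idx[X[idx, j] > t]
            if len(left) < 10 or len(right) < 10:
                continue
            c = cost(left) + cost(right)
            if best is None or c < best[0]:
                best = (c, left, right)
    if best is None or best[0] >= cost(idx):           # split only if it helps
        return [idx]
    return split(best[1], depth - 1) + split(best[2], depth - 1)

clusters = split(np.arange(len(X)), depth=3)
for k, idx in enumerate(clusters):
    print(f"cluster {k}: size={len(idx)}, group-1 share={s[idx].mean():.2f}")
```

Because every cluster is a conjunction of threshold tests along its root-to-leaf path, the assignments are interpretable by construction, and adding one imbalance term per sensitive attribute is how such a tree can handle several of them at once.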