The field of artificial intelligence is placing greater emphasis on trust and fairness, with a focus on methods and frameworks that ensure the reliability and transparency of AI systems. Recent research has explored trust-based networks, fairness-aware evidential learning, and subjective logic to assess the trustworthiness of AI training datasets and to promote fairness in machine learning models. There is also growing interest in certifying individual fairness through neural network training and in promoting trustworthy AI-enabled systems through data curation. Noteworthy papers in this area include:

- Trust@Health, which proposes a trust-based multilayered network for scalable healthcare service management.
- Fairness-Aware Multi-view Evidential Learning with Adaptive Prior, which introduces an adaptive prior based on the training trajectory to flexibly calibrate a biased evidence-learning process.
- Correct-By-Construction: Certified Individual Fairness through Neural Network Training, which proposes a framework that formally guarantees individual fairness throughout training.
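To make the subjective-logic idea concrete, the sketch below shows the standard binomial-opinion mapping from Jøsang's subjective logic, where positive and negative observations about a dataset (e.g. audit passes and failures) are converted into belief, disbelief, and explicit uncertainty, and then projected to an expected trust score. This is a minimal illustration of the general formalism, not the specific trust model used in any of the papers above; the evidence counts, base rate `a`, and prior weight `W` are illustrative assumptions.

```python
def opinion_from_evidence(r, s, a=0.5, W=2.0):
    """Map positive (r) and negative (s) evidence counts to a
    subjective-logic binomial opinion (belief, disbelief, uncertainty).
    W is the non-informative prior weight; a is the base rate."""
    total = r + s + W
    b = r / total  # belief: support for trustworthiness
    d = s / total  # disbelief: support against trustworthiness
    u = W / total  # uncertainty: shrinks as evidence accumulates
    return b, d, u, a

def expected_trust(b, d, u, a):
    # Projected probability: belief plus the base rate's share of uncertainty.
    return b + a * u

# Hypothetical dataset audit: 8 positive checks, 1 negative check.
b, d, u, a = opinion_from_evidence(8, 1)
print(round(expected_trust(b, d, u, a), 3))  # → 0.818
```

Note that with little evidence the uncertainty term dominates and the score stays near the base rate, which is the property that makes this representation useful for assessing sparsely audited training data.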