The field of Indic language understanding is moving toward more accurate and efficient models for low-resource languages. Recent research has focused on creating benchmarks and datasets, such as IndicParam and ELR-1000, to evaluate how large language models perform on these languages. There is also growing emphasis on hallucination detection and on religious bias in multilingual models. Community-driven initiatives like AdiBhashaa are noteworthy as well, promoting more equitable AI research by centering local expertise.

Notable papers include:
- Minimal-Edit Instruction Tuning for Low-Resource Indic GEC, which proposes an augmentation-free setup for grammatical error correction.
- IndicParam, a benchmark for evaluating LLMs on low-resource Indic languages, which shows that even top-performing models struggle with them.
- ELR-1000 and AdiBhashaa, which provide culturally grounded datasets and benchmarks for endangered languages and promote equitable AI research.