The field of autonomous robot exploration and mapping is advancing rapidly, with a focus on improving the efficiency and accuracy of exploration in complex environments. Recent work integrates hierarchical map representations, attention-based deep reinforcement learning, and novel reward mechanisms to enable more effective exploration. There is also growing interest in multi-robot systems, with research focusing on task coordination, trajectory execution, and active target discovery. These advances have the potential to significantly impact applications including robotics, logistics, and environmental monitoring.

Noteworthy papers include HEADER, which presents an attention-based reinforcement learning approach for efficient exploration of large-scale environments; SEA, which combines semantic map prediction with a reinforcement-learning-based hierarchical exploration policy for active robot exploration; and IMAS$^2$, which introduces a two-layer optimization structure for joint agent selection and information-theoretic coordinated perception in Dec-POMDPs.
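To make the attention-based ingredient shared by methods like HEADER concrete, the sketch below scores candidate frontier viewpoints with self-attention and picks the next exploration goal. This is a minimal illustration, not any of the cited papers' actual architectures: the class name `FrontierAttentionPolicy`, the layer sizes, and the choice of per-frontier input features are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class FrontierAttentionPolicy(nn.Module):
    """Hypothetical sketch of an attention-based exploration policy.

    Each candidate frontier is embedded from a small feature vector
    (e.g., relative position, local unknown-cell count, path cost);
    self-attention lets each candidate's score depend on the others.
    """

    def __init__(self, feat_dim: int = 32, embed_dim: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)      # per-frontier features -> tokens
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)             # one logit per frontier

    def forward(self, frontier_feats: torch.Tensor) -> torch.Tensor:
        # frontier_feats: (batch, n_frontiers, feat_dim)
        tokens = self.embed(frontier_feats)
        attended, _ = self.attn(tokens, tokens, tokens)  # self-attention over candidates
        logits = self.score(attended).squeeze(-1)        # (batch, n_frontiers)
        return torch.softmax(logits, dim=-1)             # visit probability per frontier

# Usage: pick the highest-probability frontier as the next exploration goal.
policy = FrontierAttentionPolicy()
feats = torch.randn(1, 10, 32)                           # 10 candidate frontiers (dummy features)
probs = policy(feats)
goal_idx = int(probs.argmax(dim=-1))
```

In practice a policy of this shape would be trained with reinforcement learning against an exploration reward such as newly observed map area per unit travel cost; the self-attention step is what lets the planner weigh candidates jointly rather than greedily, which is one reason attention-based planners are attractive for large-scale environments.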