The field of robot manipulation is moving toward more efficient and effective policy learning methods. Recent work improves performance on manipulation tasks, including single- and dual-arm settings, through new architectures and training techniques. One notable trend is the combination of 3D visual scene representations with flow matching for trajectory prediction, which has substantially sped up both training and inference. Another active area is the development of benchmarks for evaluating manipulation policies, with a focus on sim-to-real transfer and generalization.

Noteworthy papers include:

- 3D FlowMatch Actor, which achieves state-of-the-art performance on the bimanual PerAct2 benchmark and sets a new state of the art across 74 RLBench tasks.
- OmniD, which proposes a multi-view fusion framework that synthesizes image observations into a unified bird's-eye-view representation, reporting average improvements over the best baseline in in-distribution, out-of-distribution, and few-shot experiments.
- FBI, which introduces a visuotactile imitation learning framework that dynamically fuses tactile interactions with visual observations through motion dynamics, outperforming baseline methods in both simulation and the real world.
- Mind and Motion Aligned, which proposes a benchmark unifying the evaluation of task planning and low-level control in a simulated kitchen environment with a mobile manipulator robot.
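To make the flow-matching idea behind this trend concrete, here is a minimal, self-contained sketch (not the method of any paper above): a velocity field is regressed onto the displacement between a noise sample and an expert trajectory along a straight-line interpolation path, and at inference a trajectory is generated by Euler-integrating the learned field from noise. The toy data, the linear model, and all names are hypothetical stand-ins; real systems use neural velocity fields conditioned on 3D scene features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expert trajectories": straight-line 2-D end-effector paths toward a
# fixed goal, flattened into vectors (hypothetical stand-in for demo data).
T, D = 8, 2                                   # 8 waypoints, 2-D positions
goal = np.array([1.0, 0.5])

def sample_trajectory():
    start = rng.normal(0.0, 0.1, size=D)
    alphas = np.linspace(0.0, 1.0, T)[:, None]
    return ((1 - alphas) * start + alphas * goal).ravel()   # shape (T*D,)

X1 = np.stack([sample_trajectory() for _ in range(512)])    # target samples

# Flow matching: regress a velocity field v(x_t, t) onto (x1 - x0) along the
# interpolation path x_t = (1 - t) x0 + t x1, with x0 ~ N(0, I).
# Here v is just a linear model in features [x_t, t, 1] -- a deliberately
# tiny sketch, trained with plain SGD.
dim = T * D
W = np.zeros((dim + 2, dim))
lr = 0.2
for step in range(4000):
    idx = rng.integers(0, len(X1), size=64)
    x1 = X1[idx]
    x0 = rng.normal(size=x1.shape)
    t = rng.uniform(size=(len(idx), 1))
    xt = (1 - t) * x0 + t * x1
    feats = np.concatenate([xt, t, np.ones_like(t)], axis=1)
    target = x1 - x0                          # conditional velocity target
    grad = feats.T @ (feats @ W - target) / len(idx)
    W -= lr * grad

# Inference: integrate dx/dt = v(x, t) from noise with a few Euler steps.
def predict_trajectory(n_steps=10):
    x = rng.normal(size=(1, dim))
    for k in range(n_steps):
        t = np.full((1, 1), k / n_steps)
        feats = np.concatenate([x, t, np.ones_like(t)], axis=1)
        x = x + (1.0 / n_steps) * (feats @ W)
    return x.reshape(T, D)

traj = predict_trajectory()                   # one generated trajectory
```

Because the ODE is integrated in a fixed, small number of steps, sampling is much cheaper than the long denoising chains of diffusion policies, which is the speed advantage the trend above refers to.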