The field of autonomous driving is moving toward enhanced perception, with a focus on cooperative perception, infrastructure-based sensor placement, and improved 3D object detection. Researchers are exploring monocular traffic cameras, heterogeneous multi-modal infrastructure sensors, and self-supervised pre-training to improve scene representation and extend perception range. New datasets, such as those for lane detection, end-to-end autonomous parking, and drone-derived traffic analysis, are also driving innovation in the field. Noteworthy papers include "Enhanced Cooperative Perception Through Asynchronous Vehicle to Infrastructure Framework", which proposes a V2I framework that uses monocular traffic cameras to detect 3D objects, and "InSPE: Rapid Evaluation of Heterogeneous Multi-Modal Infrastructure Sensor Placement", which introduces a set of perception surrogate metrics to rapidly assess perception effectiveness across diverse infrastructure and environmental scenarios.