The field of computer vision is seeing rapid progress in event-based vision and 3D scene reconstruction. One key direction is the development of new datasets and simulation pipelines for event-based vision, which enable the generation of high-fidelity event streams and accelerate the training of event vision models. Another is the improvement of 3D scene reconstruction methods, including more efficient and accurate algorithms for novel view synthesis and grasp generation.

Noteworthy papers in this area include MTevent, which introduces a dataset for 6D pose estimation and moving-object detection in highly dynamic environments, and MutualNeRF, which uses mutual information theory to improve the performance of Neural Radiance Fields (NeRF) under limited samples. Other notable works include GS2E, which generates high-fidelity event streams using 3D Gaussian Splatting, and V2V, which enables efficient video-to-voxel simulation for event-based vision.
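To make the simulation idea concrete, the sketch below shows the standard event-camera model that video-to-event simulators build on: a pixel emits an event each time its log-intensity changes by more than a contrast threshold since the last event at that pixel. This is a generic, minimal illustration of the underlying principle, not the actual algorithm of GS2E or V2V; the function name, parameters, and toy data are all hypothetical.

```python
import numpy as np

def video_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Simulate events from grayscale frames with the contrast-threshold model.

    A pixel fires an event whenever its log-intensity drifts more than C
    away from a per-pixel reference, which is updated after each event.
    Generic sketch only; real simulators (e.g. the pipelines discussed
    above) add temporal interpolation, noise models, and refractory periods.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []  # (t, y, x, polarity)
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_f = np.log(frame.astype(np.float64) + eps)
        diff = log_f - log_ref
        n = (np.abs(diff) // C).astype(int)  # number of threshold crossings
        ys, xs = np.nonzero(n)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.extend((t, y, x, pol) for _ in range(n[y, x]))
            log_ref[y, x] += pol * n[y, x] * C  # move reference toward new value
    return events

# Toy example: one pixel brightens by a factor of 4 between two frames,
# so with C = 0.2 it crosses the threshold floor(ln 4 / 0.2) = 6 times.
f0 = np.full((2, 2), 10.0)
f1 = f0.copy()
f1[0, 0] = 40.0
evs = video_to_events([f0, f1], [0.0, 0.01], C=0.2)
# evs holds 6 positive-polarity events at pixel (0, 0)
```

Binning the resulting `(t, y, x, polarity)` tuples into a spatiotemporal grid is what yields the voxel representations that event vision models are typically trained on.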