Autonomous vehicle research is advancing rapidly, with a sustained focus on improving safety and efficiency. Recent work concentrates on trajectory prediction, crash detection, and pedestrian crossing intention prediction. Notably, frameworks such as V2X-RECT use redundant-interaction filtering and tracking-error correction to improve trajectory prediction in high-density traffic. Another trend is real-time lane-level crash detection, which localizes crashes accurately and issues timely warnings. Finally, multimodal fusion networks and attention-guided cross-modal interaction transformers are being applied to pedestrian crossing intention prediction, with strong reported performance.
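To make the cross-modal attention idea underlying several of these works concrete, here is a minimal sketch of scaled dot-product cross-attention, where one modality's features (e.g., visual) attend over another's (e.g., motion). The function names, vector dimensions, and toy inputs are illustrative assumptions, not the architecture of any paper summarized here.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention.

    queries: features of modality A (list of d-dim vectors)
    keys/values: features of modality B (list of d-dim vectors)
    Returns one fused vector per query.
    """
    d = len(keys[0])
    fused_all = []
    for q in queries:
        # Similarity between this query and every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Attention-weighted sum of the value vectors.
        fused = [sum(w * v[j] for w, v in zip(weights, values))
                 for j in range(len(values[0]))]
        fused_all.append(fused)
    return fused_all

# Toy example: fuse 2 hypothetical "visual" embeddings with 3 "motion" embeddings.
visual = [[1.0, 0.0], [0.0, 1.0]]
motion = [[0.5, 0.5], [1.0, -1.0], [0.0, 2.0]]
fused = cross_attention(visual, motion, motion)
print(len(fused), len(fused[0]))  # one fused 2-dim vector per visual query
```

Real systems stack this with multiple heads, learned projections, and feed-forward layers; the sketch keeps only the core interaction step.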
Some noteworthy papers in this area include:
- V2X-RECT, which reports significant improvements in trajectory prediction over state-of-the-art methods.
- Real-Time Lane-Level Crash Detection on Freeways Using Sparse Telematics Data, which detects crashes with a 75% identification rate and accurate lane-level localization.
- GContextFormer, which proposes a global context-aware hybrid multi-head attention approach for multimodal trajectory prediction, with greater robustness and gains in high-curvature and transition zones.
- Hierarchical Spatio-Temporal Attention Network with Adaptive Risk-Aware Decision for Forward Collision Warning, which achieves an F1 score of 0.912 with a low false alarm rate of 8.2%.
- Pedestrian Crossing Intention Prediction Using Multimodal Fusion Network and Multi-Context Fusion Transformer, which achieve superior performance compared to baseline methods.
- ACIT, which leverages six visual and motion modalities and reaches accuracy rates of 70% and 89% on the JAADbeh and JAADall datasets, respectively.
- Hybrid SIFT-SNN for Efficient Anomaly Detection of Traffic Flow-Control Infrastructure, which achieves 92.3% classification accuracy at a per-frame inference time of 9.5 ms.
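To make metrics like the F1 score and false alarm rate cited above concrete, the sketch below computes both from raw confusion-matrix counts. The counts are made-up illustrative numbers, not values taken from any of the papers.

```python
def f1_and_false_alarm_rate(tp, fp, fn, tn):
    # Precision: fraction of raised warnings that were real events.
    precision = tp / (tp + fp)
    # Recall: fraction of real events that triggered a warning.
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    # False alarm rate: fraction of non-events incorrectly flagged.
    far = fp / (fp + tn)
    return f1, far

# Hypothetical counts for a collision-warning evaluation.
f1, far = f1_and_false_alarm_rate(tp=83, fp=9, fn=7, tn=101)
print(f"F1 = {f1:.3f}, false alarm rate = {far:.1%}")
```

A low false alarm rate matters independently of F1 in warning systems: drivers quickly learn to ignore systems that cry wolf, so both numbers are typically reported together.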