The field of multimodal knowledge graphs and pedestrian attribute recognition is moving toward tighter integration of multiple modalities and contextual information to improve recognition accuracy. Researchers are exploring novel methods for constructing and exploiting knowledge graphs, enabling the discovery of complex relationships between attributes and visual features. There is also growing interest in frameworks that identify and classify pedestrian crossing situations to support the development of autonomous vehicles. Noteworthy papers in this area include a knowledge graph-guided cross-modal hypergraph learning framework for pedestrian attribute recognition, which achieves state-of-the-art performance on multiple benchmark datasets; a hypercomplex-driven robust multi-modal knowledge graph completion method; and the PCICF framework, which demonstrates effectiveness in identifying and classifying complex pedestrian crossing situations on a large real-world dataset.
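The hypergraph learning framework is only named above, not detailed. As background, a minimal sketch of the standard hypergraph convolution step that such frameworks typically build on (this is a generic textbook formulation, not the cited paper's architecture; the toy nodes, hyperedges, and features are hypothetical):

```python
import numpy as np

# Toy hypergraph: 4 nodes (e.g., visual regions or attributes) grouped
# by 2 hyperedges. H is the node-by-hyperedge incidence matrix.
H = np.array([
    [1, 0],
    [1, 1],
    [0, 1],
    [1, 0],
], dtype=float)
X = np.array([[1.0], [2.0], [3.0], [4.0]])  # per-node feature vectors

# One propagation step: X' = Dv^-1 H De^-1 H^T X, i.e. each node
# averages the features pooled over the hyperedges it belongs to.
De_inv = np.diag(1.0 / H.sum(axis=0))  # inverse hyperedge degrees
Dv_inv = np.diag(1.0 / H.sum(axis=1))  # inverse node degrees
X_new = Dv_inv @ H @ De_inv @ H.T @ X

print(X_new.ravel())
```

Nodes sharing a hyperedge end up with smoothed, similar features, which is why hyperedges are a natural fit for modeling higher-order relations among attributes and visual cues.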