The field of knowledge graph embedding and link prediction is moving toward more effective and efficient methods for predicting new links and enhancing knowledge retrieval. Researchers are exploring techniques such as exploiting relationships between properties in knowledge graphs and generating high-quality negative samples to improve model accuracy. There is also growing interest in distilling complex models into simpler ones, such as multi-layer perceptrons (MLPs), to reduce computational cost while maintaining performance. The vulnerability of link prediction models to adversarial attacks is likewise being studied through the development of poisoning attack approaches. Noteworthy papers include "Optimal Embedding Guided Negative Sample Generation for Knowledge Graph Link Prediction", which proposes a novel framework for generating negative samples that significantly improves link prediction performance, and "Heuristic Methods are Good Teachers to Distill MLPs for Graph Link Prediction", which examines how the choice of teacher affects GNN-to-MLP distillation and proposes a gating mechanism for integrating complementary teacher signals.
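
For context on the negative-sampling line of work, the common baseline that embedding-guided generators aim to improve on is uniform triple corruption with filtering: replace the head or tail of a positive triple with a random entity and discard corruptions that happen to be known positives. A minimal sketch (all names here are illustrative, not from the paper):

```python
import random

def corrupt_triples(triples, entities, num_negatives=1, seed=0):
    """Generate negatives by corrupting the head or tail of each positive
    (h, r, t) triple, filtering out corruptions that are known positives.
    This is the standard uniform-sampling baseline, not the embedding-guided
    method proposed in the paper."""
    rng = random.Random(seed)
    known = set(triples)
    negatives = []
    for h, r, t in triples:
        for _ in range(num_negatives):
            while True:
                if rng.random() < 0.5:
                    cand = (rng.choice(entities), r, t)  # corrupt head
                else:
                    cand = (h, r, rng.choice(entities))  # corrupt tail
                if cand not in known:  # filtered setting: skip true triples
                    negatives.append(cand)
                    break
    return negatives

# Example: two positives over four entities, two negatives each.
positives = [("a", "likes", "b"), ("b", "likes", "c")]
negatives = corrupt_triples(positives, ["a", "b", "c", "d"], num_negatives=2)
```

Uniform corruption tends to produce easy, uninformative negatives as training progresses, which is precisely the weakness that optimal-embedding-guided generation targets.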
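
The gating idea in the distillation paper can be illustrated schematically: a gate blends the soft link score from a GNN teacher with the score from a heuristic teacher (e.g., a common-neighbour count normalised to [0, 1]), and the MLP student is trained against the blended target. In the following sketch the gate value is a free parameter; in practice it would be produced by a learned gating network, and none of these function names come from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_soft_target(gnn_score, heuristic_score, gate_logit):
    """Blend the GNN teacher's link probability with the heuristic
    teacher's score via a sigmoid gate (illustrative, not the paper's
    exact formulation)."""
    g = sigmoid(gate_logit)
    return g * gnn_score + (1.0 - g) * heuristic_score

def distill_loss(student_prob, soft_target, eps=1e-9):
    """Binary cross-entropy of the student MLP's predicted link
    probability against the gated soft target."""
    return -(soft_target * math.log(student_prob + eps)
             + (1.0 - soft_target) * math.log(1.0 - student_prob + eps))

# With gate_logit = 0 the gate is 0.5, so the target is the mean of the
# two teachers: 0.5 * 0.9 + 0.5 * 0.5 = 0.7.
target = gated_soft_target(0.9, 0.5, 0.0)
```

The loss is minimised when the student matches the blended target, so the student inherits signal from both teachers at MLP inference cost, with no message passing at test time.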