The field of online hate speech and cyberbullying detection is evolving rapidly, with growing emphasis on building more accurate and generalizable models. Recent research has focused on detecting implicit hate speech, handling cyberbullying in multilingual contexts, and identifying hate-mongers. Multimodal approaches that combine text, user activity, and social network analysis have shown promise for detecting hate-mongers and coded messages. There is also increasing recognition of the need for transparent and explainable models, with many studies incorporating techniques such as attribution analysis and LIME-based local interpretation; a minimal sketch of the LIME pattern appears below.

Noteworthy papers include a novel cyberbullying detection architecture combining DeBERTa with a Gated Broad Learning System, which achieved state-of-the-art performance on multiple benchmark datasets, and a framework for cyberbullying detection in code-mixed Hinglish text built on the MURIL architecture, which outperforms existing multilingual models; simplified sketches of both pipelines follow the LIME example. A survey of hate speech datasets highlights the importance of a reflexive approach to dataset creation, while a study on implicit hate speech detection proposes a method for improving generalizability across diverse datasets.
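To illustrate the kind of local interpretation these studies employ, the sketch below uses the `lime` library to attribute a classifier's prediction to individual tokens. The classifier here (TF-IDF plus logistic regression on a few toy examples) is a placeholder, not the setup of any cited paper; the cited studies apply the same pattern to their own trained models.

```python
# A minimal sketch of LIME-based local interpretation for a text classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data standing in for a real hate speech corpus.
texts = [
    "you are awful and everyone hates you",
    "have a great day friend",
    "nobody wants you here",
    "thanks for the kind words",
]
labels = [1, 0, 1, 0]  # 1 = abusive, 0 = benign

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local linear surrogate model,
# yielding per-token weights toward each class.
explainer = LimeTextExplainer(class_names=["benign", "abusive"])
explanation = explainer.explain_instance(
    "you are awful, nobody wants you here",
    pipeline.predict_proba,  # maps a list of strings to class probabilities
    num_features=6,
)
print(explanation.as_list())  # [(token, weight), ...]
```

The key contract is the `classifier_fn` argument: any model that maps a list of raw strings to a probability matrix can be explained this way, which is why LIME is a common choice for post-hoc interpretation of hate speech detectors.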
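The exact layout of the hybrid DeBERTa and Gated Broad Learning System is not spelled out here, so the sketch below shows only the general pattern under stated assumptions: a pretrained DeBERTa encoder produces pooled sentence embeddings that feed a gated head. The gated head is a generic stand-in; a true Broad Learning System, with its feature and enhancement nodes, is not reproduced.

```python
# Simplified sketch of the hybrid pattern: DeBERTa embeddings feeding a
# gated classification head. The head is a stand-in, not the paper's GBLS.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DebertaGatedClassifier(nn.Module):
    def __init__(self, model_name="microsoft/deberta-v3-base", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.gate = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
        self.transform = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        # Gated transformation: the sigmoid gate controls how much of the
        # transformed representation reaches the classifier.
        h = self.gate(pooled) * torch.tanh(self.transform(pooled))
        return self.classifier(h)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = DebertaGatedClassifier()
batch = tokenizer(["example post"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```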
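MURIL (Multilingual Representations for Indian Languages) is available as a public checkpoint on the Hugging Face hub, so the Hinglish pipeline can be sketched as below. The label set, example text, and two-class setup are illustrative assumptions; the fine-tuning details belong to the cited paper, not this sketch.

```python
# Loading MURIL for Hinglish cyberbullying classification. The checkpoint
# name is the public google/muril-base-cased release; labels and the
# example are illustrative placeholders, not the cited paper's setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/muril-base-cased", num_labels=2  # e.g. bullying vs. not bullying
)

# Code-mixed Hinglish example (romanized Hindi mixed with English).
text = "tu kitna bura insaan hai, just leave this group"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is randomly initialized; probabilities are
# meaningless until the model is fine-tuned on labeled data.
print(logits.softmax(-1))
```

Because MURIL is pretrained on both native-script and transliterated Indian-language text, it is a natural backbone for romanized Hinglish, which standard multilingual models often handle poorly.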