Advances in Cyberbullying Detection and Online Hate Speech Mitigation

The field of online hate speech and cyberbullying detection is evolving rapidly, with a growing emphasis on building more accurate and generalizable models. Recent research has focused on detecting implicit hate speech, handling cyberbullying in multilingual contexts, and identifying hate-mongers. Multimodal approaches that combine text, user activity, and social network analysis have shown promise in detecting hate-mongers and coded messages. There is also growing recognition of the need for transparent and explainable models, with many studies incorporating techniques such as attribution analysis and LIME-based local interpretations.

Noteworthy papers in this area include a hybrid DeBERTa and Gated Broad Learning System architecture for cyberbullying detection, which achieved state-of-the-art performance on multiple benchmark datasets; a framework for cyberbullying detection in Hinglish text built on the MURIL architecture, which outperforms existing multilingual models; a survey of hate speech datasets that highlights the importance of a reflexive approach to dataset creation; and a study on implicit hate speech detection that proposes an approach to enhance generalizability across diverse datasets.
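
The LIME-style local interpretations mentioned above work by perturbing an input text (dropping subsets of tokens), querying the classifier on the perturbed copies, and fitting a locally weighted linear surrogate whose coefficients attribute the prediction to individual tokens. The sketch below illustrates this idea in a minimal form; the `toy_toxicity_model` is a hypothetical lexicon-based stand-in for a real classifier, not a model from any of the papers listed here.

```python
import numpy as np

def toy_toxicity_model(texts):
    # Hypothetical stand-in classifier: returns P(toxic) per text,
    # monotonically increasing in hits from a tiny abusive lexicon.
    lexicon = {"idiot", "loser", "stupid"}
    probs = []
    for t in texts:
        hits = sum(1 for w in t.split() if w in lexicon)
        probs.append(1 - 0.5 ** (hits + 0.2))
    return np.array(probs)

def lime_text_weights(text, predict_fn, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate to predict_fn around `text`.
    Surrogate features are binary indicators for keeping each token;
    the returned coefficients are per-token attributions (LIME-style)."""
    rng = np.random.default_rng(seed)
    tokens = text.split()
    d = len(tokens)
    # Random binary masks over tokens; row 0 keeps the full text.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1
    perturbed = [" ".join(t for t, keep in zip(tokens, m) if keep) for m in masks]
    y = predict_fn(perturbed)
    # Exponential kernel: perturbations closer to the full text weigh more.
    dist = 1 - masks.sum(axis=1) / d
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    X = np.hstack([np.ones((n_samples, 1)), masks]).astype(float)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return dict(zip(tokens, beta[1:]))

weights = lime_text_weights("you are such an idiot honestly", toy_toxicity_model)
top = max(weights, key=weights.get)  # token with the largest attribution
```

In production settings the library implementation (e.g. `lime.lime_text.LimeTextExplainer` against a transformer's `predict_proba`) replaces both the hand-rolled surrogate fit and the toy model above; the perturb-predict-fit loop is the same.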

Sources

A Hybrid DeBERTa and Gated Broad Learning System for Cyberbullying Detection in English Text

Cyberbullying Detection in Hinglish Text Using MURIL and Explainable AI

Web(er) of Hate: A Survey on How Hate Speech Is Typed

Towards Generalizable Generic Harmful Speech Datasets for Implicit Hate Speech Detection

Social Hatred: Efficient Multimodal Detection of Hatemongers

On the efficacy of old features for the detection of new bots

How Effectively Can BERT Models Interpret Context and Detect Bengali Communal Violent Text?

Malicious earworms and useful memes, how the far-right surfs on TikTok audio trends
