The field of online discourse analysis and moderation is evolving rapidly, with a growing focus on developing methods and tools to detect and mitigate extremist language, hate speech, and biased media coverage. Researchers are leveraging large-scale datasets, machine learning models, and natural language processing techniques to analyze and understand the complexities of online discourse. A key area of investigation is the development of effective moderation strategies for decentralized social media platforms, where community-level blocklists and collaborative voting systems are being explored; illustrative sketches of both the classification and the vote-aggregation patterns follow the paper list below. Furthermore, there is an increasing emphasis on creating AI-based platforms for monitoring and fostering democratic discourse, as well as on detecting neo-fascist rhetoric and other forms of hate speech. Noteworthy papers in this area include:
- The taz2024full corpus, which provides a large-scale resource for analyzing gender bias and discrimination in German newspaper articles.
- The FASCIST-O-METER classifier, which presents a first-of-its-kind coding scheme for neo-fascist rhetoric in digital discourse within the US societal context.
- The Canadian Media Ecosystem Observatory, which offers a national-scale infrastructure for monitoring political and media discourse across platforms in near real time.
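To give a sense of how the supervised-classification side of this work is typically set up, here is a minimal, self-contained Python sketch using a classic TF-IDF plus logistic regression baseline. The toy texts and labels are invented for illustration; the papers above rely on large annotated corpora and typically on neural models, so this stands in only for the general pattern, not for any specific system described here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-written training set, purely illustrative; real studies
# in this area train on large annotated corpora.
texts = [
    "I hope you all have a wonderful day",
    "Great discussion, thanks for sharing your view",
    "People like you should be driven out of this country",
    "That group is subhuman and deserves nothing",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = hateful

# Baseline pipeline: character-blind word/bigram TF-IDF features
# fed into a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predict the label of an unseen post.
print(model.predict(["Thanks, that was a thoughtful reply"]))
```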
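The community-level blocklists with collaborative voting mentioned above can be made concrete with one plausible aggregation rule: an account joins the shared blocklist once enough moderators have voted on it and a sufficient share of those votes favor blocking. The quorum and threshold parameters, the vote format, and the account handles in this sketch are all assumptions for illustration, not the mechanism of any particular platform or paper.

```python
from collections import Counter

def build_blocklist(votes, quorum=3, threshold=0.6):
    """Aggregate per-account block/allow votes from community moderators.

    votes: iterable of (account, verdict) pairs, verdict in {"block", "allow"}.
    An account is blocklisted once it has at least `quorum` votes and the
    share of "block" votes is at least `threshold`.
    """
    block_votes = Counter()
    total_votes = Counter()
    for account, verdict in votes:
        total_votes[account] += 1
        if verdict == "block":
            block_votes[account] += 1
    return {
        account
        for account, total in total_votes.items()
        if total >= quorum and block_votes[account] / total >= threshold
    }

# Example: three moderators vote on two accounts.
votes = [
    ("spam@example.social", "block"),
    ("spam@example.social", "block"),
    ("spam@example.social", "allow"),
    ("newcomer@example.social", "block"),  # one vote only: below quorum
]
print(build_blocklist(votes))  # {'spam@example.social'}
```

Requiring a quorum before acting keeps a single hostile or hasty vote from blocklisting an account, while the threshold lets communities tune how much consensus blocking should require.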