Advances in Online Discourse Analysis and Moderation

The field of online discourse analysis and moderation is evolving rapidly, with a growing focus on methods and tools to detect and mitigate extremist language, hate speech, and biased media coverage. Researchers are leveraging large-scale datasets, machine learning models, and natural language processing techniques to analyze the complexities of online discourse. A key area of investigation is the development of effective moderation strategies for decentralized social media platforms, where community-level blocklists and collaborative voting systems are being explored. There is also an increasing emphasis on AI-based platforms for monitoring and fostering democratic discourse, as well as on detecting neo-fascist rhetoric and other forms of hate speech. Noteworthy papers in this area include:

  • The taz2024full corpus, which provides a large-scale resource for analyzing gender bias and discrimination in German newspaper articles.
  • The FASCIST-O-METER classifier, which presents a first-of-its-kind neo-fascist coding scheme for digital discourse in the USA societal context.
  • The Canadian Media Ecosystem Observatory, which offers a national-scale infrastructure for monitoring political and media discourse across platforms in near real time.
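The community-level blocklists and collaborative voting mentioned above can be sketched as a simple threshold merge: each community votes on which domains to block, and a domain is blocked platform-wide only if enough communities agree. This is a minimal illustrative sketch, not the mechanism from any of the cited papers; all community and domain names are hypothetical.

```python
from collections import Counter

def merge_blocklists(community_votes: dict[str, set[str]], threshold: float) -> set[str]:
    """Merge per-community blocklists into a platform-wide one.

    A domain is included only if at least `threshold` fraction of
    communities voted to block it (a simple collaborative-voting rule).
    """
    counts = Counter()
    for blocked in community_votes.values():
        counts.update(blocked)
    n_communities = len(community_votes)
    return {domain for domain, c in counts.items() if c / n_communities >= threshold}

# Hypothetical votes from three communities on a decentralized platform.
votes = {
    "community_a": {"spam.example", "hate.example"},
    "community_b": {"hate.example"},
    "community_c": {"hate.example", "troll.example"},
}

# Only domains blocked by a majority of communities survive the merge.
print(merge_blocklists(votes, threshold=0.5))  # → {'hate.example'}
```

Raising the threshold makes platform-wide blocking more conservative (fewer false positives, more per-community autonomy); lowering it propagates local decisions more aggressively.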

Sources

taz2024full: Analysing German Newspapers for Gender Bias and Discrimination across Decades

Understanding Community-Level Blocklists in Decentralized Social Media

IYKYK: Using language models to decode extremist cryptolects

KI4Demokratie: An AI-Based Platform for Monitoring and Fostering Democratic Discourse

Beyond the Battlefield: Framing Analysis of Media Coverage in Conflict Reporting

FASCIST-O-METER: Classifier for Neo-fascist Discourse Online

Building a Media Ecosystem Observatory from Scratch: Infrastructure, Methodology, and Insights
