Advances in Online Content Moderation and Social Media Governance

Online content moderation and social media governance are evolving rapidly, with growing attention to the challenges of toxic content, misinformation, and partisan skew. Recent work stresses that moderation should be principled, consistent, contextual, proactive, transparent, and accountable, and identifies structural misalignments between corporate incentives and the public interest. Studies have also examined large language models, both for detecting subtle linguistic cues associated with seniority inflation and implicit expertise and for their potential to deliver acceptance and commitment therapy. Further research has investigated how platforms' algorithmic systems shape the topics, political skew, and reliability of the information served to users, underscoring the need for continued study of these systems and their role in democratic processes.

Noteworthy papers include 'Content Moderation Futures', which examines the failures and possibilities of contemporary social media governance, and 'The Thinking Therapist', which investigates how post-training methodology and explicit reasoning affect the ability of large language models to deliver acceptance and commitment therapy. 'The Role of Follow Networks and Twitter's Content Recommender' and 'TikTok Rewards Divisive Political Messaging' offer further insight into how platforms shape users' experiences and how divisive political messaging can be rewarded.
Sources
The Thinking Therapist: Training Large Language Models to Deliver Acceptance and Commitment Therapy using Supervised Fine-Tuning and Odds Ratio Policy Optimization
The Role of Follow Networks and Twitter's Content Recommender on Partisan Skew and Rumor Exposure during the 2022 U.S. Midterm Election