Advancements in Large Language Models for Social Media Analysis and Moderation

The field of natural language processing is moving towards more capable large language models (LLMs) for analyzing and moderating social media content. Recent studies focus on improving the accuracy of LLMs in detecting propaganda, hate speech, and other forms of harmful content, while also exploring their use in personalized conversation systems, sentiment analysis, and opinion mining. Frameworks such as MoMoE support more efficient, AI-assisted content moderation, and benchmarks such as PersonaConvBench enable systematic evaluation of personalized conversation. The introduction of datasets like ConDID-v2 and CHEER has further facilitated training and evaluating LLMs on these tasks. Overall, the field is shifting towards more sophisticated, user-centric LLMs that can adapt to the complexities of social media discourse. Noteworthy papers include ChestyBot, which detects foreign malign influence tweets with high accuracy, and Teaching Language Models to Evolve with Users, which introduces a framework for dynamic profile modeling in personalized alignment.
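
To make the moderation angle concrete, the sketch below illustrates the general idea behind a mixture-of-moderation-experts pipeline: content is routed to several specialized classifiers and their confidence-weighted verdicts are aggregated into one moderation decision. This is a minimal, self-contained illustration under assumed names, not the actual MoMoE implementation; all classes and functions here are hypothetical, and trivial keyword heuristics stand in for fine-tuned LLM experts.

```python
# Illustrative sketch only: names are hypothetical and keyword checks stand in
# for fine-tuned LLM moderation experts; this is not the MoMoE codebase.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ExpertVerdict:
    expert: str        # name of the moderation expert that produced the verdict
    flagged: bool      # whether the expert considers the content harmful
    confidence: float  # self-reported confidence in [0, 1]


def hate_speech_expert(text: str) -> ExpertVerdict:
    """Stand-in for a hate-speech detection LLM (here: a trivial keyword check)."""
    hit = any(term in text.lower() for term in ("slur_a", "slur_b"))
    return ExpertVerdict("hate_speech", hit, 0.9 if hit else 0.6)


def propaganda_expert(text: str) -> ExpertVerdict:
    """Stand-in for a propaganda / influence-operation detection LLM."""
    hit = any(term in text.lower() for term in ("wake up, sheeple", "the only truth"))
    return ExpertVerdict("propaganda", hit, 0.8 if hit else 0.55)


def moderate(text: str,
             experts: List[Callable[[str], ExpertVerdict]],
             threshold: float = 0.7) -> Dict[str, object]:
    """Route content to every expert and aggregate confidence-weighted flags."""
    verdicts = [expert(text) for expert in experts]
    flagged = [v for v in verdicts if v.flagged and v.confidence >= threshold]
    return {
        "action": "remove" if flagged else "allow",
        "reasons": [v.expert for v in flagged],
        "verdicts": verdicts,
    }


if __name__ == "__main__":
    decision = moderate("Wake up, sheeple, the only truth is here.",
                        [hate_speech_expert, propaganda_expert])
    print(decision["action"], decision["reasons"])  # -> remove ['propaganda']
```

In a real deployment, each expert would be a separately fine-tuned model and the aggregation step could weight experts per community or escalate low-confidence cases to human moderators, as the content-moderation literature above suggests.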

Sources

ChestyBot: Detecting and Disrupting Chinese Communist Party Influence Stratagems

Can AI automatically analyze public opinion? A LLM agents-based agentic pipeline for timely public opinion analysis

ProdRev: A DNN framework for empowering customers using generative pre-trained transformers

Are Large Language Models Good at Detecting Propaganda?

A Personalized Conversational Benchmark: Towards Simulating Personalized Conversations

MoMoE: Mixture of Moderation Experts Framework for AI-Assisted Online Governance

MindVote: How LLMs Predict Human Decision-Making in Social Media Polls

The Pin of Shame: Examining Content Creators' Adoption of Pinning Inappropriate Comments as a Moderation Strategy

ConspEmoLLM-v2: A robust and stable model to detect sentiment-transformed conspiracy theories

Reliable Decision Support with LLMs: A Framework for Evaluating Consistency in Binary Text Classification Applications

Can Large Language Models Understand Internet Buzzwords Through User-Generated Content

NeoN: A Tool for Automated Detection, Linguistic and LLM-Driven Analysis of Neologisms in Polish

Teaching Language Models to Evolve with Users: Dynamic Profile Modeling for Personalized Alignment

Can Large Language Models be Effective Online Opinion Miners?

Semiotic Reconstruction of Destination Expectation Constructs: An LLM-Driven Computational Paradigm for Social Media Tourism Analytics

All You Need is "Leet": Evading Hate-speech Detection AI

LLaMAs Have Feelings Too: Unveiling Sentiment and Emotion Representations in LLaMA Models Through Probing

Understanding and Analyzing Inappropriately Targeting Language in Online Discourse: A Comparative Annotation Study