Large Language Models in Social Simulations and Online Discourse

The field of large language models (LLMs) is advancing rapidly, with growing interest in their application to social simulations and online discourse. Recent studies have examined how well LLMs mimic human interactions, generate realistic comments, and influence political debate. Their use in social simulations has raised concerns about the potential to manipulate public opinion and shape political narratives, and researchers are developing more rigorous methods for evaluating the empirical realism of LLM agents and ensuring they are used in a transparent and explainable manner (a toy sketch of one such realism check follows the list below). Noteworthy papers in this area include:

  • A study on the Public Service Algorithm, which introduces a novel framework for scalable and transparent content curation based on public service media values.
  • A paper on Generative Exaggeration in LLM Social Agents, which investigates how LLMs behave when simulating political discourse on social media and finds that they can amplify polarization and introduce structural biases.
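
The "empirical realism" question above is, at its core, distributional: do the comments produced by LLM agents resemble the human comments they are meant to stand in for? Below is a minimal, hypothetical sketch of such a check in Python. Everything in it is an assumption for illustration, not the methodology of the cited papers: generate_agent_comment is a stub standing in for a real model call, and unigram Jensen-Shannon divergence plus average comment length are deliberately crude stand-ins for the richer stylometric and behavioral metrics a real benchmark would use.

```python
# Toy empirical-realism check for LLM social agents (illustrative only).
# generate_agent_comment is a hypothetical stub, not a real model API.
import math
import statistics
from collections import Counter

def generate_agent_comment(persona: str, thread: list[str]) -> str:
    """Stand-in for an LLM call that replies to a thread in persona."""
    return f"As {persona}, I think the last point here is wildly overstated."

def token_distribution(comments: list[str]) -> Counter:
    """Unigram counts: a crude proxy for the style of a comment corpus."""
    counts: Counter = Counter()
    for comment in comments:
        counts.update(comment.lower().split())
    return counts

def jensen_shannon(p: Counter, q: Counter) -> float:
    """Jensen-Shannon divergence (base 2) between two unigram distributions.
    0 means identical corpora; 1 means completely disjoint vocabularies."""
    total_p, total_q = sum(p.values()), sum(q.values())
    divergence = 0.0
    for word in set(p) | set(q):
        pw = p.get(word, 0) / total_p
        qw = q.get(word, 0) / total_q
        mw = 0.5 * (pw + qw)
        if pw > 0:
            divergence += 0.5 * pw * math.log2(pw / mw)
        if qw > 0:
            divergence += 0.5 * qw * math.log2(qw / mw)
    return divergence

human = [
    "honestly this bill will never make it out of committee",
    "the senator flipped on this last year, typical",
]
agents = [generate_agent_comment("a skeptical voter", human) for _ in range(2)]

# Two crude realism signals: length gap and vocabulary-level divergence.
length_gap = statistics.mean(map(len, agents)) - statistics.mean(map(len, human))
jsd = jensen_shannon(token_distribution(human), token_distribution(agents))
print(f"mean length gap: {length_gap:+.1f} chars, unigram JSD: {jsd:.3f}")
```

In practice, a benchmark along these lines would compare against a held-out corpus of real platform data and report several complementary metrics, since a single divergence score can be gamed by superficially matching word frequencies.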

Sources

  • How Large Language Models play humans in online conversations: a simulated study of the 2016 US politics on Reddit
  • Don't Trust Generative Agents to Mimic Communication on Social Networks Unless You Benchmarked their Empirical Realism
  • Public Service Algorithm: towards a transparent, explainable, and scalable content curation for news content based on editorial values
  • Evaluating the Simulation of Human Personality-Driven Susceptibility to Misinformation with LLMs
  • Generative Exaggeration in LLM Social Agents: Consistency, Bias, and Toxicity
  • Recommendation Algorithms on Social Media: Unseen Drivers of Political Opinion
  • Do Role-Playing Agents Practice What They Preach? Belief-Behavior Consistency in LLM-Based Simulations of Human Trust