Developments in AI, Misinformation, and Human Communication

Research at the intersection of artificial intelligence and human communication is evolving rapidly, centered on building more capable language models and on combating misinformation. Researchers are probing the capabilities and limitations of large language models (LLMs) in handling intimate conversations, detecting misinformation, and staying grounded in dialogue. Transparent, standardized guidelines for LLMs are becoming increasingly important, particularly for international governance and user welfare. Meanwhile, the fight against misinformation is being waged on multiple fronts, including crowdsourcing, fact-checking organizations, and social correction. Noteworthy papers in this area include:

  • A study of how LLMs handle sexually oriented requests, which highlights the need for unified ethical frameworks and standards across platforms.
  • Research on combating misinformation in the Arab world, which emphasizes connecting with grassroots fact-checking organizations and promoting social correction.
  • A thesis on leveraging human intelligence to fight misinformation, which introduces a model that jointly predicts and explains truthfulness.
  • A paper on the geometries of truth in LLMs, which finds that representations of truthfulness learned on one task do not carry over to others, underscoring the limits of task-dependent approaches (see the sketch after this list).
  • A study of whether LLMs can stay grounded in conversation when they lack the relevant knowledge, which raises concerns about their role in mitigating misinformation in political discourse.
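
To make the task-dependence point concrete, here is a minimal sketch of why a linear "truth direction" fitted on one task can fail on another. It uses synthetic activations and a scikit-learn logistic-regression probe; the hidden size, data, and orthogonal class directions are illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch: a truthfulness probe fitted on one task need not transfer to another.
# All data here is synthetic; the hidden size and directions are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimensionality

# Suppose each task separates true from false statements along its own
# direction, and the two directions are orthogonal (the paper's headline claim).
dir_a = np.eye(d)[0]
dir_b = np.eye(d)[1]

def make_activations(direction, n=500):
    """Synthetic activations: the label shifts points along `direction`."""
    labels = rng.integers(0, 2, n)
    noise = rng.normal(0.0, 1.0, (n, d))
    acts = noise + np.outer(2 * labels - 1, direction) * 2.0
    return acts, labels

X_a, y_a = make_activations(dir_a)  # "task A" activations
X_b, y_b = make_activations(dir_b)  # "task B" activations

probe = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print(f"in-task accuracy:    {probe.score(X_a, y_a):.2f}")  # high
print(f"cross-task accuracy: {probe.score(X_b, y_b):.2f}")  # near 0.5 (chance)
```

Because the probe's decision boundary is aligned with task A's direction, it is orthogonal to the signal in task B's activations, so cross-task accuracy collapses to chance even though in-task accuracy is high.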

Sources

Can LLMs Talk 'Sex'? Exploring How AI Models Handle Intimate Conversations

Combating Misinformation in the Arab World: Challenges & Opportunities

The Geometries of Truth Are Orthogonal Across Tasks

Can LLMs Ground when they (Don't) Know: A Study on Direct and Loaded Political Questions

In Crowd Veritas: Leveraging Human Intelligence To Fight Misinformation

Self-Anchored Attention Model for Sample-Efficient Classification of Prosocial Text Chat

Comparing human and LLM politeness strategies in free production

How Do People Revise Inconsistent Beliefs? Examining Belief Revision in Humans with User Studies

Dynamic Epistemic Friction in Dialogue
