Algorithmic Bias and Transparency in Online Information Ecosystems

The field of online information ecosystems is moving toward a deeper understanding of how algorithmic bias and transparency shape public opinion and discourse. Recent studies highlight the impact of search engines and large language models (LLMs) on the amplification and suppression of polarizing content, along with the need to audit and regulate these systems to ensure transparency and pluralism. Crowdsourcing platforms are also being re-evaluated in light of evidence of fraud and manipulation, underscoring the need for robust countermeasures to protect data integrity. Meanwhile, the integration of LLMs into scholarly discovery and biomedical research is exposing critical fairness and bias issues, pointing to the need for strategic collaboration and resource allocation to promote equity and democratization.

Noteworthy papers in this area include an audit of LLM editorial bias in news media exposure, which found that LLMs surface significantly fewer unique outlets and allocate attention more unevenly than traditional aggregators, and a study of LLM-generated co-authorship networks, which revealed a consistent bias favoring highly cited researchers and highlighted both the risks and the opportunities of deploying LLMs for scholarly discovery.
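The finding that LLMs concentrate attention on fewer outlets can be made concrete with a standard inequality measure. The sketch below computes a Gini coefficient over per-outlet exposure counts; the outlet counts are invented for illustration and do not come from the audit itself, and the Gini coefficient is just one common choice of concentration metric.

```python
def gini(counts):
    """Gini coefficient of non-negative exposure counts.
    0 = attention spread perfectly evenly; values near 1 = attention
    concentrated on a single outlet."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical outlet-exposure counts for the same query set.
aggregator_counts = [12, 10, 9, 8, 8, 7, 6, 5, 5, 4]  # many outlets, fairly even
llm_counts = [40, 20, 5, 3, 2]                         # fewer outlets, concentrated

print(len(aggregator_counts), round(gini(aggregator_counts), 3))  # 10 0.181
print(len(llm_counts), round(gini(llm_counts), 3))                # 5 0.531
```

A higher Gini value for the LLM-generated exposure distribution would correspond to the "more uneven attention allocation" the audit reports; entropy or a Herfindahl index would serve the same purpose.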

Sources

The Role of Search Engines in the Amplification and Suppression of LGBTIQ+ Polarization

Auditing LLM Editorial Bias in News Media Exposure

Is Crowdsourcing a Puppet Show? Detecting a New Type of Fraud in Online Platforms

Remembering Unequally: Global and Disciplinary Bias in LLM-Generated Co-Authorship Networks

Deciphering Scientific Collaboration in Biomedical LLM Research: Dynamics, Institutional Participation, and Resource Disparities

Just in Plain Sight: Unveiling CSAM Distribution Campaigns on the Clear Web
