The field of artificial intelligence (AI) is evolving rapidly, and one prominent area of focus is its integration into social science research, where it supports tasks such as literature reviews and the drafting of research papers. This use has raised concerns, however, including automation bias, deskilling, and research misconduct. There is also a growing need for transparency and explainability in AI systems, particularly regarding their environmental impact: the scarcity of data on the resource demands of AI models has allowed misinformation and misconceptions to spread. To address these challenges, researchers are pushing for AI policies and ethics guidelines, including educational modules that teach computer science students about AI ethics and policy, and for the involvement of diverse stakeholders in the development of AI systems. Notable papers in this area include:
- 'Social Scientists on the Role of AI in Research', which provides insights into social scientists' perceptions of, and concerns about, the use of AI in their field.
- 'Digital Labor: Challenges, Ethical Insights, and Implications', which highlights the need for better working conditions and recognition for digital workers on crowdsourcing platforms.
- 'The AI Policy Module: Developing Computer Science Student Competency in AI Ethics and Policy', which presents a novel approach to teaching AI ethics and policy to computer science students.