Security and Transparency in AI-Powered Mobile Applications

The field of AI-powered mobile applications is moving toward a greater emphasis on security and transparency. Recent studies have highlighted the vulnerability of mobile LLM agents to adversarial attacks, underscoring the need for more robust security measures. Research has also shown that users may unintentionally introduce risks when interacting with AI-based conversational agents, emphasizing the importance of guidelines for secure usage. In parallel, efforts to increase transparency in mobile personalization are underway, with novel approaches that use sensor spoofing and persona simulation to audit and visualize how apps respond to inferred user behaviors.

Noteworthy papers include:

Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels, a systematic study of the security risks mobile LLM agents face when adversarial prompts reach them through untrusted third-party channels.

Prevalence of Security and Privacy Risk-Inducing Usage of AI-based Conversational Agents, which sheds light on the extent to which users exhibit behaviors that may enable attacks.

Beyond Permissions: Investigating Mobile Personalization with Simulated Personas, which presents a sandbox system for auditing and visualizing mobile app personalization; a minimal sketch of the persona-simulation idea follows below.
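To make the sensor-spoofing and persona-simulation approach concrete, here is a minimal, hypothetical sketch of a persona-driven audit loop. It is not the system from Beyond Permissions: every name in it (Persona, PERSONAS, app_under_test) is an illustrative assumption, and the app itself is replaced by a stub. The idea it shows is that each persona is a bundle of spoofed sensor readings, and divergent app responses across personas are treated as evidence of behavior-based personalization.

```python
from dataclasses import dataclass

# Hypothetical persona: a bundle of spoofed sensor readings meant to make
# an app infer a particular lifestyle. (Illustrative, not from the paper.)
@dataclass(frozen=True)
class Persona:
    name: str
    gps: tuple[float, float]   # spoofed location (lat, lon)
    steps_per_day: int         # spoofed activity level
    night_screen_hours: float  # spoofed late-night usage

PERSONAS = [
    Persona("commuter", (40.7128, -74.0060), 9000, 0.5),
    Persona("night-owl gamer", (34.0522, -118.2437), 2000, 4.0),
]

def app_under_test(persona: Persona) -> str:
    """Stub standing in for the real app. A real sandbox would feed the
    spoofed readings into the device's sensor APIs and capture the app's
    UI or network responses instead of computing an answer locally."""
    if persona.night_screen_hours > 2.0:
        return "ads: gaming, energy drinks"
    if persona.steps_per_day > 8000:
        return "ads: fitness gear, transit passes"
    return "ads: generic"

def audit() -> None:
    # Run every persona through the app and diff the responses; divergent
    # output across personas indicates behavior-based personalization.
    responses = {p.name: app_under_test(p) for p in PERSONAS}
    for name, response in responses.items():
        print(f"{name:>16}: {response}")
    print("personalization detected:", len(set(responses.values())) > 1)

if __name__ == "__main__":
    audit()
```

In a real audit, the stub would be replaced by instrumentation that injects the spoofed readings into the device and records what the app actually serves, so the diff operates on observed behavior rather than simulated logic.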

Sources

Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels

Prevalence of Security and Privacy Risk-Inducing Usage of AI-based Conversational Agents

Beyond Permissions: Investigating Mobile Personalization with Simulated Personas
