The field of artificial intelligence is evolving rapidly, with significant developments in algorithmic information theory and AI governance. Researchers are exploring the fundamental limits of AI explainability, with a focus on quantifying approximation error and explanation complexity via Kolmogorov complexity. In parallel, there is growing emphasis on understanding the complex networks that make up AI supply chains and their implications for AI development and regulation: studies have shown that information passed along these chains can be imperfect, leading to misunderstandings with real-world consequences, and that upstream design choices can have downstream effects. Meanwhile, the rise of generative AI is transforming the financial landscape, offering opportunities for innovation and automation while also introducing significant cybersecurity and ethical risks.

Noteworthy papers in this area include:

- The Limits of AI Explainability, which establishes a theoretical foundation for understanding the fundamental limits of AI explainability through algorithmic information theory.
- Understanding Large Language Model Supply Chain, which conducts an empirical study of the LLM supply chain, analyzing its structural characteristics and security vulnerabilities.
- AI Supply Chains, which takes a first step toward a formal study of AI supply chains and their implications, providing two illustrative case studies.
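To make the Kolmogorov-complexity framing concrete: K(s) itself is uncomputable, so practical work typically bounds it from above by the output length of a real compressor. The sketch below (not taken from any of the papers above; the function name and setup are illustrative) uses zlib-compressed length as such a proxy, showing how one could assign a rough "description length" to an explanation string.

```python
import random
import zlib


def description_length(s: str) -> int:
    """Computable upper-bound proxy for the Kolmogorov complexity K(s):
    the byte length of a zlib-compressed encoding of s. Any compressor
    only gives an upper bound; K(s) itself cannot be computed exactly."""
    return len(zlib.compress(s.encode("utf-8"), level=9))


# A highly regular string has a short description (compresses well)...
regular = "ab" * 500

# ...while random-looking text of the same length compresses poorly,
# so its complexity proxy is much larger.
random.seed(0)
irregular = "".join(random.choice("abcdefghij") for _ in range(1000))

assert description_length(regular) < description_length(irregular)
```

Under this proxy, a "simple" explanation is one whose compressed description is short, and the cost of higher fidelity can be read off as growth in description length; the exact formalization in The Limits of AI Explainability may differ.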