The field of artificial intelligence is moving toward more explainable and transparent models. Researchers are exploring new cognitive architectures and frameworks that can provide insight into the decision-making processes of AI systems. One key direction is the development of neuro-theoretical frameworks that explain how intelligence emerges in such systems. Another is understanding large reasoning models and their ability to mimic human-like reasoning processes.
Notable papers in this area include:

- A paper proposing a neuro-theoretical framework for the emergence of intelligence in systems, which provides theoretical insights into cognitive processes and a computationally efficient approach to creating explainable AI.
- A paper introducing a comprehensive taxonomy that characterizes atomic reasoning steps in large reasoning models, which can help improve the training and post-training of these models.
- A paper proposing a research program to investigate the Machine Consciousness Hypothesis, which suggests that consciousness is an emergent property of collective intelligence systems.
- A paper introducing Weight-Calculatism, a novel cognitive architecture that demonstrates potential as a viable pathway toward Artificial General Intelligence (AGI) with radical explainability and intrinsic generality.