The field of artificial intelligence is witnessing a significant shift toward integrating symbolic and subsymbolic approaches, with the aim of overcoming the limitations of each method in isolation. Recent work has focused on enhancing symbolic machine learning with subsymbolic representations, improving performance across a range of tasks. Neuro-symbolic models in particular have shown promise in integrating multiple sources of information, enabling more flexible and interpretable decision-making.

Researchers have also explored discrete representation learning in task-driven settings, highlighting the importance of understanding the properties and benefits of discrete representations in relation to downstream tasks. Notably, the introduction of Contextual Analog Logic with Multimodality (CALM) has demonstrated the potential to reason with logical structure while aligning with preferences in multi-modal environments. Furthermore, co-creative learning via Metropolis-Hastings interaction between humans and AI has emerged as a novel paradigm, enabling the mutual integration of partial perceptual information and knowledge to construct shared external representations. Some notable papers include:
- Discrete JEPA, which proposes a novel approach to discrete token representation learning without reconstruction, achieving state-of-the-art results on visual symbolic prediction tasks.
- CALM, which introduces a framework that unites symbolic reasoning with neural generation, enabling context-sensitive decisions grounded in real-world multi-modal data.
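The co-creative learning paradigm above builds on the Metropolis-Hastings algorithm, which accepts or rejects proposed states according to a probability ratio. The sketch below shows only the generic accept/reject rule, not the human-AI interaction protocol from that work; the Gaussian proposal and the example target density are illustrative assumptions.

```python
import math
import random

def metropolis_hastings(log_target, init, n_steps, step_size=0.5, seed=0):
    """Generic Metropolis-Hastings with a symmetric Gaussian proposal.

    log_target: unnormalized log-density of the distribution to sample.
    Returns the chain of states (length n_steps + 1, including init).
    """
    rng = random.Random(seed)
    x = init
    chain = [x]
    for _ in range(n_steps):
        # Symmetric proposal: q(x'|x) = q(x|x'), so the acceptance
        # ratio reduces to target(x') / target(x).
        proposal = x + rng.gauss(0.0, step_size)
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = proposal  # accept the proposed state
        chain.append(x)   # on rejection, the current state repeats
    return chain

# Example: sample from a standard normal, whose log-density is
# -x^2 / 2 up to an additive constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, init=0.0, n_steps=5000)
mean = sum(chain) / len(chain)
```

In the co-creative setting described in the source, the accept/reject step mediates between two agents' partial beliefs rather than sampling a fixed density, but the acceptance mechanism is the same.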