Advancements in Artificial Intelligence and Deep Learning

The field of artificial intelligence and deep learning is evolving rapidly, with a focus on improving the interpretability, efficiency, and robustness of models. Recent research applies Infinite Time Turing Machines to the analysis of deep learning systems, revealing fundamental limits on scalability, efficiency, and interpretability. Other studies introduce novel architectures, such as the Universal State Machine and DeepTraverse, that enable more efficient and interpretable computation. There is also growing interest in more robust and secure models, including work on attacking the attention mechanisms of foundation models and on adaptive pooling methods that attenuate noise in transformer outputs. Noteworthy papers include 'Infinite Time Turing Machines and their Applications', which proposes a novel computational paradigm, and 'Attacking Attention of Foundation Models Disrupts Downstream Tasks', which highlights the vulnerability of foundation models to adversarial attacks.
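To make the adaptive-pooling idea concrete, here is a minimal sketch of weighted pooling over transformer token outputs. This is an illustrative assumption, not the method from 'Robust Noise Attenuation via Adaptive Pooling of Transformer Outputs': the gate score (token norm) and temperature are hypothetical choices, standing in for whatever learned gating the paper uses. The point is that down-weighting low-signal tokens, rather than mean-pooling uniformly, reduces the influence of noisy positions on the pooled representation.

```python
import numpy as np

def adaptive_pool(token_outputs, temperature=1.0):
    """Pool per-token transformer outputs with softmax gate weights.

    token_outputs: (seq_len, d_model) array of token embeddings.
    Here the gate score is the embedding norm, a crude stand-in for a
    learned noise detector: near-zero (noisy) tokens get small weights,
    unlike plain mean pooling, which weights every token equally.
    """
    scores = np.linalg.norm(token_outputs, axis=1) / temperature
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ token_outputs  # (d_model,) weighted sum

# Toy usage: one token is scaled toward zero to mimic a noise-only position.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 16))
tokens[3] *= 0.01  # simulated noisy token
pooled = adaptive_pool(tokens)
```

With this gating, the simulated noisy token at index 3 contributes far less to `pooled` than it would under uniform mean pooling; a trained model would replace the norm heuristic with a learned scoring function.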

Sources

Infinite Time Turing Machines and their Applications

Attacking Attention of Foundation Models Disrupts Downstream Tasks

A Neural Network Model of Spatial and Feature-Based Attention

Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks

Transformative or Conservative? Conservation laws for ResNets and Transformers

MARMOT: Masked Autoencoder for Modeling Transient Imaging

Convergence of Spectral Principal Paths: How Deep Networks Distill Linear Representations from Noisy Inputs

Spiking Neural Models for Decision-Making Tasks with Learning

Robust Noise Attenuation via Adaptive Pooling of Transformer Outputs

DeepTraverse: A Depth-First Search Inspired Network for Algorithmic Visual Understanding

Detecção da Psoríase Utilizando Visão Computacional: Uma Abordagem Comparativa Entre CNNs e Vision Transformers (Psoriasis Detection Using Computer Vision: A Comparative Approach Between CNNs and Vision Transformers)

Interpreting learned search: finding a transition model and value function in an RNN that plays Sokoban

Revisiting Transformers with Insights from Image Filtering

PiPViT: Patch-based Visual Interpretable Prototypes for Retinal Image Analysis

Saturation Self-Organizing Map

Preserving Task-Relevant Information Under Linear Concept Removal
