Advances in Adaptive Video Streaming and Interpretable AI

The field of adaptive video streaming and interpretable AI is moving toward personalized optimization and improved comprehensibility. Researchers are exploring methods that align user-level Quality of Experience (QoE) with algorithmic optimization objectives, for example by using large language models to evaluate how comprehensible generated decision trees are to developers. There is also growing interest in sparse autoencoders and their applications in feature extraction and recommendation systems.

Noteworthy papers in this area include: Towards User-level QoE, which describes a large-scale deployed system for personalized adaptive video streaming driven by user-level experience, reporting a 0.15% increase in total viewing time and a 1.3% reduction in stall time; and Beyond Interpretability, which introduces a framework for generating bitrate adaptation algorithms that treats comprehensibility as an explicit objective, substantially improving comprehensibility while maintaining competitive performance.
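To make the comprehensibility theme concrete, the sketch below shows the kind of shallow, human-readable bitrate-adaptation rule such frameworks aim to produce. It is a minimal illustration, not the algorithm from either paper; the bitrate ladder, thresholds, and safety margin are all assumed for the example.

```python
# Hypothetical buffer-and-throughput ABR rule, written as two readable
# decisions. All constants here are illustrative assumptions.
BITRATES_KBPS = [300, 750, 1500, 3000]  # assumed encoding ladder

def choose_bitrate(buffer_s: float, throughput_kbps: float) -> int:
    """Pick the next chunk's bitrate with two inspectable rules:
    1) if the playback buffer is low, protect against stalls by
       taking the lowest rate;
    2) otherwise take the highest rate the measured throughput can
       sustain, with a 25% safety margin."""
    if buffer_s < 5.0:                      # low buffer: avoid a stall
        return BITRATES_KBPS[0]
    sustainable = 0.75 * throughput_kbps    # safety margin on estimate
    feasible = [b for b in BITRATES_KBPS if b <= sustainable]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

print(choose_bitrate(buffer_s=2.0, throughput_kbps=4000))   # -> 300
print(choose_bitrate(buffer_s=20.0, throughput_kbps=4000))  # -> 3000
```

A rule this small is trivially auditable by a developer, which is exactly the property the comprehensibility-aware generation work tries to preserve while staying competitive with opaque learned policies.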

Sources

Towards User-level QoE: Large-scale Practice in Personalized Optimization of Adaptive Video Streaming

Beyond Interpretability: Exploring the Comprehensibility of Adaptive Video Streaming through Large Language Models

Sparse but Wrong: Incorrect L0 Leads to Incorrect Features in Sparse Autoencoders

Attention Layers Add Into Low-Dimensional Residual Subspaces

Opening the Black Box: Interpretable Remedies for Popularity Bias in Recommender Systems

AdaptiveK Sparse Autoencoders: Dynamic Sparsity Allocation for Interpretable LLM Representations

S2Sent: Nested Selectivity Aware Sentence Representation Learning

Sparse Autoencoders for Low-$N$ Protein Function Prediction and Design

Improving Recommendation Fairness via Graph Structure and Representation Augmentation
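Several of the sources above concern sparse autoencoders and the role of the L0 sparsity level. As a minimal sketch of the mechanism being studied, the Top-K style encoder below fixes the code's L0 norm by keeping only the k largest activations per sample; the dimensions and random weights are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_encode(x: np.ndarray, W_enc: np.ndarray, b_enc: np.ndarray,
                k: int) -> np.ndarray:
    """ReLU pre-activations, then keep only the k largest entries per
    sample, zeroing the rest, so each code has L0 at most k."""
    pre = np.maximum(x @ W_enc + b_enc, 0.0)
    drop = np.argsort(pre, axis=-1)[:, :-k]   # indices of all but top-k
    z = pre.copy()
    np.put_along_axis(z, drop, 0.0, axis=-1)
    return z

# toy sizes (assumed): 8-dim inputs, a 32-feature dictionary, k = 4
d_in, d_hid, k = 8, 32, 4
W_enc = rng.normal(size=(d_in, d_hid)) / np.sqrt(d_in)
b_enc = np.zeros(d_hid)
W_dec = rng.normal(size=(d_hid, d_in)) / np.sqrt(d_hid)

x = rng.normal(size=(5, d_in))
z = topk_encode(x, W_enc, b_enc, k)
x_hat = z @ W_dec                 # reconstruction from the sparse code
print((z != 0).sum(axis=-1))      # per-sample L0, each at most k
```

Choosing k here is the L0 hyperparameter the "Sparse but Wrong" title refers to: the code structurally caps how many features can fire, so setting it incorrectly changes which features the autoencoder can learn at all.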
