Efficient Semantic Communication in Edge Intelligence

The field of semantic communication is advancing rapidly, with a focus on efficient, adaptive methods for transmitting semantic information from resource-constrained edge devices. Recent work centers on optimizing transformer models for semantic communication, using techniques such as token merging and Bayesian optimization to balance accuracy against computational cost. Notably, new approaches enable flexible runtime adaptation to dynamic application requirements and channel conditions, offering a scalable path for deploying transformer-based semantic communication in future edge intelligence systems.

Two noteworthy papers illustrate this direction. "Adaptive Pareto-Optimal Token Merging for Edge Transformer Models in Semantic Communication" presents a training-free framework for adaptive token merging that substantially reduces floating-point operations while maintaining competitive accuracy. "Communication Efficient Split Learning of ViTs with Attention-based Double Compression" proposes a communication-efficient split learning framework that reduces the overhead of transmitting intermediate Vision Transformer activations during training.
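To make the token-merging idea concrete, here is a minimal, hedged sketch of bipartite token merging in the style popularized for vision transformers: tokens are split into two sets, each token in one set is matched to its most cosine-similar partner in the other, and the `r` closest pairs are averaged. This is an illustrative simplification, not the adaptive Pareto-optimal method from the paper above; the function name and shapes are assumptions for the example.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar token pairs (simplified bipartite matching).

    tokens: (N, d) array of token embeddings at some transformer layer.
    Returns an array of N - r tokens, reducing downstream FLOPs.
    """
    n = tokens.shape[0]
    a_idx = np.arange(0, n, 2)   # set A: even-indexed tokens
    b_idx = np.arange(1, n, 2)   # set B: odd-indexed tokens
    a, b = tokens[a_idx], tokens[b_idx]

    # Cosine similarity between every token in A and every token in B.
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a_n @ b_n.T

    best_b = sim.argmax(axis=1)          # best partner in B for each A token
    best_sim = sim.max(axis=1)
    to_merge = np.argsort(-best_sim)[:r]  # the r most similar pairs

    merged_b = b.copy()
    counts = np.ones(len(b))
    keep_a = np.ones(len(a), dtype=bool)
    for i in to_merge:
        j = best_b[i]
        # Fold a[i] into b[j] as a running mean, then drop a[i].
        merged_b[j] = (merged_b[j] * counts[j] + a[i]) / (counts[j] + 1)
        counts[j] += 1
        keep_a[i] = False

    return np.concatenate([a[keep_a], merged_b], axis=0)
```

Because merging is training-free, `r` can be tuned at runtime (e.g., by the Bayesian optimization mentioned above) to trade accuracy for compute as channel conditions change.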
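The split-learning compression idea can likewise be sketched: at the split point, the client keeps only the activations of the tokens the [CLS] token attends to most strongly, transmitting those plus their indices so the server can restore positional information. This is a hedged illustration of attention-based token selection under assumed shapes, not the paper's actual double-compression scheme.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_activations(acts: np.ndarray, cls_query: np.ndarray,
                         keys: np.ndarray, k: int):
    """Keep the k tokens with the highest CLS-attention score.

    acts: (N, d) intermediate token activations at the split point.
    cls_query: (d,) query vector of the [CLS] token at that layer.
    keys: (N, d) key vectors for the same layer.
    Returns (kept activations, sorted indices of the kept tokens).
    """
    d = keys.shape[1]
    # Scaled dot-product attention of [CLS] over all tokens.
    scores = softmax(keys @ cls_query / np.sqrt(d))
    idx = np.sort(np.argsort(-scores)[:k])  # top-k tokens, original order
    return acts[idx], idx
```

Transmitting `k` of `N` tokens cuts the uplink payload roughly by a factor of `N / k` per forward pass, which is where the communication savings during training come from.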

Sources

Adaptive Pareto-Optimal Token Merging for Edge Transformer Models in Semantic Communication

Mixture of Semantics Transmission for Generative AI-Enabled Semantic Communication Systems

Adaptive Token Merging for Efficient Transformer Semantic Communication at the Edge

Semantic Rate-Distortion Theory with Applications

Towards Native AI in 6G Standardization: The Roadmap of Semantic Communication

Integrated Sensing and Communication for Vehicular Networks: A Rate-Distortion Fundamental Limits of State Estimator

Communication Efficient Split Learning of ViTs with Attention-based Double Compression
