The field of model compression is developing techniques that reduce the size and computational requirements of large language models and other deep learning models, enabling their deployment on resource-constrained edge devices. Researchers are exploring quantization, knowledge distillation, and pruning to achieve high compression ratios while maintaining acceptable accuracy. Advanced quantization techniques in particular, such as convolutional code quantization and post-training quantization, have shown promising results in reducing both model size and inference cost. Researchers are also investigating the integration of model compression with specialized edge hardware, such as analog in-memory computing chips, to further improve computational efficiency. Noteworthy papers include EdgeCodec, which presents a lightweight neural compressor for barometric data that achieves high compression rates while keeping reconstruction error low, and CCQ, which proposes a convolutional code quantization approach that compresses large language models to extremely low bit widths with minimal accuracy loss.
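To make the quantization idea concrete, the sketch below shows plain per-tensor symmetric int8 post-training quantization in NumPy. It is a minimal illustration of the general technique, not the convolutional code quantization of CCQ or any pipeline from the papers above; the absmax-based scale calibration and the function names are assumptions chosen for clarity.

```python
# Minimal sketch of symmetric per-tensor int8 post-training quantization.
# Illustrative only: real PTQ pipelines typically calibrate per-channel
# on activation statistics and clip outliers; extreme low-bit schemes
# such as CCQ use learned codebooks rather than a single linear scale.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 with one per-tensor scale (assumed scheme)."""
    scale = np.abs(weights).max() / 127.0  # 127 = max magnitude of int8
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference or error analysis."""
    return q.astype(np.float32) * scale

# Usage: quantize a random weight matrix and measure reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(512, 512)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(f"size: {w.nbytes} B -> {q.nbytes} B (4x smaller)")
print(f"max abs error: {np.abs(w - w_hat).max():.6f}")
```

Even this simple scheme cuts storage 4x relative to float32; reaching the extremely low bit widths targeted by methods like CCQ requires more elaborate codebook-based encodings to keep accuracy loss small.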