The field of large language models (LLMs) is moving toward efficient inference on edge devices, driven by the need for low-latency, real-time processing and improved privacy. Recent developments focus on optimizing LLMs for deployment on resource-constrained hardware such as wearables and embedded systems. Researchers are exploring techniques including quantization, caching, and activation compression to reduce the computational and memory demands of inference. Noteworthy papers in this area include TeLLMe, which presents a table-lookup-based ternary LLM accelerator for low-power edge FPGAs, and Kelle, which proposes a software-hardware co-design solution for deploying LLMs on eDRAM-based edge systems. Other notable works include AMS-Quant, which advances floating-point quantization exploration, and FourierCompress, which proposes an activation compression framework. Together, these advances could substantially improve the efficiency of LLM inference on edge devices while preserving accuracy. The two sketches below illustrate the general flavor of the quantization and compression techniques involved.
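To make the quantization theme concrete, here is a minimal sketch of generic ternary weight quantization, the numerical format that ternary accelerators such as TeLLMe target: weights are mapped to {-1, 0, +1} with a single per-tensor scale, so matrix multiplications reduce to additions and subtractions (or, in hardware, table lookups). The absmean scale and the `threshold_ratio` knob are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ternarize(w: np.ndarray, threshold_ratio: float = 0.7):
    """Quantize a weight tensor to {-1, 0, +1} with one per-tensor scale.

    Generic absmean-style ternarization: weights whose magnitude falls
    below a fraction of mean(|w|) are zeroed, the rest keep their sign.
    threshold_ratio = 0.7 is an illustrative choice, not from TeLLMe.
    """
    scale = np.abs(w).mean()
    q = np.zeros(w.shape, dtype=np.int8)
    mask = np.abs(w) > threshold_ratio * scale
    q[mask] = np.sign(w[mask]).astype(np.int8)
    return q, float(scale)

def ternary_linear(x: np.ndarray, q: np.ndarray, scale: float) -> np.ndarray:
    """Linear layer with ternary weights: adds/subtracts, then one scale."""
    return (x @ q.astype(x.dtype)) * scale

# Example: quantize a small weight matrix and compare against full precision.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
x = rng.normal(size=(1, 64)).astype(np.float32)
q, s = ternarize(w)
err = np.linalg.norm(ternary_linear(x, q, s) - x @ w) / np.linalg.norm(x @ w)
print(f"ternary relative error: {err:.3f}")
```

Because every weight is -1, 0, or +1, the inner products need no multipliers, which is what makes lookup-table evaluation on a low-power FPGA attractive.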
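Similarly, the general idea behind frequency-domain activation compression, the family of techniques that FourierCompress belongs to, can be sketched in a few lines: keep only the lowest-frequency coefficients of each activation row's real FFT and reconstruct an approximation on the other side. The use of `rfft` and the `keep_ratio` parameter are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def compress(act: np.ndarray, keep_ratio: float = 0.25):
    """Keep only the lowest-frequency rFFT coefficients of each row."""
    coeffs = np.fft.rfft(act, axis=-1)
    k = max(1, int(coeffs.shape[-1] * keep_ratio))  # coefficients to keep
    return coeffs[..., :k], act.shape[-1]

def decompress(coeffs: np.ndarray, n: int) -> np.ndarray:
    """Zero-pad the truncated spectrum back to full size and invert."""
    full = np.zeros(coeffs.shape[:-1] + (n // 2 + 1,), dtype=coeffs.dtype)
    full[..., : coeffs.shape[-1]] = coeffs
    return np.fft.irfft(full, n=n, axis=-1)

# Synthetic smooth 'activations': low-frequency rows compress well.
t = np.arange(256) / 256
act = np.stack([np.sin(2 * np.pi * f * t) for f in range(1, 9)])
coeffs, n = compress(act)  # roughly 4x fewer (complex) coefficients
rec = decompress(coeffs, n)
err = np.linalg.norm(rec - act) / np.linalg.norm(act)
print(f"reconstruction relative error: {err:.4f}")
```

The more an activation's energy concentrates in low frequencies, the higher the compression rate achievable for a given error budget; characterizing that trade-off on real LLM activations is precisely the kind of question such frameworks address.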