The field of artificial intelligence is seeing rapid progress in efficient computing, particularly for large language models and low-precision computation. Researchers are developing methods to optimize computational resources, reduce memory usage, and improve inference performance. One notable direction is collaborative edge computing: Jupiter introduces a flexible pipelined architecture that enables fast, resource-efficient inference of generative large language models on edge devices. Another focus is memory-efficient algorithms and systems: ActiveFlow achieves adaptive DRAM usage for modern large language models, while MOM reduces peak memory usage for long-context models by partitioning critical layers into smaller mini-sequences and integrating seamlessly with KV cache offloading. Progress is also being made on pseudorandom generators, distributed retrieval-augmented generation, and virtual machines for low-precision GPGPU computation. Together, these developments stand to improve the efficiency and performance of AI applications, enabling deployment across a wider range of devices and platforms.
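To make the mini-sequence idea concrete, the following is a minimal PyTorch sketch of the general technique, not MOM's actual implementation: the sequence axis is split into chunks before a wide MLP layer, so the expanded intermediate activation only ever materializes for one chunk at a time. The `ChunkedMLP` class, the layer sizes, and the chunk count are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChunkedMLP(nn.Module):
    """Illustrative mini-sequence MLP: processes the sequence in chunks so the
    4x-wide intermediate activation is only held for one chunk at a time."""

    def __init__(self, hidden: int, num_chunks: int = 4):
        super().__init__()
        self.up = nn.Linear(hidden, 4 * hidden)    # expansion: the dominant activation cost
        self.down = nn.Linear(4 * hidden, hidden)
        self.num_chunks = num_chunks

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden); split along the sequence dimension
        outputs = [
            self.down(F.gelu(self.up(chunk)))
            for chunk in x.chunk(self.num_chunks, dim=1)
        ]
        return torch.cat(outputs, dim=1)

x = torch.randn(1, 8192, 1024)
y = ChunkedMLP(1024)(x)
print(y.shape)  # torch.Size([1, 8192, 1024])
```

With `num_chunks = 4`, the peak size of the 4x-wide intermediate activation drops roughly fourfold, traded against a few extra kernel launches per layer.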
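The KV cache offloading that MOM integrates with can be sketched in the same spirit. Below is a hypothetical per-layer offloading cache, again an assumption-laden illustration rather than the paper's design: key/value tensors for inactive layers are parked in host memory and copied back to the accelerator only while their layer executes. A production version would use pinned host memory to overlap these transfers with compute.

```python
import torch

class OffloadedKVCache:
    """Hypothetical per-layer KV cache that parks tensors on the CPU and
    restores them to the accelerator on demand."""

    def __init__(self, device: torch.device):
        self.device = device
        self.store: dict[int, tuple[torch.Tensor, torch.Tensor]] = {}

    def put(self, layer: int, k: torch.Tensor, v: torch.Tensor) -> None:
        # Move the layer's cache off-device as soon as the layer finishes.
        self.store[layer] = (k.to("cpu"), v.to("cpu"))

    def get(self, layer: int) -> tuple[torch.Tensor, torch.Tensor]:
        # Fetch the cache back on-device just before the layer runs.
        k, v = self.store[layer]
        return k.to(self.device), v.to(self.device)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cache = OffloadedKVCache(device)
cache.put(0, torch.randn(1, 8, 4096, 64), torch.randn(1, 8, 4096, 64))
k0, v0 = cache.get(0)
print(k0.device, k0.shape)
```

Combining the two ideas means device memory holds only the active chunk's activations plus the active layer's KV tensors, which is what drives the peak-memory reduction described above.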