Efficient Edge AI with Federated Learning and Large Language Models

The field of edge AI is rapidly advancing, with a focus on efficient and private on-device processing of large language models. Researchers are reducing latency and communication overhead in bandwidth-constrained settings by combining federated learning with hybrid language models. Notable directions include collaborative learning of uncertainty thresholds, hierarchical model aggregation, and privacy-aware fine-tuning. These advances could transform edge deployment of large language models, enabling scalable and efficient applications. Among the noteworthy papers, Federated Learning-Enabled Hybrid Language Models for Communication-Efficient Token Transmission reduces token transmissions to the remote LLM by over 95 percent with negligible accuracy loss, and PAE MobiLLM enables privacy-aware, efficient LLM fine-tuning on mobile devices via additive side-tuning. A minimal sketch of the token-routing idea behind hybrid language models appears below.
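
To make the hybrid-model idea concrete, here is a minimal sketch (not the paper's implementation) of uncertainty-gated token routing: an on-device small language model (SLM) proposes each token, and only tokens whose uncertainty exceeds a threshold are escalated to the remote LLM. The `route_token` helper, the 1 − top-probability uncertainty measure, and the fixed threshold are illustrative assumptions; in the paper, thresholds are learned collaboratively via federated learning rather than hand-set.

```python
import torch

def route_token(slm_logits: torch.Tensor, threshold: float):
    """Accept the on-device SLM's token if it is confident enough,
    otherwise flag it for escalation to the remote LLM.

    Hypothetical helper: uncertainty is taken as 1 - top-1 probability;
    the actual papers may use a different measure.
    """
    probs = torch.softmax(slm_logits, dim=-1)
    top_prob, top_id = probs.max(dim=-1)
    uncertainty = 1.0 - top_prob.item()
    return top_id.item(), uncertainty > threshold

# Toy decoding loop: only high-uncertainty tokens would cost an
# uplink round trip to the server-side LLM.
torch.manual_seed(0)
threshold = 0.4  # assumed per-device value; learned via FL in the paper
escalated = 0
for _ in range(100):
    sharpness = 1.0 + 9.0 * torch.rand(1).item()  # vary SLM confidence
    logits = sharpness * torch.randn(500)         # fake vocabulary logits
    _, needs_llm = route_token(logits, threshold)
    escalated += int(needs_llm)
print(f"escalated {escalated}/100 tokens to the remote LLM")
```

Communication savings then follow directly from the escalation rate: if most tokens fall below the threshold, only a small fraction ever leaves the device.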

Sources

Federated Learning-Enabled Hybrid Language Models for Communication-Efficient Token Transmission

Edge Computing and its Application in Robotics: A Survey

Toward Edge General Intelligence with Multiple-Large Language Model (Multi-LLM): Architecture, Trust, and Orchestration

PAE MobiLLM: Privacy-Aware and Efficient LLM Fine-Tuning on the Mobile Device via Additive Side-Tuning

EdgeLoRA: An Efficient Multi-Tenant LLM Serving System on Edge Devices

Graph Representation-based Model Poisoning on Federated LLMs in CyberEdge Networks
