Advances in Energy-Efficient AI and Federated Learning

The field of artificial intelligence is moving toward more energy-efficient and privacy-preserving solutions. Recent work on federated learning and large language models shows promise in reducing energy consumption while improving model performance, and researchers are pursuing efficiency through targeted optimizations to transformer attention and MLP layers as well as fine-grained empirical analysis of inference energy across the core components of the transformer architecture. Noteworthy papers include the Litespark Technical Report, which introduces a pre-training framework that achieves substantial performance gains while reducing energy consumption, and Dissecting Transformers: A CLEAR Perspective towards Green AI, which presents exactly such a component-level analysis of inference energy. Other notable works include FTTE: Federated Learning on Resource-Constrained Devices, Edge-FIT: Federated Instruction Tuning of Quantized LLMs for Privacy-Preserving Smart Home Environments, and CAFL-L: Constraint-Aware Federated Learning with Lagrangian Dual Optimization for On-Device Language Models, all of which advance energy-efficient, privacy-preserving AI.
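As a concrete illustration of component-level analysis, the sketch below attaches forward hooks to the attention and MLP sub-blocks of a small transformer and reports how much of the hooked compute each consumes. The model choice (Hugging Face GPT-2), the hook-based instrumentation, and the use of wall-clock time as a stand-in for energy readings are assumptions made for illustration, not the CLEAR paper's methodology.

```python
# Minimal sketch: per-component inference profiling of a transformer.
# Assumes PyTorch + Hugging Face transformers; timing is used as a
# rough proxy for energy (real energy measurement would need hardware
# counters or power meters, which the cited paper may use differently).
import time
from collections import defaultdict

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

timings = defaultdict(float)

def make_hooks(name):
    # Record a start time before the sub-module runs and accumulate the
    # elapsed time after it finishes, keyed by component name.
    def pre_hook(module, inputs):
        module._t0 = time.perf_counter()
    def post_hook(module, inputs, output):
        timings[name] += time.perf_counter() - module._t0
    return pre_hook, post_hook

# Hook the attention and MLP sub-blocks of every transformer layer.
for block in model.transformer.h:
    for name, sub in (("attention", block.attn), ("mlp", block.mlp)):
        pre, post = make_hooks(name)
        sub.register_forward_pre_hook(pre)
        sub.register_forward_hook(post)

inputs = tokenizer("Energy-efficient inference matters.", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

total = sum(timings.values())
for name, t in timings.items():
    print(f"{name}: {t*1e3:.2f} ms ({100*t/total:.1f}% of hooked compute)")
```

The same hook structure extends to embeddings, layer norms, and the LM head if a finer breakdown is wanted.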
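The CAFL-L entry points at a broader pattern worth sketching: treating a per-device resource budget as a constraint and enforcing it with a Lagrange multiplier that the server updates by dual ascent. The following is a minimal illustrative sketch under assumed choices (a toy linear model, synthetic data, and squared update magnitude as a proxy for per-round communication/energy cost); it is not the paper's actual algorithm.

```python
# Illustrative sketch of constraint-aware federated learning with a
# Lagrangian dual update. All modeling choices here (linear regression,
# synthetic clients, update magnitude as the "resource cost") are
# assumptions for demonstration purposes only.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, rounds, local_steps = 4, 8, 40, 5
budget, dual_lr, lr = 0.05, 0.5, 0.05  # resource budget and step sizes

# Synthetic linear-regression data for each client.
w_true = rng.normal(size=dim)
clients = []
for _ in range(num_clients):
    X = rng.normal(size=(64, dim))
    y = X @ w_true + 0.1 * rng.normal(size=64)
    clients.append((X, y))

w_global = np.zeros(dim)  # global model
lam = 0.0                 # Lagrange multiplier for the resource constraint

for _ in range(rounds):
    updates, costs = [], []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(local_steps):
            # Gradient of the local Lagrangian f_i(w) + lam * ||w - w_global||^2:
            # the quadratic term stands in for per-round update cost, and a
            # larger lam shrinks local updates when the budget is exceeded.
            grad = X.T @ (X @ w - y) / len(y) + 2 * lam * (w - w_global)
            w -= lr * grad
        updates.append(w)
        costs.append(float(np.sum((w - w_global) ** 2)))
    w_global = np.mean(updates, axis=0)                        # FedAvg step
    lam = max(0.0, lam + dual_lr * (np.mean(costs) - budget))  # dual ascent

print("multiplier:", round(lam, 3),
      "fit error:", round(float(np.linalg.norm(w_global - w_true)), 3))
```

The dual ascent step raises the multiplier only while the average per-round cost exceeds the budget, so the constraint is enforced softly without hand-tuning a fixed penalty weight.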
Sources
Energy Efficiency in Cloud-Based Big Data Processing for Earth Observation: Gap Analysis and Future Directions
Edge-FIT: Federated Instruction Tuning of Quantized LLMs for Privacy-Preserving Smart Home Environments
CAFL-L: Constraint-Aware Federated Learning with Lagrangian Dual Optimization for On-Device Language Models
Intelligent Healthcare Ecosystems: Optimizing the Iron Triangle of Healthcare (Access, Cost, Quality)
Towards Carbon-Aware Container Orchestration: Predicting Workload Energy Consumption with Federated Learning
FedSRD: Sparsify-Reconstruct-Decompose for Communication-Efficient Federated Large Language Models Fine-Tuning