Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization

Zhe Li, Bicheng Ying, Zidong Liu, Chaosheng Dong, Haibo Yang

Abstract

Federated Learning (FL) offers a promising framework for collaborative and privacy-preserving machine learning across distributed data sources. However, the substantial communication costs associated with FL significantly challenge its efficiency. Specifically, in each communication round, the communication costs scale linearly with the model's dimension, which presents a formidable obstacle, especially in large model scenarios. Despite various communication-efficient strategies, this intrinsic dimension-dependent communication cost remains a major bottleneck for current FL implementations. This paper proposes a novel dimension-free communication algorithm, DeComFL, which leverages zeroth-order optimization techniques and reduces the communication cost from $\mathscr{O}(d)$ to $\mathscr{O}(1)$ by transmitting only a constant number of scalar values between clients and the server in each round, regardless of the dimension $d$ of the model parameters. Theoretically, for non-convex functions, we prove that our algorithm achieves state-of-the-art convergence rates, which exhibit a linear speedup in the number of clients and local steps under standard assumptions. Under an additional low-effective-rank assumption, we further show that the convergence rate is independent of the model dimension $d$ as well. Empirical evaluations, encompassing both classic deep learning training and large language model fine-tuning, demonstrate significant reductions in communication overhead. Notably, DeComFL achieves this by transmitting only around 1 MB of data in total between the server and a client to fine-tune a model with billions of parameters. Our code is available at https://github.com/ZidongLiu/DeComFL.
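To make the mechanism concrete, here is a minimal PyTorch sketch of the core idea, assuming a two-point (SPSA-style) zeroth-order gradient estimator and a random seed shared between server and clients; the function names and the hyperparameters `eps` and `lr` are illustrative, not the authors' exact implementation:

```python
import torch


@torch.no_grad()
def zo_scalar_grad(params, loss_fn, seed, eps=1e-3):
    """Two-point zeroth-order estimate of the directional derivative of
    the loss along a random direction z. Only the returned scalar needs
    to be communicated; z itself is reproducible from the shared seed."""
    torch.manual_seed(seed)
    z = [torch.randn_like(p) for p in params]
    for p, zi in zip(params, z):   # perturb to theta + eps * z
        p.add_(eps * zi)
    loss_plus = loss_fn()
    for p, zi in zip(params, z):   # perturb to theta - eps * z
        p.sub_(2 * eps * zi)
    loss_minus = loss_fn()
    for p, zi in zip(params, z):   # restore theta
        p.add_(eps * zi)
    return float(loss_plus - loss_minus) / (2 * eps)


@torch.no_grad()
def apply_scalar_update(params, grad_scalar, seed, lr):
    """Regenerate z from the shared seed and take the step
    theta <- theta - lr * grad_scalar * z. The update is fully
    determined by (seed, grad_scalar), so no d-dimensional vector
    ever crosses the network."""
    torch.manual_seed(seed)
    for p in params:
        p.sub_(lr * grad_scalar * torch.randn_like(p))
```

In one round under this sketch, the server broadcasts a fresh seed, each sampled client computes one scalar via `zo_scalar_grad` on its local data and uploads it, the server averages the scalars, and both sides apply `apply_scalar_update` to stay synchronized; per-round traffic is therefore a handful of scalars, independent of the model dimension $d$.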

Code Repositories

ZidongLiu/DeComFL (official implementation, PyTorch)

Benchmarks

Benchmark                 Methodology   Test Accuracy
classification-on-boolq   OPT-1.3B      62.5%
classification-on-boolq   OPT-125M      61.6%
classification-on-cb      OPT-125M      75%
classification-on-cb      OPT-1.3B      75.71%
classification-on-rte     OPT-1.3B      60.89%
classification-on-rte     OPT-125M      57.05%
classification-on-sst-2   OPT-125M      85.08%
classification-on-sst-2   OPT-1.3B      90.78%
classification-on-wic     OPT-1.3B      56.14%
classification-on-wic     OPT-125M      53.38%
classification-on-wsc     OPT-125M      59.59%
classification-on-wsc     OPT-1.3B      64.16%
