OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning

Chu Myaet Thwal, Minh N.H. Nguyen, Ye Lin Tun, Seong Tae Kim, My T. Thai, Choong Seon Hong

Abstract

Federated learning (FL) has emerged as a promising approach to collaboratively train machine learning models across multiple edge devices while preserving privacy. The success of FL hinges on the efficiency of participating models and their ability to handle the unique challenges of distributed learning. While several variants of Vision Transformer (ViT) have shown great potential as alternatives to modern convolutional neural networks (CNNs) for centralized training, the unprecedented size and higher computational demands hinder their deployment on resource-constrained edge devices, challenging their widespread application in FL. Since client devices in FL typically have limited computing resources and communication bandwidth, models intended for such devices must strike a balance between model size, computational efficiency, and the ability to adapt to the diverse and non-IID data distributions encountered in FL. To address these challenges, we propose OnDev-LCT: Lightweight Convolutional Transformers for On-Device vision tasks with limited training data and resources. Our models incorporate image-specific inductive biases through the LCT tokenizer by leveraging efficient depthwise separable convolutions in residual linear bottleneck blocks to extract local features, while the multi-head self-attention (MHSA) mechanism in the LCT encoder implicitly facilitates capturing global representations of images. Extensive experiments on benchmark image datasets indicate that our models outperform existing lightweight vision models while having fewer parameters and lower computational demands, making them suitable for FL scenarios with data heterogeneity and communication bottlenecks.
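
The architecture pairs a convolutional tokenizer with a transformer encoder. The following PyTorch sketch is illustrative only, not the authors' implementation: it shows how depthwise separable convolutions inside a residual linear bottleneck block can extract local features, and how a multi-head self-attention encoder then mixes the resulting tokens globally. Layer widths, expansion factors, block counts, and the names (`LinearBottleneck`, `LCTStyleModel`) are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch (not the authors' code): a residual linear bottleneck block built
# from depthwise separable convolutions, followed by a standard MHSA encoder.
# All layer sizes, expansion factors, and the overall wiring are illustrative assumptions.
import torch
import torch.nn as nn


class LinearBottleneck(nn.Module):
    """Inverted residual block: 1x1 expand -> 3x3 depthwise -> 1x1 linear projection."""

    def __init__(self, in_ch, out_ch, stride=1, expansion=4):
        super().__init__()
        hidden = in_ch * expansion
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),            # pointwise expansion
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),               # depthwise convolution
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, out_ch, 1, bias=False),           # linear projection (no activation)
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out


class LCTStyleModel(nn.Module):
    """Convolutional tokenizer (local features) + transformer encoder (global MHSA)."""

    def __init__(self, in_ch=3, embed_dim=64, depth=2, heads=4, num_classes=10):
        super().__init__()
        # Convolutional tokenizer: downsample the image and embed local features.
        self.tokenizer = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32),
            nn.SiLU(),
            LinearBottleneck(32, embed_dim, stride=2),
            LinearBottleneck(embed_dim, embed_dim),
        )
        # Transformer encoder: multi-head self-attention over the token sequence.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, dim_feedforward=2 * embed_dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feats = self.tokenizer(x)                   # (B, C, H, W) local feature map
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)               # global mixing via self-attention
        return self.head(tokens.mean(dim=1))        # average-pool tokens, then classify


if __name__ == "__main__":
    model = LCTStyleModel()
    logits = model(torch.randn(2, 3, 32, 32))       # e.g. CIFAR-10-sized inputs
    params = sum(p.numel() for p in model.parameters())
    print(logits.shape, f"{params / 1e6:.2f}M parameters")
```

Keeping the tokenizer fully convolutional gives the model image-specific inductive biases and a small footprint, while the attention layers operate on a short token sequence, which is what makes this family of designs attractive for resource-constrained federated clients.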

Benchmarks

Image Classification on CIFAR-10

| Methodology   | Parameters | Top-1 Accuracy (%) |
|---------------|------------|--------------------|
| OnDev-LCT-1/1 | 0.21M      | 84.55              |
| OnDev-LCT-2/1 | 0.31M      | 86.27              |
| OnDev-LCT-4/1 | 0.51M      | 86.61              |
| OnDev-LCT-8/1 | 0.91M      | 86.64              |
| OnDev-LCT-1/3 | 0.25M      | 85.73              |
| OnDev-LCT-2/3 | 0.35M      | 86.04              |
| OnDev-LCT-4/3 | 0.55M      | 87.03              |
| OnDev-LCT-8/3 | 0.95M      | 87.65              |

Image Classification on EMNIST Balanced

| Methodology   | Trainable Parameters | Accuracy (%) |
|---------------|----------------------|--------------|
| OnDev-LCT-1/1 | 216,208              | 89.52        |
| OnDev-LCT-2/1 | 315,792              | 89.18        |
| OnDev-LCT-4/1 | 514,960              | 89.39        |
| OnDev-LCT-8/1 | 913,296              | 89.55        |
