Learning Vision Transformer with Squeeze and Excitation for Facial Expression Recognition
Mouath Aouayeb, Wassim Hamidouche, Catherine Soladie, Kidiyo Kpalma, Renaud Seguier

Abstract
As various databases of facial expressions have been made publicly available over the last few decades, the Facial Expression Recognition (FER) task has attracted considerable interest. The heterogeneity of these databases raises several challenges for the FER task, which are usually addressed with Convolutional Neural Network (CNN) architectures. Unlike CNN models, the Transformer, a model based on the attention mechanism, has recently been introduced for vision tasks. One of the major issues with Transformers is their need for large amounts of training data, while most FER databases are limited compared to those of other vision applications. Therefore, in this paper we propose to learn a vision Transformer jointly with a Squeeze-and-Excitation (SE) block for the FER task. The proposed method is evaluated on several publicly available FER databases, including CK+, JAFFE, RAF-DB and SFEW. Experiments demonstrate that our model outperforms state-of-the-art methods on CK+ and SFEW and achieves competitive results on JAFFE and RAF-DB.
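To make the SE component concrete, below is a minimal, framework-free sketch of a Squeeze-and-Excitation block applied to a matrix of ViT token features (num_tokens x channels). This is an illustration of the general SE mechanism, not the paper's exact implementation; the weight matrices `W1` and `W2` and the function names are hypothetical placeholders.

```python
import math

def sigmoid(x):
    """Logistic function, used to map excitation scores to gates in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def se_block(tokens, W1, W2):
    """Squeeze-and-Excitation over ViT token features.

    tokens : list of token embeddings, each a list of `C` channel values
    W1     : bottleneck weights, shape (C // r, C) for reduction ratio r
    W2     : expansion weights, shape (C, C // r)
    """
    C = len(tokens[0])
    # Squeeze: global average pooling over the token axis -> one value per channel
    z = [sum(t[c] for t in tokens) / len(tokens) for c in range(C)]
    # Excitation: bottleneck MLP (ReLU then sigmoid) yields a per-channel gate
    h = [max(0.0, v) for v in matvec(W1, z)]
    s = [sigmoid(v) for v in matvec(W2, h)]
    # Recalibrate: rescale each channel of every token by its learned gate
    return [[x * g for x, g in zip(t, s)] for t in tokens]
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize the most informative feature maps; in the paper this recalibration is learned jointly with the Transformer for FER.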
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| facial-expression-recognition-on-ck | ViT + SE | Accuracy (7 emotions): 99.8 |
| facial-expression-recognition-on-jaffe | ViT | Accuracy: 94.83 |
| facial-expression-recognition-on-rafd | ViT + SE | Accuracy: 87.22 |
| facial-expression-recognition-on-sfew | ViT + SE | Accuracy: 54.29 |