Uni-Sign: Toward Unified Sign Language Understanding at Scale
Zecheng Li; Wengang Zhou; Weichao Zhao; Kepeng Wu; Hezhen Hu; Houqiang Li

Abstract
Sign language pre-training has gained increasing attention for its ability to enhance performance across various sign language understanding (SLU) tasks. However, existing methods often suffer from a gap between pre-training and fine-tuning, leading to suboptimal results. To address this, we propose Uni-Sign, a unified pre-training framework that eliminates the gap between pre-training and downstream SLU tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. First, we introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video paired with textual annotations, which enables effective large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating downstream tasks as a single sign language translation (SLT) task during fine-tuning, ensuring seamless knowledge transfer between pre-training and fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and a score-aware sampling strategy to efficiently fuse pose and RGB information, addressing keypoint inaccuracies and improving computational efficiency. Extensive experiments on multiple SLU benchmarks demonstrate that Uni-Sign achieves state-of-the-art performance across downstream SLU tasks. Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
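The abstract describes score-aware sampling only at a high level. As a rough illustration, the sketch below assumes the strategy routes the frames with the lowest keypoint-confidence scores through the RGB branch, so RGB features compensate exactly where pose estimation is least reliable. The function name, tensor shapes, and selection rule are hypothetical, not the paper's actual API; the real PGF and sampling logic are in the released repository.

```python
import torch

def score_aware_sampling(pose_scores: torch.Tensor, num_samples: int) -> torch.Tensor:
    """Hypothetical sketch: pick the frames whose mean keypoint confidence
    is lowest, so only those frames are routed through the costly RGB branch.

    pose_scores: (T,) per-frame mean keypoint confidence in [0, 1].
    num_samples: how many frames to fuse with RGB features.
    """
    k = min(num_samples, pose_scores.numel())
    # Lowest-confidence frames benefit most from complementary RGB cues.
    _, frame_ids = torch.topk(pose_scores, k=k, largest=False)
    return torch.sort(frame_ids).values  # restore temporal order

# Toy usage: 8 frames, fuse RGB for the 3 least reliable ones.
scores = torch.tensor([0.9, 0.4, 0.8, 0.3, 0.95, 0.7, 0.2, 0.85])
print(score_aware_sampling(scores, 3))  # tensor([1, 3, 6])
```

The point of such a criterion is computational: only a fixed budget of frames incurs the RGB cost, while the pose stream covers the rest.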
Code Repositories
- github.com/ZechengLi19/Uni-Sign (official)
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| Sign Language Recognition on CSL-Daily | Uni-Sign | Word Error Rate (WER): 26.0 |
| Sign Language Recognition on MS-ASL | Uni-Sign | Per-Class Top-1 Accuracy: 76.97; Per-Instance Top-1 Accuracy: 78.16 |
| Sign Language Recognition on WLASL-2000 | Uni-Sign | Top-1 Accuracy: 63.52 |
| Sign Language Recognition on WLASL-100 | Uni-Sign | Top-1 Accuracy: 92.25 |