HyperAI

Named Entity Recognition on SLUE

Metrics

F1 (%)
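As a rough illustration of the metric, the following is a minimal sketch of an entity-level micro-averaged F1, assuming predicted and gold entities are compared as (type, text) pairs per utterance; the official SLUE scoring script may normalize text and handle duplicates differently, so this is only an approximation.

```python
from collections import Counter

def entity_f1(gold, pred):
    """Micro-averaged F1 over (entity_type, entity_text) pairs.

    gold, pred: lists of per-utterance entity lists, where each entity
    is a (type, text) tuple. Duplicates count with multiplicity.
    NOTE: illustrative only; not the official SLUE evaluation code.
    """
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g_counts, p_counts = Counter(g), Counter(p)
        overlap = sum((g_counts & p_counts).values())  # matched entities
        tp += overlap
        fp += sum(p_counts.values()) - overlap  # predicted but not gold
        fn += sum(g_counts.values()) - overlap  # gold but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One utterance: one entity matched, one predicted wrong, one missed.
gold = [[("PLACE", "new york"), ("ORG", "nasa")]]
pred = [[("PLACE", "new york"), ("ORG", "fbi")]]
print(entity_f1(gold, pred))  # 0.5
```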

Results

Performance results of various models on this benchmark

| Model Name | F1 (%) | Paper Title | Repository |
|---|---|---|---|
| Wav2Seq (from HuBERT-large) | 65.4 | Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages | |
| W2V2-L-LL60K (pipeline approach, uses LM) | 69.6 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-B-LS960 (pipeline approach) | 49.5 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| HuBERT-B-LS960 (e2e approach, uses LM) | 61.9 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-B-LS960 (e2e approach) | 50.2 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-L-LL60K (e2e approach, uses LM) | 64.8 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-L-LL60K (pipeline approach) | 57.8 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-B-VP100K (e2e approach, uses LM) | 61.8 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| HuBERT-B-LS960 (e2e approach) | 49.8 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-B-LS960 (pipeline approach, uses LM) | 68.0 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-L-LL60K (e2e approach) | 50.9 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-B-VP100K (e2e approach) | 47.9 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |
| W2V2-B-LS960 (e2e approach, uses LM) | 63.4 | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | |