# Relation Extraction On Chemprot
## Metrics

Micro F1
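Micro F1 pools true positives, false positives, and false negatives across all relation classes before computing a single precision and recall, so frequent classes weigh more than rare ones. Below is a minimal sketch of the computation with scikit-learn; the gold and predicted labels are made up, and restricting scoring to the five positive CPR classes mirrors the usual ChemProt setup rather than any official evaluation script.

```python
from sklearn.metrics import f1_score

# Made-up gold and predicted labels for a handful of chemical-protein
# candidate pairs; ChemProt scores the five positive CPR classes and
# leaves "no relation" pairs out of the micro average.
gold = ["CPR:3", "CPR:4", "CPR:4", "CPR:5", "CPR:6", "CPR:9", "CPR:4"]
pred = ["CPR:3", "CPR:4", "CPR:9", "CPR:5", "CPR:6", "CPR:9", "CPR:3"]

positive_classes = ["CPR:3", "CPR:4", "CPR:5", "CPR:6", "CPR:9"]

# Micro averaging: sum TP/FP/FN over the listed classes, then compute
# precision, recall, and F1 once from the pooled counts.
micro_f1 = f1_score(gold, pred, labels=positive_classes, average="micro")
print(f"Micro F1: {micro_f1:.4f}")  # 5 TP, 2 FP, 2 FN -> 0.7143
```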
## Results

Performance results of various models on this benchmark.
| Model Name | Micro F1 | Paper Title | Repository |
| --- | --- | --- | --- |
| CharacterBERT (base, medical) | 73.44 | CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters | - |
| BioM-BERT | - | BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA | - |
| SciBert (Finetune) | - | SciBERT: A Pretrained Language Model for Scientific Text | - |
| SciBERT (Base Vocab) | - | SciBERT: A Pretrained Language Model for Scientific Text | - |
| ELECTRAMed | - | ELECTRAMed: a new pre-trained language representation model for biomedical NLP | - |
| PubMedBERT uncased | 77.24 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | - |
| SciFive Large | - | SciFive: a text-to-text transformer model for biomedical literature | - |
| BioLinkBERT (large) | 79.98 | LinkBERT: Pretraining Language Models with Document Links | - |
| KeBioLM | - | Improving Biomedical Pretrained Language Models with Knowledge | - |
| BioMegatron | - | BioMegatron: Larger Biomedical Domain Language Model | - |
| BioT5X (base) | - | SciFive: a text-to-text transformer model for biomedical literature | - |
| BioBERT | - | BioBERT: a pre-trained biomedical language representation model for biomedical text mining | - |
| NCBI_BERT(large) (P) | - | Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets | - |
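Most entries in this table cast ChemProt relation extraction as sentence classification: the chemical and gene/protein mentions in a candidate pair are replaced with typed placeholders (e.g. @CHEMICAL$ and @GENE$, one common preprocessing convention) and an encoder predicts one of the CPR classes or "no relation". The sketch below shows that setup with Hugging Face transformers; the generic bert-base-uncased checkpoint, the label count, and the example sentence are illustrative assumptions, not the configuration behind any leaderboard row.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint: any BERT-style encoder works here; this is not
# the exact model behind any entry in the table above.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=6,  # five positive CPR relation classes plus "no relation"
)

# ChemProt-style input: the candidate entity pair is replaced with
# typed placeholders so the classifier knows which pair to label.
sentence = "@CHEMICAL$ potently inhibits the kinase activity of @GENE$."
inputs = tokenizer(sentence, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```

Fine-tuning such a classifier on the ChemProt training split, then computing micro F1 over the five positive classes as in the earlier snippet, is the evaluation these leaderboard numbers reflect.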