Bálint Csanády, Lajos Muzsai, Péter Vedres, Zoltán Nádasdy, András Lukács

Abstract
Large Language Models (LLMs), such as GPT-4 and Llama 2, show remarkable proficiency in a wide range of natural language processing (NLP) tasks. Despite their effectiveness, the high costs associated with their use pose a challenge. We present LlamBERT, a hybrid approach that leverages LLMs to annotate a small subset of large, unlabeled databases and uses the results for fine-tuning transformer encoders like BERT and RoBERTa. This strategy is evaluated on two diverse datasets: the IMDb review dataset and the UMLS Meta-Thesaurus. Our results indicate that the LlamBERT approach slightly compromises on accuracy while offering much greater cost-effectiveness.
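The two-stage pipeline described above is simple enough to sketch in code. Below is a minimal Python illustration using the Hugging Face transformers and datasets libraries: an LLM labels a small subset of unlabeled text, and those labels are then used to fine-tune a RoBERTa classifier. The model names, prompt wording, and toy data here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the LlamBERT-style pipeline:
# (1) label a small subset of an unlabeled corpus with an LLM,
# (2) fine-tune an encoder (RoBERTa) on the resulting labels.
# Model choices and prompt are assumptions for illustration only.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
    pipeline,
)
from datasets import Dataset

# Step 1: annotate with an instruction-tuned LLM (assumed model; gated on the Hub).
llm = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

def llm_label(text: str) -> int:
    """Ask the LLM for a binary sentiment label and parse its reply."""
    prompt = (
        "Classify the sentiment of this movie review as 'positive' or "
        f"'negative'.\nReview: {text}\nAnswer:"
    )
    reply = llm(prompt, max_new_tokens=5)[0]["generated_text"]
    # The pipeline echoes the prompt, so only inspect the continuation.
    return 1 if "positive" in reply[len(prompt):].lower() else 0

unlabeled_subset = ["A stunning, heartfelt film.", "Dull and far too long."]
labels = [llm_label(t) for t in unlabeled_subset]

# Step 2: fine-tune an encoder on the LLM-provided labels.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2
)

def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=256
    )

ds = Dataset.from_dict({"text": unlabeled_subset, "label": labels})
ds = ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llambert_out",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=ds,
)
trainer.train()
```

In practice the LLM would label tens of thousands of examples rather than two, and the fine-tuned encoder then handles the full corpus at a fraction of the LLM's inference cost.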
Benchmarks
| Benchmark | Model | Accuracy (%) |
|---|---|---|
| sentiment-analysis-on-imdb | Llama-2-70b-chat (0-shot) | 95.39 |
| sentiment-analysis-on-imdb | RoBERTa-large with LlamBERT | 96.68 |
| sentiment-analysis-on-imdb | RoBERTa-large | 96.54 |