PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion
Jianhao Shen Chenguang Wang Ye Yuan Jiawei Han Heng Ji Koushik Sen Ming Zhang Dawn Song

Abstract
This paper presents a parameter-lite transfer learning approach for pretrained language models (LMs) on knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we tune only a few new parameters while keeping the original LM parameters fixed. We achieve this by reformulating KG completion as a "fill-in-the-blank" task and introducing a parameter-lite encoder on top of the original LM. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and are competitive with prior state-of-the-art approaches. For instance, we outperform full finetuning on a KG completion benchmark while tuning only 1% of the parameters. The code and datasets are available at https://github.com/yuanyehome/PALT.
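The following is a minimal sketch of the two ideas in the abstract: verbalizing a KG triple as a cloze ("fill-in-the-blank") prompt for a masked LM, and training only a small new encoder while the LM stays frozen. It assumes a BERT-style LM from the `transformers` library; the prompt template and the bottleneck adapter (`verbalize`, `LiteEncoder`) are illustrative assumptions, not the exact PALT architecture.

```python
# Sketch: KG completion as fill-in-the-blank with a frozen LM plus a small
# trainable encoder. Template and adapter design are illustrative only.
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
lm = BertForMaskedLM.from_pretrained("bert-base-uncased")

# 1) Verbalize an incomplete triple (head, relation, ?) as a cloze sentence;
#    the LM then scores candidate tail entities at the [MASK] position.
def verbalize(head: str, relation: str) -> str:
    # Hypothetical template; real systems map each relation to its own text.
    return f"{head} {relation.replace('_', ' ')} {tokenizer.mask_token}."

prompt = verbalize("Paris", "is_the_capital_of")  # "Paris is the capital of [MASK]."
inputs = tokenizer(prompt, return_tensors="pt")

# 2) Parameter-lite transfer: freeze every original LM parameter ...
for p in lm.parameters():
    p.requires_grad = False

# ... and add a small trainable encoder (here: a residual bottleneck adapter).
class LiteEncoder(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

adapter = LiteEncoder()

trainable = sum(p.numel() for p in adapter.parameters())
total = sum(p.numel() for p in lm.parameters()) + trainable
print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")

# Only the new encoder receives gradients; the LM forward pass is frozen.
with torch.no_grad():
    hidden = lm.bert(**inputs).last_hidden_state
scores = adapter(hidden)  # trainable path; scoring of tails would go here
```

Printing the parameter counts makes the "parameter-lite" point concrete: the adapter adds on the order of 10^5 weights against the LM's roughly 10^8, i.e. about 0.1% of the total in this toy configuration.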
Benchmarks
| Benchmark | Method | Hits@10 | MR |
|---|---|---|---|
| link-prediction-on-fb15k-237 | PALT | 0.444 | 144 |
| link-prediction-on-umls | PALT | 0.990 | 1.57 |
| link-prediction-on-wn18rr | PALT | 0.693 | 61 |