PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion

Jianhao Shen, Chenguang Wang, Ye Yuan, Jiawei Han, Heng Ji, Koushik Sen, Ming Zhang, Dawn Song


Abstract

This paper presents a parameter-lite transfer learning approach for pretrained language models (LMs) to knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we tune only a few new parameters while keeping the original LM parameters fixed. We achieve this by reformulating KG completion as a "fill-in-the-blank" task and introducing a parameter-lite encoder on top of the original LM. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and are competitive with prior state-of-the-art approaches. For instance, we outperform full finetuning approaches on a KG completion benchmark by tuning only 1% of the parameters. The code and datasets are available at \url{https://github.com/yuanyehome/PALT}.
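The following is a minimal PyTorch sketch of the idea described in the abstract: a KG triple is recast as a cloze ("fill-in-the-blank") sentence, all original LM parameters are frozen, and only a small new module is trained. The `LiteAdapter` module, its bottleneck size, and the cloze template are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
lm = BertForMaskedLM.from_pretrained("bert-base-cased")

# Freeze every original LM parameter; only newly added modules will be trained.
for p in lm.parameters():
    p.requires_grad = False

class LiteAdapter(nn.Module):
    """Small bottleneck encoder on top of the frozen LM (assumed design)."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, h):
        # Residual update so the frozen LM's representation is preserved.
        return h + self.up(self.act(self.down(h)))

adapter = LiteAdapter()

# Cloze formulation of a triple, e.g. (Paris, capital_of, ?):
text = "Paris is the capital of [MASK]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    hidden = lm.bert(**inputs).last_hidden_state  # frozen encoder
logits = lm.cls(adapter(hidden))  # only the adapter carries gradients

# Fill in the blank: predict the masked tail entity.
mask_idx = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(int(logits[0, mask_idx].argmax(-1))))

# The trainable fraction stays tiny compared with full finetuning.
trainable = sum(p.numel() for p in adapter.parameters())
total = sum(p.numel() for p in lm.parameters()) + trainable
print(f"trainable fraction: {trainable / total:.2%}")

With this configuration the adapter holds roughly 0.1M of BERT-base's ~110M parameters; the paper's reported budget of about 1% of parameters presumably covers its full set of new modules.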

Code Repositories

yuanyehome/palt (official, PyTorch)

Benchmarks

Benchmark                      Methodology   Metrics
link-prediction-on-fb15k-237   PALT          Hits@10: 0.444, MR: 144
link-prediction-on-umls        PALT          Hits@10: 0.990, MR: 1.57
link-prediction-on-wn18rr      PALT          Hits@10: 0.693, MR: 61
