

Downstream Model Design of Pre-trained Language Model for Relation Extraction Task

Cheng Li, Ye Tian


Abstract

Supervised relation extraction methods based on deep neural networks play an important role in the recent information extraction field. However, their performance still falls short of a good level because of the existence of complicated relations. On the other hand, recently proposed pre-trained language models (PLMs) have achieved great success on multiple natural language processing tasks when fine-tuned in combination with downstream task models. However, the original standard tasks of PLMs do not yet include relation extraction. We believe that PLMs can also be used to solve the relation extraction problem, but a specially designed downstream model, and even a special loss function, is needed to deal with complicated relations. In this paper, a new network architecture with a special loss function is designed to serve as the downstream model of PLMs for supervised relation extraction. Experiments show that our method significantly outperforms the current best baseline models on multiple public relation extraction datasets.
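The abstract does not spell out the architecture, so the sketch below is only a rough illustration of the general setup it describes: a PLM encoder with a task-specific downstream head for supervised relation extraction. It pools the encoder states at the two entity positions and classifies their concatenation. The head design, the bert-base-uncased backbone, num_relations=19 (the SemEval-2010 Task 8 label set), and the plain cross-entropy loss standing in for the paper's special loss function are all illustrative assumptions, not REDN's actual design.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class RelationExtractionHead(nn.Module):
    """Illustrative downstream relation classifier on top of a PLM encoder
    (an assumed design, not the paper's REDN architecture)."""

    def __init__(self, plm_name: str = "bert-base-uncased", num_relations: int = 19):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.encoder.config.hidden_size
        # Score the concatenated head-entity and tail-entity representations.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, num_relations),
        )

    def forward(self, input_ids, attention_mask, head_pos, tail_pos):
        # Token representations from the PLM: (batch, seq_len, hidden).
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(states.size(0), device=states.device)
        head_vec = states[batch, head_pos]  # representation at the head-entity token
        tail_vec = states[batch, tail_pos]  # representation at the tail-entity token
        return self.classifier(torch.cat([head_vec, tail_vec], dim=-1))


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = RelationExtractionHead()
    enc = tokenizer("Paris is the capital of France.", return_tensors="pt")
    # Token indices of the two entity mentions ("paris" and "france").
    logits = model(enc["input_ids"], enc["attention_mask"],
                   head_pos=torch.tensor([1]), tail_pos=torch.tensor([6]))
    # Plain cross-entropy stands in for the paper's special loss function.
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([3]))
    loss.backward()
```

In this generic setup, the PLM is fine-tuned jointly with the downstream head; the paper's contribution is precisely in replacing such a standard head and loss with ones designed for complicated relations.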

Code Repositories

slczgwh/REDN (PyTorch), mentioned in GitHub

Benchmarks

Benchmark: relation-extraction-on-semeval-2010-task-8
Methodology: REDN
Metrics: F1 = 91
