DocBERT: BERT for Document Classification

Ashutosh Adhikari; Achyudh Ram; Raphael Tang; Jimmy Lin

Abstract

We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. The primary contribution of our paper is improved baselines that can provide the foundation for future work.
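
The distillation step described in the abstract (a fine-tuned BERT teacher transferring its knowledge to a small bidirectional LSTM student) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation (see castorini/hedwig for the official code): the model names, dimensions, tokenizer, loss weighting, and the BiLSTMStudent/distillation_step helpers are assumptions made for illustration. The sketch mirrors the general recipe of combining a hard-label classification loss with a soft term computed against the teacher's logits.

```python
# Illustrative sketch of distilling a BERT document classifier into a small
# BiLSTM student. Names, dimensions, and the loss weighting are assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_LABELS = 4  # assumption: a single-label dataset with 4 classes

# Teacher: in practice this would be a BERT-large checkpoint already
# fine-tuned for document classification; here it is loaded for illustration.
teacher = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=NUM_LABELS
)
teacher.eval()

class BiLSTMStudent(nn.Module):
    """Small bidirectional LSTM classifier used as the distillation student."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256, num_labels=NUM_LABELS):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, input_ids):
        embedded = self.embedding(input_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states.
        pooled = torch.cat([hidden[-2], hidden[-1]], dim=-1)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
student = BiLSTMStudent(vocab_size=tokenizer.vocab_size)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(input_ids, labels, alpha=0.5):
    """One training step: cross-entropy on hard labels plus MSE to teacher logits."""
    with torch.no_grad():
        teacher_logits = teacher(input_ids=input_ids).logits
    student_logits = student(input_ids)
    loss = alpha * F.cross_entropy(student_logits, labels) + \
           (1 - alpha) * F.mse_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Two simplifications to note: the student here reuses the BERT tokenizer, whereas a word-level vocabulary with pretrained embeddings is more typical for LSTM students, and the single-label cross-entropy term would be replaced by a sigmoid/BCE output for the multi-label datasets mentioned in the abstract.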

Code Repositories

dki-lab/covid19-classification (PyTorch)
castorini/hedwig (official; PyTorch)

Benchmarks

Benchmark | Methodology | Metric
Document Classification on AAPD | KD-LSTMreg | F1: 72.9
Document Classification on Reuters-21578 | KD-LSTMreg | F1: 88.9
Document Classification on Yelp-14 | KD-LSTMreg | Accuracy: 69.4
Text Classification on arXiv-10 | DocBERT | Accuracy: 0.764
