Domain-matched Pre-training Tasks for Dense Retrieval

Abstract

Pre-training on larger datasets with ever-increasing model size is now a proven recipe for increased performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a pre-existing dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
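
The core technique is a bi-encoder trained contrastively on query-passage (or post-comment) pairs. Below is a minimal sketch of that setup in PyTorch with Hugging Face transformers; the checkpoint name (bert-base-uncased), [CLS] pooling, dot-product scoring, and in-batch negatives are standard DPR-style choices assumed for illustration, not necessarily the paper's exact configuration.

```python
# Minimal bi-encoder pre-training sketch (illustrative, not the official dpr-scale code).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumption: a BERT-base backbone for both towers; the paper uses large bi-encoders.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
query_encoder = AutoModel.from_pretrained("bert-base-uncased")
passage_encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(encoder, texts):
    # Map a batch of strings to fixed-size vectors via the [CLS] token.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]  # (batch, hidden)

def in_batch_contrastive_loss(queries, passages):
    # Each query's positive is its paired passage; every other passage
    # in the batch serves as a negative (in-batch negatives).
    q = encode(query_encoder, queries)     # (B, H)
    p = encode(passage_encoder, passages)  # (B, H)
    scores = q @ p.T                       # dot-product similarity, (B, B)
    labels = torch.arange(scores.size(0))  # diagonal entries are the positives
    return F.cross_entropy(scores, labels)

# Example: one step on synthetic question-passage pairs (PAQ-style data).
loss = in_batch_contrastive_loss(
    ["who wrote the origin of species",
     "when did the apollo 11 mission land"],
    ["On the Origin of Species was written by Charles Darwin in 1859.",
     "Apollo 11 landed on the Moon on July 20, 1969."],
)
loss.backward()
```

The same objective applies to the Reddit post-comment pairs: the domain-matched signal comes from choosing pre-training pairs that resemble the downstream retrieval task, not from changing the loss.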

Code Repositories

facebookresearch/dpr-scale (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
Passage Retrieval on Natural Questions | DPR-PAQ | Precision@20: 84.68; Precision@100: 89.22
