e-SNLI: Natural Language Inference with Natural Language Explanations

Oana-Maria Camburu; Tim Rocktäschel; Thomas Lukasiewicz; Phil Blunsom

Abstract

In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time. In this work, we extend the Stanford Natural Language Inference dataset with an additional layer of human-annotated natural language explanations of the entailment relations. We further implement models that incorporate these explanations into their training process and output them at test time. We show how our corpus of explanations, which we call e-SNLI, can be used for various goals, such as obtaining full sentence justifications of a model's decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets. Our dataset thus opens up a range of research directions for using natural language explanations, both for improving models and for asserting their trust.
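The explain-then-predict setup referenced in the benchmark below predicts the label from a generated explanation rather than directly from the sentence pair. The following is a minimal, self-contained PyTorch sketch of that idea; it is not the authors' implementation, and the module sizes, single-layer LSTMs, and teacher-forced decoding are placeholder assumptions for illustration.

```python
# Hedged sketch of an explain-then-predict pipeline (not the authors' code):
# an encoder-decoder generates an explanation, and a classifier predicts the
# entailment label from the explanation alone.
import torch
import torch.nn as nn

class ExplainThenPredict(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encode the premise-hypothesis pair.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Decode an explanation token by token, seeded with the pair encoding.
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.gen_head = nn.Linear(hidden_dim, vocab_size)
        # Predict the label from the explanation only.
        self.expl_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.cls_head = nn.Linear(hidden_dim, num_labels)

    def forward(self, pair_tokens, expl_tokens):
        # pair_tokens: (B, Tp) premise+hypothesis ids; expl_tokens: (B, Te)
        # gold explanation ids used here for teacher forcing during training.
        _, (h, c) = self.encoder(self.embed(pair_tokens))
        dec_out, _ = self.decoder(self.embed(expl_tokens), (h, c))
        expl_logits = self.gen_head(dec_out)          # (B, Te, vocab_size)
        _, (he, _) = self.expl_encoder(self.embed(expl_tokens))
        label_logits = self.cls_head(he[-1])          # (B, num_labels)
        return expl_logits, label_logits

# Tiny smoke test with random token ids.
model = ExplainThenPredict(vocab_size=1000)
pair = torch.randint(0, 1000, (2, 12))
expl = torch.randint(0, 1000, (2, 8))
expl_logits, label_logits = model(pair, expl)
print(expl_logits.shape, label_logits.shape)  # (2, 8, 1000) and (2, 3)
```

At test time one would decode the explanation greedily or with beam search and feed the generated tokens, rather than the gold explanation, to the classifier.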

Code Repositories

qtli/eib (PyTorch), mentioned in GitHub

Benchmarks

Benchmark: natural-language-inference-on-e-snli
Methodology: ExplainThenPredictAttention (e-InferSent Bi-LSTM + Attention)
Metrics: Accuracy 81.71; BLEU 27.58
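
On this leaderboard, Accuracy scores the predicted entailment label and BLEU scores the generated explanation against the human-written reference explanations. Below is a rough sketch of computing a corpus-level BLEU score with NLTK; the tokenization, smoothing, and reference handling behind the reported 27.58 are not specified on this page, so these choices and the example data are assumptions.

```python
# Hedged sketch: scoring generated explanations against human references with NLTK BLEU.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical example data: each item may have several reference explanations.
references = [
    [["a", "dog", "is", "an", "animal"]],
    [["people", "cannot", "sleep", "and", "run", "at", "the", "same", "time"]],
]
hypotheses = [
    ["a", "dog", "is", "an", "animal"],
    ["one", "cannot", "sleep", "and", "run", "simultaneously"],
]

# Smoothing avoids zero scores when some n-gram orders have no matches.
smooth = SmoothingFunction().method1
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"corpus BLEU: {100 * score:.2f}")
```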
