Auxiliary Objectives for Neural Error Detection Models

Marek Rei; Helen Yannakoudakis

Abstract

We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.
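The joint learning setup described in the abstract can be pictured as a single bidirectional LSTM encoder feeding several per-token classification heads: one for the main error-detection labels and one per auxiliary objective, with the losses summed during training. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration assuming a hypothetical BiLSTMAuxTagger class, made-up vocabulary and label sizes, a POS-tagging auxiliary task, and an arbitrary auxiliary weight of 0.1.

import torch
import torch.nn as nn

class BiLSTMAuxTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=200,
                 num_error_labels=2, num_pos_tags=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Main objective: per-token correct/incorrect labels.
        self.error_head = nn.Linear(2 * hidden_dim, num_error_labels)
        # Auxiliary objective: predict POS tags from the same representation.
        self.pos_head = nn.Linear(2 * hidden_dim, num_pos_tags)

    def forward(self, token_ids):
        states, _ = self.bilstm(self.embed(token_ids))
        return self.error_head(states), self.pos_head(states)

model = BiLSTMAuxTagger(vocab_size=10000)
criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters())

# One joint training step on a toy batch (random ids and labels for illustration).
tokens = torch.randint(0, 10000, (8, 20))    # 8 sentences, 20 tokens each
error_labels = torch.randint(0, 2, (8, 20))  # main labels, parallel to the tokens
pos_labels = torch.randint(0, 50, (8, 20))   # auxiliary labels, parallel to the tokens

optimiser.zero_grad()
error_logits, pos_logits = model(tokens)
loss = criterion(error_logits.reshape(-1, 2), error_labels.reshape(-1)) \
     + 0.1 * criterion(pos_logits.reshape(-1, 50), pos_labels.reshape(-1))
loss.backward()
optimiser.step()

Because the auxiliary head is only used during training, the model evaluated at test time has the same number of parameters in its main prediction path, which matches the point made in the abstract.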

Benchmarks

Benchmark | Methodology | F0.5
grammatical-error-detection-on-conll-2014-a1 | Bi-LSTM + POS (unrestricted data) | 36.1
grammatical-error-detection-on-conll-2014-a1 | Bi-LSTM + POS (trained on FCE) | 17.5
grammatical-error-detection-on-conll-2014-a2 | Bi-LSTM + POS (trained on FCE) | 26.2
grammatical-error-detection-on-conll-2014-a2 | Bi-LSTM + POS (unrestricted data) | 45.1
grammatical-error-detection-on-fce | Bi-LSTM + err POS GR | 47.7
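F0.5 is the precision-weighted F-measure commonly reported for error detection, weighting precision twice as heavily as recall. Since the table only lists final scores, the following is a short reference computation, assuming precision and recall are obtained elsewhere:

def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Weighted harmonic mean; beta = 0.5 favours precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)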
