A Simple Recipe for Multilingual Grammatical Error Correction

Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, Aliaksei Severyn

Abstract

This paper presents a simple recipe for training state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets, we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German, and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing the cLang-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of the widely used yet noisy Lang-8 dataset. cLang-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that a single fine-tuning step on cLang-8 with off-the-shelf language models yields further accuracy improvements over the already top-performing gT5 model for English.
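The last claim in the abstract is concrete enough to sketch in code: take an off-the-shelf sequence-to-sequence model and run a single fine-tuning pass over cLang-8 source/target pairs. The snippet below is a minimal illustration of that step using Hugging Face Transformers; the checkpoint name ("t5-base"), the file name "clang8.en.tsv", and all hyperparameters are assumptions made for this example, not the paper's setup (the authors scale to models of up to 11B parameters).

```python
# A minimal sketch of a single fine-tuning step on cLang-8 with an
# off-the-shelf seq2seq model. Checkpoint, file name, and hyperparameters
# are illustrative assumptions, not the paper's configuration.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, T5ForConditionalGeneration

class GecDataset(Dataset):
    """Tab-separated (source, target) sentence pairs, one pair per line."""
    def __init__(self, path):
        with open(path, encoding="utf-8") as f:
            self.pairs = [line.rstrip("\n").split("\t", 1)
                          for line in f if "\t" in line]

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        source, target = self.pairs[idx]
        return source, target

tokenizer = AutoTokenizer.from_pretrained("t5-base")   # assumed checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loader = DataLoader(GecDataset("clang8.en.tsv"), batch_size=8, shuffle=True)
model.train()
for sources, targets in loader:
    inputs = tokenizer(list(sources), padding=True, truncation=True,
                       max_length=128, return_tensors="pt")
    labels = tokenizer(list(targets), padding=True, truncation=True,
                       max_length=128, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After training, correction is plain generation: tokenize an ungrammatical sentence and call model.generate on it to obtain the corrected target.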

Code Repositories

google-research-datasets/clang8
Official

Benchmarks

Benchmark                                      Methodology   Metrics
Grammatical Error Correction on CoNLL-2014     T5            F0.5: 68.87
Grammatical Error Correction on Falko-MERLIN   gT5 xxl       F0.5: 75.96
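Both scores above are F0.5, the standard GEC metric: precision is weighted twice as heavily as recall, on the grounds that proposing a wrong correction is usually worse than missing a genuine error. For reference, here is a small Python implementation of the general F-beta formula; the example inputs are arbitrary, not values from the paper.

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """General F-beta score; beta < 1 favours precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Arbitrary illustrative values, not results from the paper:
print(round(f_beta(precision=0.75, recall=0.55), 4))  # -> 0.6992
```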
