An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction
Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui

Abstract
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set ($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task ($F_{0.5}=70.2$) without making any modifications to the model architecture.
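One family of pseudo-data generation methods the paper compares corrupts clean sentences with synthetic errors to obtain (noisy source, clean target) training pairs. The sketch below illustrates that direct-noising idea in plain Python; the edit probabilities and the toy vocabulary are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of direct-noising pseudo-data generation: corrupt a clean
# target sentence with random token-level edits so that (noisy, clean) pairs
# can be used to train a correction model. All probabilities and the
# placeholder vocabulary are assumptions for illustration only.
import random

def inject_noise(tokens, vocab, p_del=0.1, p_sub=0.1, p_swap=0.05,
                 p_ins=0.1, seed=None):
    """Return a noisy copy of `tokens` to serve as the pseudo source side."""
    rng = random.Random(seed)
    noisy = []
    i = 0
    while i < len(tokens):
        r = rng.random()
        if r < p_del:
            i += 1                                    # delete the token
            continue
        if r < p_del + p_sub:
            noisy.append(rng.choice(vocab))           # substitute a random token
        elif r < p_del + p_sub + p_swap and i + 1 < len(tokens):
            noisy.extend([tokens[i + 1], tokens[i]])  # swap adjacent tokens
            i += 2
            continue
        else:
            noisy.append(tokens[i])                   # keep the token
        if rng.random() < p_ins:
            noisy.append(rng.choice(vocab))           # insert a random token
        i += 1
    return noisy

if __name__ == "__main__":
    vocab = ["the", "a", "is", "are", "cat", "dog", "on", "mat"]
    clean = "the cat is on the mat".split()
    print(inject_noise(clean, vocab, seed=0))
```

In practice the corruption would run over a large monolingual corpus, and each noisy sentence is paired with its original as a pseudo training example.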
Code Repositories
- butsugiri/gec-pseudodata (official; PyTorch)
Benchmarks
| Benchmark | Methodology | F0.5 |
|---|---|---|
| Grammatical Error Correction (BEA-2019 test) | Transformer + Pre-train with Pseudo Data | 70.2 |
| Grammatical Error Correction (CoNLL-2014) | Transformer + Pre-train with Pseudo Data | 65.0 |
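Both benchmark rows use the same recipe: pre-train a Transformer on the large pseudo corpus, then continue training on genuine GEC data. Below is a hedged PyTorch sketch of that two-stage regime, assuming a seq2seq model that returns a scalar loss; the names, loaders, and hyperparameters are placeholders, not the paper's configuration.

```python
# Generic sketch of the two-stage "pre-train with pseudo data" regime:
# stage 1 trains on pseudo pairs, stage 2 fine-tunes on genuine data.
# `transformer`, the data loaders, and all hyperparameters are assumptions.
import torch

def run_stage(model, loader, lr, epochs):
    """One training stage; assumes model(src, tgt) returns a scalar loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for src, tgt in loader:
            optimizer.zero_grad()
            loss = model(src, tgt)
            loss.backward()
            optimizer.step()
    return model

# Stage 1: pre-train on (noisy source, clean target) pseudo pairs.
# transformer = run_stage(transformer, pseudo_loader, lr=1e-4, epochs=10)
# Stage 2: fine-tune on genuine learner corpora (e.g., BEA-2019 training data).
# transformer = run_stage(transformer, genuine_loader, lr=3e-5, epochs=30)
```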