George Han, Luke Melas-Kyriazi, Alex Rush

Abstract
Image paragraph captioning models aim to produce detailed descriptions of a source image. These models use similar techniques to standard image captioning models, but they have encountered issues in text generation, notably a lack of diversity between sentences, that have limited their effectiveness. In this work, we consider applying sequence-level training for this task. We find that standard self-critical training produces poor results, but that, when combined with an integrated penalty on trigram repetition, it produces much more diverse paragraphs. This simple training approach improves the best result on the Visual Genome paragraph captioning dataset from 16.9 to 30.6 CIDEr, with gains on METEOR and BLEU as well, without requiring any architectural changes.
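
As a rough illustration of the approach described in the abstract, the sketch below combines a self-critical (SCST-style) reward with a trigram-repetition penalty. This is a minimal sketch under stated assumptions: the helper names (`cider_reward`, `trigram_repetition_fraction`, `scst_loss_with_rep_penalty`) and the penalty weight are illustrative, not the authors' released implementation.

```python
# Hypothetical sketch: SCST with a trigram-repetition penalty.
# Assumes a reward function `cider_reward(tokens, refs)` and a model that
# provides a greedy caption, a sampled caption, and per-token log-probs.
from collections import Counter

import torch


def trigram_repetition_fraction(tokens):
    """Fraction of trigrams in `tokens` that repeat an earlier trigram."""
    trigrams = [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(trigrams)


def scst_loss_with_rep_penalty(sample_tokens, sample_logprobs,
                               greedy_tokens, refs,
                               cider_reward, rep_weight=2.0):
    """Self-critical loss: advantage = reward(sample) - reward(greedy),
    where each reward is CIDEr minus a weighted trigram-repetition penalty.
    `rep_weight` is an illustrative hyperparameter, not a value from the paper."""
    def reward(tokens):
        return cider_reward(tokens, refs) - rep_weight * trigram_repetition_fraction(tokens)

    advantage = reward(sample_tokens) - reward(greedy_tokens)
    # REINFORCE-style objective: weight the sampled paragraph's log-likelihood
    # (`sample_logprobs` is a 1-D tensor of per-token log-probabilities) by
    # the advantage, and negate to obtain a loss to minimize.
    return -advantage * sample_logprobs.sum()
```

In this framing, the penalty lowers the reward of samples that reuse trigrams, so the self-critical advantage pushes the model toward paragraphs with less sentence-to-sentence repetition; the exact form of the penalty used in the paper may differ from this sketch.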
Benchmarks
| Benchmark | Methodology | BLEU-4 | CIDEr | METEOR |
|---|---|---|---|---|
| image-paragraph-captioning-on-image-paragraph | SCST training, w/ rep. penalty | 10.58 | 30.63 | 17.86 |