Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning

Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi


Abstract

We introduce a method called the Expansion mechanism, which processes the input unconstrained by the number of elements in the sequence. By doing so, the model can learn more effectively than with traditional attention-based approaches. To support this claim, we design a novel architecture, ExpansionNet v2, which achieved strong results on the MS COCO 2014 Image Captioning challenge and the State of the Art in its respective category, with a score of 143.7 CIDEr-D on the offline test split, 140.8 CIDEr-D on the online evaluation server, and 72.9 all-CIDEr on the nocaps validation set. Additionally, we introduce an End to End training algorithm up to 2.8 times faster than established alternatives. Source code is available at: https://github.com/jchenghu/ExpansionNet_v2
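The core idea — processing a sequence at an intermediate length that is decoupled from the input length — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: the function and variable names (`expand_contract`, `queries`, `E`) are hypothetical, and the real ExpansionNet v2 layers are more elaborate. The sketch maps N input vectors onto a fixed number E of learned slots via attention, then contracts back to N positions, so the same parameters handle any input length.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def expand_contract(x, queries):
    """x: (N, d) input sequence; queries: (E, d) learned expansion slots."""
    # Expansion: each of the E slots attends over the N inputs, so the
    # intermediate length E is independent of the input length N.
    attn_fw = softmax(queries @ x.T)      # (E, N)
    expanded = attn_fw @ x                # (E, d)
    # ...fixed-length processing could happen here...
    # Contraction: map the E slots back onto the original N positions.
    attn_bw = softmax(x @ expanded.T)     # (N, E)
    return attn_bw @ expanded             # (N, d)

d, E = 8, 16
queries = rng.standard_normal((E, d))
for N in (5, 12, 30):                     # works for any input length
    out = expand_contract(rng.standard_normal((N, d)), queries)
    print(N, out.shape)                   # output keeps the input length
```

The point of the sketch is the shape discipline: the learned slot count E is a hyperparameter, so the model's internal processing length is not tied to the number of sequence elements it receives.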

Code Repositories

jchenghu/expansionnet_v2 (official, PyTorch)

Benchmarks

Benchmark                           Methodology                           Metrics
image-captioning-on-coco            ExpansionNet v2                       CIDEr: 143.7
image-captioning-on-coco-captions   ExpansionNet v2 (no VL pretraining)   BLEU-1: 83.5, BLEU-4: 42.7, CIDEr: 143.7, METEOR: 30.6, ROUGE-L: 61.1, SPICE: 24.7

