EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning

Jaeyeon Kim Jaeyoon Jung Jinjoo Lee Sang Hoon Woo


Abstract

We propose EnCLAP, a novel framework for automated audio captioning. EnCLAP employs two acoustic representation models, EnCodec and CLAP, along with a pretrained language model, BART. We also introduce a new training objective called masked codec modeling that improves acoustic awareness of the pretrained language model. Experimental results on AudioCaps and Clotho demonstrate that our model surpasses the performance of baseline models. Source code will be available at https://github.com/jaeyeonkim99/EnCLAP. An online demo is available at https://huggingface.co/spaces/enclap-team/enclap.
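The abstract names two core ideas: fusing a sequence-level CLAP embedding with discrete EnCodec codes as input to a BART-style model, and a masked codec modeling (MCM) objective. The sketch below illustrates both in plain Python; it is not the authors' released code, and all dimensions, names, and the masking rate are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' released code) of two EnCLAP ideas:
# (1) the encoder input combines a sequence-level CLAP embedding with
#     embedded discrete EnCodec codes before a BART-style encoder, and
# (2) masked codec modeling (MCM) hides some codes so the model must
#     predict them, improving acoustic awareness.
# All names, dimensions, and the masking rate here are hypothetical.
import random

EMBED_DIM = 8            # hypothetical model dimension
CODEBOOK_SIZE = 16       # hypothetical EnCodec codebook size
MASK_ID = CODEBOOK_SIZE  # hypothetical extra id acting as the [MASK] code

rng = random.Random(0)

# Hypothetical lookup table: one vector per EnCodec code (plus [MASK]).
code_embedding = [
    [rng.uniform(-1.0, 1.0) for _ in range(EMBED_DIM)]
    for _ in range(CODEBOOK_SIZE + 1)
]

def build_encoder_input(encodec_codes, clap_embedding):
    """Prepend the CLAP sequence-level embedding to the embedded code frames."""
    return [clap_embedding] + [code_embedding[c] for c in encodec_codes]

def mask_codes(encodec_codes, mask_prob=0.3):
    """BERT-style masking for MCM: replace codes with MASK_ID at random;
    labels keep the original code at masked positions, -100 elsewhere."""
    masked, labels = [], []
    for code in encodec_codes:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)
            labels.append(code)    # the model must recover this code
        else:
            masked.append(code)
            labels.append(-100)    # position ignored by the MCM loss
    return masked, labels

# Example: five frames of (single-codebook) EnCodec codes plus a CLAP vector.
codes = [3, 7, 7, 1, 12]
clap_vec = [0.0] * EMBED_DIM
encoder_input = build_encoder_input(codes, clap_vec)  # 6 vectors: CLAP + 5 frames
masked, labels = mask_codes(codes)
```

In the real system the embedded sequence would pass through BART's transformer encoder, and the caption would be generated autoregressively by its decoder; the sketch stops at input construction because that is the part the abstract specifies.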

Code Repositories

jaeyeonkim99/enclap (official, PyTorch)

Benchmarks

Benchmark                       Methodology    CIDEr   METEOR  SPICE   SPIDEr
audio-captioning-on-audiocaps   EnCLAP-large   0.8029  0.2554  0.1879  0.4954
audio-captioning-on-audiocaps   EnCLAP-base    0.7795  0.2473  0.1863  0.4829
