BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization

Moreno La Quatra, Luca Cagliero

Abstract

The emergence of attention-based architectures has led to significant improvements in the performance of neural sequence-to-sequence models for text summarization. Although these models have proved effective at summarizing English documents, their portability to other languages is limited, leaving plenty of room for improvement. In this paper, we present BART-IT, a sequence-to-sequence model based on the BART architecture and specifically tailored to the Italian language. The model is pre-trained on a large corpus of Italian text to learn language-specific features and then fine-tuned on several benchmark datasets established for abstractive summarization. The experimental results show that BART-IT outperforms other state-of-the-art models in terms of ROUGE scores despite having a significantly smaller number of parameters. The use of BART-IT can foster the development of interesting NLP applications for the Italian language. Beyond releasing the model to the research community to foster further research and applications, we also discuss the ethical implications of using abstractive summarization models.

Benchmarks

Benchmark: abstractive-text-summarization-on-abstractive

Method     # Parameters (M)   BERTScore   ROUGE-1   ROUGE-2   ROUGE-L
BART-IT    140                73.24       35.42     15.88     25.12
mT5        390                72.77       34.13     15.76     24.84
mBART      610                73.4        36.52     17.52     26.14
IT5-base   220                70.3        33.99     15.59     24.91

Benchmark: abstractive-text-summarization-on-abstractive-1

Method     BERTScore   ROUGE-1   ROUGE-2   ROUGE-L
IT5-base   71.06       32.88     15.53     26.7
mT5        74.69       35.04     17.41     28.68
mBART      75.86       38.91     21.41     32.08
BART-IT    75.36       37.31     19.44     30.41

Benchmark: abstractive-text-summarization-on-wits

Method     BERTScore   ROUGE-1   ROUGE-2   ROUGE-L
IT5-base   77.14       37.98     24.32     34.94
BART-IT    79.28       42.32     28.83     38.84
mBART      78.65       39.32     26.18     35.9
mT5        80.73       40.6      26.9      37.43
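The ROUGE-1 and ROUGE-2 figures above measure unigram and bigram overlap between a generated summary and a reference summary. As an illustration of what these numbers mean (not the evaluation script used in the paper, which presumably relies on a standard ROUGE package), here is a minimal pure-Python sketch of F1-based ROUGE-N:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """F1-based ROUGE-N: n-gram overlap between candidate and reference.

    Tokenization is naive whitespace splitting; real ROUGE implementations
    apply stemming and more careful tokenization.
    """
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    # Clipped overlap: each n-gram counts at most as often as it appears in both.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Toy example: 5 of 6 unigrams and 3 of 5 bigrams match.
cand = "the cat sat on the mat"
ref = "the cat lay on the mat"
print(round(rouge_n(cand, ref, n=1), 4))  # 0.8333
print(round(rouge_n(cand, ref, n=2), 4))  # 0.6
```

ROUGE-L, also reported above, instead scores the longest common subsequence of the two texts, rewarding in-order matches without requiring them to be contiguous.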
