Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning

Yuxin Jiang, Linhan Zhang, Wei Wang

Abstract

Contrastive learning has been demonstrated to be effective in enhancing pre-trained language models (PLMs) to derive superior universal sentence embeddings. However, existing contrastive methods still have two limitations. First, previous works can perform poorly under domain-shift settings, which hinders the practical application of sentence representations. We attribute this low performance to the over-parameterization of PLMs, which have millions of parameters. To alleviate it, we propose PromCSE (Prompt-based Contrastive Learning for Sentence Embeddings), which trains only a small-scale Soft Prompt (i.e., a set of trainable vectors) while keeping the PLM fixed. Second, the commonly used NT-Xent loss function of contrastive learning does not fully exploit hard negatives in supervised learning settings. To this end, we propose integrating an Energy-based Hinge loss to enhance pairwise discriminative power, inspired by the connection between the NT-Xent loss and the Energy-based Learning paradigm. Empirical results on seven standard semantic textual similarity (STS) tasks and a domain-shifted STS task both show the effectiveness of our method compared with the current state-of-the-art sentence embedding models. Our code is publicly available at https://github.com/YJiangcm/PromCSE
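To make the first component concrete, below is a minimal PyTorch sketch of prompt-based tuning under the stated assumptions: every PLM weight is frozen and only a small matrix of soft-prompt vectors, prepended to the token embeddings, receives gradients. The bert-base-uncased backbone, the prompt length of 10, and first-position pooling are illustrative choices, not the authors' exact configuration (PromCSE's prompts may be injected differently, e.g., per layer).

```python
# Sketch of soft-prompt tuning with a frozen PLM (illustrative, not the
# authors' exact setup): only `soft_prompt` is trainable.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", prompt_len=10):
        super().__init__()
        self.plm = AutoModel.from_pretrained(model_name)
        for p in self.plm.parameters():          # freeze the PLM entirely
            p.requires_grad = False
        hidden = self.plm.config.hidden_size
        # The only trainable parameters: prompt_len soft-prompt vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        bsz = input_ids.size(0)
        tok_emb = self.plm.embeddings.word_embeddings(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(bsz, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(bsz, prompt.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.plm(inputs_embeds=inputs_embeds, attention_mask=mask)
        return out.last_hidden_state[:, 0]       # first position as sentence embedding

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = SoftPromptEncoder()
batch = tok(["A sentence.", "Another one."], padding=True, return_tensors="pt")
embeddings = enc(batch["input_ids"], batch["attention_mask"])  # (2, hidden)
```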

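The second component, the Energy-based Hinge term on top of NT-Xent, can be sketched as follows for the supervised setting with one annotated hard negative per anchor. Treating the negative cosine similarity of a pair as its energy, the hinge term pushes each hard negative's energy above the corresponding positive's by a margin. The temperature, margin, and weight below are placeholder values, not the paper's reported hyperparameters.

```python
# Sketch of NT-Xent plus an energy-based hinge on hard negatives
# (hyperparameter values are illustrative placeholders).
import torch
import torch.nn.functional as F

def nt_xent_with_energy_hinge(anchor, positive, hard_negative,
                              temperature=0.05, margin=0.1, weight=1.0):
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(hard_negative, dim=-1)

    # NT-Xent: each anchor is classified against all in-batch positives
    # and all hard negatives via softmax over cosine similarities.
    logits = torch.cat([a @ p.t(), a @ n.t()], dim=1) / temperature  # (B, 2B)
    labels = torch.arange(a.size(0), device=a.device)
    loss_ntxent = F.cross_entropy(logits, labels)

    # Energy-based hinge: with E(x, y) = -cos(x, y), require the hard
    # negative's energy to exceed the positive's by at least `margin`.
    e_pos = -(a * p).sum(dim=-1)
    e_neg = -(a * n).sum(dim=-1)
    loss_hinge = F.relu(margin + e_pos - e_neg).mean()

    return loss_ntxent + weight * loss_hinge
```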
Code Repositories

yjiangcm/promcse (official PyTorch implementation)

Benchmarks

Benchmark                                      Methodology                      Metrics
semantic-textual-similarity-on-cxc             PromCSE-RoBERTa-large (0.355B)   avg ± std: 74.8 ± 1.0
semantic-textual-similarity-on-sick            PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.8243
semantic-textual-similarity-on-sts-benchmark   PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.8787
semantic-textual-similarity-on-sts12           PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.7956
semantic-textual-similarity-on-sts13           PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.8897
semantic-textual-similarity-on-sts14           PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.8381
semantic-textual-similarity-on-sts15           PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.8808
semantic-textual-similarity-on-sts16           PromCSE-RoBERTa-large (0.355B)   Spearman Correlation: 0.8496
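For context on the Spearman numbers above, here is a hypothetical sketch of how an STS score is computed: cosine similarities between embedded sentence pairs are rank-correlated with human-annotated gold scores. The random tensors and gold list below stand in for real model outputs and annotations.

```python
# Hypothetical STS evaluation: Spearman correlation between predicted
# cosine similarities and gold human ratings (data here is made up).
import torch
import torch.nn.functional as F
from scipy.stats import spearmanr

def sts_spearman(emb_a, emb_b, gold_scores):
    sims = F.cosine_similarity(emb_a, emb_b, dim=-1)  # one score per pair
    rho, _pvalue = spearmanr(sims.tolist(), gold_scores)
    return rho

a, b = torch.randn(5, 768), torch.randn(5, 768)       # stand-in embeddings
print(sts_spearman(a, b, [4.5, 3.0, 1.2, 0.4, 2.8]))
```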
