HyperAI


TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis

Zilong Wang Zhaohong Wan Xiaojun Wan

Abstract

Multimodal sentiment analysis is an important research area that predicts a speaker's sentiment tendency through features extracted from the textual, visual, and acoustic modalities. The central challenge is how to fuse the multimodal information. A variety of fusion methods have been proposed, but few of them adopt end-to-end translation models to mine the subtle correlations between modalities. Inspired by the recent success of Transformer in machine translation, we propose a new fusion method, TransModality, to address the task of multimodal sentiment analysis. We assume that translation between modalities contributes to a better joint representation of a speaker's utterance. With Transformer, the learned features embody information from both the source modality and the target modality. We validate our model on multiple multimodal datasets: CMU-MOSI, MELD, and IEMOCAP. The experiments show that our proposed method achieves state-of-the-art performance.
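The fusion idea described above, using one modality's features to attend over another's inside a Transformer, rests on scaled dot-product attention. The sketch below is a minimal pure-Python illustration of that core step, not the authors' implementation; the function name, toy dimensions, and numbers are assumptions for demonstration only.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matmul(A, B):
    # plain list-of-lists matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def cross_modal_attention(Q, K, V):
    """Scaled dot-product attention with queries from one modality (e.g. text)
    and keys/values from another (e.g. acoustic). Each output row is a
    convex combination of the other modality's value vectors, so the result
    carries information from both modalities."""
    d_k = len(K[0])
    KT = [list(col) for col in zip(*K)]          # transpose K
    scores = matmul(Q, KT)                        # Q @ K^T
    weights = [softmax([s / math.sqrt(d_k) for s in row]) for row in scores]
    return matmul(weights, V)

# Toy example: 2 text tokens attending over 3 acoustic frames (dim 2).
text_q = [[1.0, 0.0], [0.0, 1.0]]
acoustic_k = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
acoustic_v = [[0.5, 0.1], [0.2, 0.9], [0.7, 0.7]]
fused = cross_modal_attention(text_q, acoustic_k, acoustic_v)
```

In the full model, such attention layers sit inside a Transformer encoder-decoder that translates between modality pairs, and the intermediate representations serve as the fused features for sentiment prediction.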

Benchmarks

Benchmark: multimodal-sentiment-analysis-on-cmu-mosi
Methodology: Tri-TransModality
Metrics: F1-score (Weighted): 82.71
