Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum

Tao Meng, Fuchen Zhang, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li


Abstract

Efficiently capturing consistent and complementary semantic features in a multimodal conversation context is crucial for Multimodal Emotion Recognition in Conversation (MERC). Existing methods mainly use graph structures to model semantic dependencies in the dialogue context and employ Graph Neural Networks (GNNs) to capture multimodal semantic features for emotion recognition. However, these methods are limited by inherent characteristics of GNNs, such as over-smoothing and low-pass filtering, which prevent them from efficiently learning long-distance consistency and complementarity information. Since consistency and complementarity information correspond to low-frequency and high-frequency information, respectively, this paper revisits multimodal emotion recognition in conversation from the perspective of the graph spectrum. Specifically, we propose a Graph-Spectrum-based Multimodal Consistency and Complementary collaborative learning framework, GS-MCC. First, GS-MCC uses a sliding window to construct a multimodal interaction graph that models conversational relationships, and uses efficient Fourier graph operators to extract long-distance high-frequency and low-frequency information. Then, GS-MCC uses contrastive learning to construct self-supervised signals that promote collaboration between the high-frequency (complementary) and low-frequency (consistent) semantics, thereby improving the ability of both kinds of information to reflect true emotions. Finally, GS-MCC feeds the collaborative high- and low-frequency information into an MLP network and a softmax function for emotion prediction. Extensive experiments on two benchmark datasets demonstrate the superiority of the proposed GS-MCC architecture.
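
To make the pipeline above concrete, the following Python sketch walks through the three steps on toy data: a sliding-window conversation graph, a low-/high-frequency split of node features, and a contrastive objective coupling the two bands. This is an illustration under stated assumptions, not the authors' implementation: a one-hop normalized-adjacency low-pass filter and its residual stand in for the paper's Fourier graph operators, and the window size, loss form, fusion, and class count are hypothetical.

import torch
import torch.nn.functional as F

def sliding_window_adjacency(n, window=2):
    # Connect each utterance to its neighbors within the window
    # (hypothetical construction; the paper's graph is multimodal).
    A = torch.zeros(n, n)
    for i in range(n):
        A[i, max(0, i - window):min(n, i + window + 1)] = 1.0
    return A

def spectral_split(X, A):
    # Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = A.sum(dim=1).clamp(min=1e-8).pow(-0.5)
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    low = A_hat @ X   # low-pass smoothing -> consistency information
    high = X - low    # high-pass residual -> complementary information
    return low, high

def info_nce(z1, z2, tau=0.5):
    # Contrastive coupling: the two frequency views of the same
    # utterance are positives; all other pairs act as negatives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy usage: 10 utterances with 64-dim fused multimodal embeddings.
X = torch.randn(10, 64)
A = sliding_window_adjacency(10, window=2)
low, high = spectral_split(X, A)
contrastive_loss = info_nce(low, high)
mlp = torch.nn.Linear(128, 6)  # 6 emotion classes assumed for illustration
emotion_probs = F.softmax(mlp(torch.cat([low, high], dim=-1)), dim=-1)

The residual split here is only meant to show why the low-frequency band captures consistency (smooth semantics shared across neighboring utterances) while the high-frequency band captures complementarity (local, modality-specific detail); in GS-MCC the decomposition is learned with Fourier graph operators and the contrastive signals are self-supervised.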

Benchmarks

Benchmark                                     Methodology   Metrics
emotion-recognition-in-conversation-on        GS-MCC        Accuracy: 73.8, Weighted-F1: 73.9
emotion-recognition-in-conversation-on-meld   GS-MCC        Accuracy: 68.1, Weighted-F1: 69.0
