VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning

Alexandros Xenos, Niki Maria Foteinopoulou, Ioanna Ntinou, Ioannis Patras, Georgios Tzimiropoulos


Abstract

Recognising emotions in context involves identifying the apparent emotions of an individual while taking into account contextual cues from the surrounding scene. Previous approaches to this task have designed explicit scene-encoding architectures or incorporated external scene-related information, such as captions. However, these methods often use limited contextual information or rely on intricate training pipelines. In this work, we leverage the capabilities of Vision-and-Large-Language Models (VLLMs) to enhance in-context emotion classification in a two-stage approach, without adding complexity to the training process. In the first stage, we prompt VLLMs to generate natural-language descriptions of the subject's apparent emotion relative to the visual context. In the second stage, these descriptions serve as contextual information and, together with the image input, are used to train a transformer-based architecture that fuses text and visual features before the final classification task. Our experimental results show that the text and image features carry complementary information, and that our fused architecture significantly outperforms the individual modalities without any complex training methods. We evaluate our approach on three datasets, namely EMOTIC, CAER-S, and BoLD, and achieve state-of-the-art or comparable accuracy across all datasets and metrics relative to much more complex approaches. The code will be made publicly available on GitHub: https://github.com/NickyFot/EmoCommonSense.git
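The second-stage fusion described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: it assumes precomputed image features and text features (embeddings of the VLLM-generated emotion descriptions), projects both to a shared dimension, fuses them with a small transformer encoder, and emits multi-label logits. All dimensions, layer counts, and the 26-class output (EMOTIC's category count) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FusionClassifier(nn.Module):
    """Sketch of a two-stream fusion head: image and text features are
    projected to a shared space, fused by a transformer encoder over a
    two-token sequence, pooled, and classified. Hyperparameters here are
    placeholders, not the paper's configuration."""

    def __init__(self, img_dim=768, txt_dim=768, d_model=512, n_classes=26):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.txt_proj = nn.Linear(txt_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, img_feat, txt_feat):
        # Treat each modality as one token and let self-attention mix them.
        tokens = torch.stack(
            [self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=1
        )  # (batch, 2, d_model)
        fused = self.fusion(tokens).mean(dim=1)  # pool over the two tokens
        return self.head(fused)  # multi-label logits


model = FusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 768))  # (4, 26)
```

For multi-label datasets such as EMOTIC, these logits would typically be trained with `nn.BCEWithLogitsLoss`, while a single-label dataset such as CAER-S would use `nn.CrossEntropyLoss`.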

Code Repositories

nickyfot/emocommonsense (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
emotion-recognition-in-context-on-bold | A. Xenos et al. | AUC: 69.83; Average mAP: 26.66
emotion-recognition-in-context-on-caer-1 | A. Xenos et al. | Accuracy: 93.08
emotion-recognition-in-context-on-emotic | A. Xenos et al. | mAP: 38.52

