Multimodal Emotion Recognition


Multimodal emotion recognition is the task of inferring a person's emotional state by combining signals from multiple modalities, such as speech, text, and facial expressions.
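A common baseline for this task is late fusion: encode each modality separately, concatenate the resulting embeddings, and classify. The sketch below illustrates that pattern in PyTorch; the module names, feature dimensions, and toy MLP encoders are illustrative assumptions, not the method of any paper listed on this page (real systems typically substitute pretrained encoders such as a speech model for audio, a language model for text, and a vision model for faces).

import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    """Minimal late-fusion sketch; all dimensions are illustrative."""

    def __init__(self, audio_dim=128, text_dim=256, face_dim=512,
                 hidden_dim=64, num_emotions=7):
        super().__init__()
        # One small encoder per modality, mapping precomputed features
        # to a shared hidden size so they can be concatenated for fusion.
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        # Fusion head: concatenate per-modality embeddings, then classify.
        self.classifier = nn.Linear(3 * hidden_dim, num_emotions)

    def forward(self, audio, text, face):
        fused = torch.cat([self.audio_enc(audio),
                           self.text_enc(text),
                           self.face_enc(face)], dim=-1)
        return self.classifier(fused)  # unnormalized emotion logits

# Usage with random stand-in features for a batch of 4 samples.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 256), torch.randn(4, 512))
print(logits.argmax(dim=-1))  # predicted emotion index per sample

Late fusion is only one design point: much of the work listed below instead studies tighter cross-modal interaction (e.g., alignment, optimal transport, or modality dropout to handle missing inputs at test time).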

Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks

Nov 08, 2024

EEG-based Multimodal Representation Learning for Emotion Recognition

Oct 29, 2024

UGotMe: An Embodied System for Affective Human-Robot Interaction

Oct 24, 2024

Enhancing Multimodal Affective Analysis with Learned Live Comment Features

Oct 21, 2024

MMCS: A Multimodal Medical Diagnosis System Integrating Image Analysis and Knowledge-based Departmental Consultation

Oct 20, 2024

Empowering Dysarthric Speech: Leveraging Advanced LLMs for Accurate Speech Correction and Multimodal Emotion Analysis

Oct 13, 2024

Strong Alone, Stronger Together: Synergizing Modality-Binding Foundation Models with Optimal Transport for Non-Verbal Emotion Recognition

Sep 21, 2024

Hierarchical Hypercomplex Network for Multimodal Emotion Recognition

Sep 13, 2024

Multimodal Emotion Recognition with Vision-language Prompting and Modality Dropout

Sep 11, 2024

Improving Multimodal Emotion Recognition by Leveraging Acoustic Adaptation and Visual Alignment

Sep 10, 2024