
Zheng Lian

EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large Language Models

Feb 06, 2025

SarcasmBench: Towards Evaluating Large Language Models on Sarcasm Understanding

Aug 24, 2024

Towards Evaluating Large Language Models on Sarcasm Understanding

Aug 21, 2024

Emotion and Intent Joint Understanding in Multimodal Conversation: A Benchmarking Dataset

Jul 03, 2024

Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning

Jun 17, 2024

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition

Apr 29, 2024

Multimodal Fusion with Pre-Trained Model Features in Affective Behaviour Analysis In-the-wild

Mar 22, 2024

Can Deception Detection Go Deeper? Dataset, Evaluation, and Benchmark for Deception Reasoning

Feb 18, 2024

HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition

Jan 11, 2024

SVFAP: Self-supervised Video Facial Affect Perceiver

Dec 31, 2023