
Shun Chen

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition

Apr 29, 2024

GPT-4V with Emotion: A Zero-shot Benchmark for Multimodal Emotion Understanding

Dec 07, 2023

LSTM Fully Convolutional Networks for Time Series Classification

Sep 08, 2017