
Yuan Zong

Key Laboratory of Child Development and Learning Science of the Ministry of Education, and Department of Information Science and Engineering, Southeast University, China

Towards Realistic Emotional Voice Conversion using Controllable Emotional Intensity

Jul 20, 2024

Temporal Label Hierarchical Network for Compound Emotion Recognition

Jul 17, 2024

EALD-MLLM: Emotion Analysis in Long-sequential and De-identity videos with Multi-modal Large Language Model

May 01, 2024

PAVITS: Exploring Prosody-aware VITS for End-to-End Emotional Voice Conversion

Mar 03, 2024

Emotion-Aware Contrastive Adaptation Network for Source-Free Cross-Corpus Speech Emotion Recognition

Jan 23, 2024

Speech Swin-Transformer: Exploring a Hierarchical Transformer with Shifted Windows for Speech Emotion Recognition

Jan 19, 2024

Improving Speaker-independent Speech Emotion Recognition Using Dynamic Joint Distribution Adaptation

Jan 18, 2024

Towards Domain-Specific Cross-Corpus Speech Emotion Recognition Approach

Dec 11, 2023

PainSeeker: An Automated Method for Assessing Pain in Rats Through Facial Expressions

Nov 06, 2023

Learning to Rank Onset-Occurring-Offset Representations for Micro-Expression Recognition

Oct 07, 2023