
Che-Jui Chang

From Words to Worlds: Transforming One-line Prompt into Immersive Multi-modal Digital Stories with Communicative LLM Agent

Jun 15, 2024

BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis

Apr 23, 2024

On the Equivalency, Substitutability, and Flexibility of Synthetic Data

Mar 24, 2024

The Importance of Multimodal Emotion Conditioning and Affect Consistency for Embodied Conversational Agents

Sep 26, 2023

Learning from Synthetic Human Group Activities

Jul 16, 2023

Transfer Learning from Monolingual ASR to Transcription-free Cross-lingual Voice Conversion

Sep 30, 2020