Xuankai Chang

SQ-Whisper: Speaker-Querying based Whisper Model for Target-Speaker ASR

Dec 07, 2024

SynesLM: A Unified Approach for Audio-visual Speech Recognition and Translation via Language Model and Synthetic Data

Aug 01, 2024

The CHiME-8 DASR Challenge for Generalizable and Array Agnostic Distant Automatic Speech Recognition and Diarization

Jul 23, 2024

Towards Robust Speech Representation Learning for Thousands of Languages

Jul 02, 2024

ML-SUPERB 2.0: Benchmarking Multilingual Speech Models Across Modeling Constraints, Languages, and Datasets

Jun 12, 2024

The Interspeech 2024 Challenge on Speech Processing Using Discrete Units

Jun 11, 2024

A Large-Scale Evaluation of Speech Foundation Models

Apr 15, 2024

LV-CTC: Non-autoregressive ASR with CTC and latent variable models

Mar 28, 2024

TMT: Tri-Modal Translation between Speech, Image, and Text by Processing Different Modalities as Different Languages

Feb 25, 2024

OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer

Jan 30, 2024