
Kuan-Po Huang

How to Learn a New Language? An Efficient Solution for Self-Supervised Learning Models Unseen Languages Adaption in Low-Resource Scenario

Nov 27, 2024

Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks

Nov 08, 2024

Do Prompts Really Prompt? Exploring the Prompt Understanding Capability of Whisper

Jun 09, 2024

Dataset-Distillation Generative Model for Speech Emotion Recognition

Jun 05, 2024

Investigating Zero-Shot Generalizability on Mandarin-English Code-Switched ASR and Speech-to-text Translation of Recent Foundation Models with Self-Supervision and Weak Supervision

Dec 30, 2023

Noise robust distillation of self-supervised speech models via correlation metrics

Dec 19, 2023

Zero Resource Code-switched Speech Benchmark Using Speech Utterance Pairs For Multiple Spoken Languages

Oct 04, 2023

Ensemble knowledge distillation of self-supervised speech models

Feb 24, 2023

Improving generalizability of distilled self-supervised speech processing models under distorted settings

Oct 20, 2022

Improving the transferability of speech separation by meta-learning

Mar 11, 2022