
Shu-wen Yang

Building a Taiwanese Mandarin Spoken Language Model: A First Attempt

Nov 11, 2024

Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks

Nov 08, 2024

A Large-Scale Evaluation of Speech Foundation Models

Apr 15, 2024

SUPERB @ SLT 2022: Challenge on Generalization and Efficiency of Self-Supervised Speech Representation Learning

Oct 16, 2022

DUAL: Discrete Spoken Unit Adaptive Learning for Textless Spoken Question Answering

Mar 26, 2022

Investigating self-supervised learning for speech enhancement and separation

Mar 15, 2022

SUPERB-SG: Enhanced Speech processing Universal PERformance Benchmark for Semantic and Generative Capabilities

Mar 14, 2022

Speech Representation Learning Through Self-supervised Pretraining And Multi-task Finetuning

Oct 18, 2021

An Exploration of Self-Supervised Pretrained Representations for End-to-End Speech Recognition

Oct 09, 2021

DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT

Oct 06, 2021