
Yizhou Peng

ED-sKWS: Early-Decision Spiking Neural Networks for Rapid and Energy-Efficient Keyword Spotting

Jun 14, 2024

The NUS-HLT System for ICASSP2024 ICMC-ASR Grand Challenge

Dec 26, 2023

Adapting OpenAI's Whisper for Speech Recognition on Code-Switch Mandarin-English SEAME and ASRU2019 Datasets

Nov 29, 2023

Mutual Information-Based Integrated Sensing and Communications: A WMMSE Framework

Oct 19, 2023

Intermediate-layer output Regularization for Attention-based Speech Recognition with Shared Decoder

Jul 09, 2022

Internal Language Model Estimation based Language Model Fusion for Cross-Domain Code-Switching Speech Recognition

Jul 09, 2022

Minimum word error training for non-autoregressive Transformer-based code-switching ASR

Oct 07, 2021

E2E-based Multi-task Learning Approach to Joint Speech and Accent Recognition

Jun 15, 2021