Wei-Tsung Kao

DDOS: A MOS Prediction Framework utilizing Domain Adaptive Pre-training and Distribution of Opinion Scores

Apr 07, 2022

On the Efficiency of Integrating Self-supervised Learning and Meta-learning for User-defined Few-shot Keyword Spotting

Apr 01, 2022

Membership Inference Attacks Against Self-supervised Speech Models

Nov 09, 2021

Utilizing Self-supervised Representations for MOS Prediction

Apr 21, 2021

Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability

Mar 12, 2021

Further Boosting BERT-based Models by Duplicating Existing Layers: Some Intriguing Phenomena inside BERT

Jan 25, 2020