
Hemant Yadav

JOOCI: a Framework for Learning Comprehensive Speech Representations

Oct 14, 2024

Speech Representation Learning Revisited: The Necessity of Separate Learnable Parameters and Robust Data Augmentation

Aug 20, 2024

MS-HuBERT: Mitigating Pre-training and Inference Mismatch in Masked Language Modelling methods for learning Speech Representations

Jun 09, 2024

Partial Rank Similarity Minimization Method for Quality MOS Prediction of Unseen Speech Synthesis Systems in Zero-Shot and Semi-supervised setting

Oct 08, 2023

Analysing the Masked predictive coding training criterion for pre-training a Speech Representation Model

Mar 13, 2023

A Survey of Multilingual Models for Automatic Speech Recognition

Feb 25, 2022

Intent Classification Using Pre-Trained Embeddings For Low Resource Languages

Oct 18, 2021

Cisco at AAAI-CAD21 shared task: Predicting Emphasis in Presentation Slides using Contextualized Embeddings

Feb 09, 2021

De-STT: De-entaglement of unwanted Nuisances and Biases in Speech to Text System using Adversarial Forgetting

Dec 01, 2020

MIDAS at SemEval-2020 Task 10: Emphasis Selection using Label Distribution Learning and Contextual Embeddings

Sep 06, 2020