
Md Akmal Haidar

Conformer with dual-mode chunked attention for joint online and offline ASR

Jun 22, 2022

CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation

Apr 15, 2022

RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation

Oct 01, 2021

Transformer-based ASR Incorporating Time-reduction Layer and Fine-tuning with Self-Knowledge Distillation

Mar 17, 2021

Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks

Mar 10, 2021

Distilled embedding: non-linear embedding factorization using knowledge distillation

Oct 02, 2019