
Alan Black

Two-Pass Low Latency End-to-End Spoken Language Understanding

Jul 14, 2022

DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues

Jun 02, 2021

Task-Specific Pre-Training and Cross Lingual Transfer for Code-Switched Data

Feb 24, 2021

Reading Between the Lines: Exploring Infilling in Visual Narratives

Oct 26, 2020

Disentangling Speech and Non-Speech Components for Building Robust Acoustic Models from Found Data

Sep 25, 2019

Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

Feb 14, 2018