Wael Hamza

Amazon Alexa AI

Towards ASR Robust Spoken Language Understanding Through In-Context Learning With Word Confusion Networks

Jan 05, 2024

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models

Jun 14, 2023

Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data

Apr 04, 2023

Low-Resource Compositional Semantic Parsing with Concept Pretraining

Jan 30, 2023

CLASP: Few-Shot Cross-Lingual Data Augmentation for Semantic Parsing

Oct 14, 2022

LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging

Sep 20, 2022

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

Aug 03, 2022

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022

Training Naturalized Semantic Parsers with Very Little Data

May 04, 2022

Instilling Type Knowledge in Language Models via Multi-Task QA

Apr 28, 2022