Saleh Soltan

GeMQuAD: Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning

Apr 14, 2024

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models

Jun 14, 2023

CLASP: Few-Shot Cross-Lingual Data Augmentation for Semantic Parsing

Oct 14, 2022

LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging

Sep 20, 2022

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

Aug 03, 2022

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022

Don't Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding

Oct 08, 2020