Subendhu Rongali

University of Massachusetts Amherst

Low-Resource Compositional Semantic Parsing with Concept Pretraining

Jan 30, 2023

Training Naturalized Semantic Parsers with Very Little Data

May 04, 2022

Improved Latent Tree Induction with Distant Supervision via Span Constraints

Sep 10, 2021

Exploring Transfer Learning For End-to-End Spoken Language Understanding

Dec 15, 2020

Compressing Transformer-Based Semantic Parsing Models using Compositional Code Embeddings

Oct 10, 2020

Improved Pretraining for Domain-specific Contextual Embedding Models

Apr 05, 2020

Don't Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing

Jan 30, 2020

Taxonomy grounded aggregation of classifiers with different label sets

Dec 01, 2015