Lizhen Qu

Language Independent Neuro-Symbolic Semantic Parsing for Form Understanding
May 08, 2023

SocialDial: A Benchmark for Socially-Aware Dialogue Systems
Apr 24, 2023

Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery
Mar 02, 2023

Document Flattening: Beyond Concatenating Context for Document-Level Neural Machine Translation
Feb 16, 2023

When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods
Dec 20, 2022

Let's Negotiate! A Survey of Negotiation Dialogue Systems
Dec 18, 2022

Learning Object-Language Alignments for Open-Vocabulary Object Detection
Nov 27, 2022

ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities
Oct 11, 2022

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation
Mar 15, 2022

Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation
Nov 10, 2021