
Luoqiu Li

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners

Aug 31, 2021

OntoED: Low-resource Event Detection with Ontology Embedding

May 27, 2021

Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction

Apr 11, 2021

Text-guided Legal Knowledge Graph Reasoning

Apr 06, 2021

Can Fine-tuning Pre-trained Models Lead to Perfect NLP? A Study of the Generalizability of Relation Extraction

Sep 23, 2020