Ruofei Lai

Debatrix: Multi-dimensional Debate Judge with Iterative Chronological Analysis Based on LLM

Mar 12, 2024

Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation

Dec 21, 2023

Hi-ArG: Exploring the Integration of Hierarchical Argumentation Graphs in Language Pretraining

Dec 01, 2023

IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions

Nov 30, 2023

Optimizing Factual Accuracy in Text Generation through Dynamic Knowledge Selection

Aug 30, 2023

WebBrain: Learning to Generate Factually Correct Articles for Queries by Grounding on Large Web Corpus

Apr 10, 2023

Pre-training for Information Retrieval: Are Hyperlinks Fully Explored?

Sep 14, 2022

Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding

Aug 19, 2022

Towards More Effective and Economic Sparsely-Activated Model

Oct 14, 2021

YES SIR! Optimizing Semantic Space of Negatives with Self-Involvement Ranker

Sep 14, 2021