
Prafulla Kumar Choubey

Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency

Oct 22, 2024

Do RAG Systems Cover What Matters? Evaluating and Optimizing Responses with Sub-Question Coverage

Oct 20, 2024

Lexical Repetitions Lead to Rote Learning: Unveiling the Impact of Lexical Overlap in Train and Test Reference Summaries

Nov 15, 2023

Embrace Divergence for Richer Insights: A Multi-document Summarization Benchmark and a Case Study on Summarizing Diverse Information from News Articles

Sep 17, 2023

XGen-7B Technical Report

Sep 07, 2023

Improving Factual Consistency in Summarization with Compression-Based Post-Editing

Nov 11, 2022

Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning

Oct 23, 2022

Conformal Predictor for Improving Zero-shot Text Classification Efficiency

Oct 23, 2022

P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts

Oct 14, 2021

MoFE: Mixture of Factual Experts for Controlling Hallucinations in Abstractive Summarization

Oct 14, 2021