Peng Fu

Multimodal Hypothetical Summary for Retrieval-based Multi-image Question Answering

Dec 19, 2024

Reconstruction of Differentially Private Text Sanitization via Large Language Models

Oct 16, 2024

Advancing Academic Knowledge Retrieval via LLM-enhanced Representation Similarity Fusion

Oct 14, 2024

Think out Loud: Emotion Deducing Explanation in Dialogues

Jun 07, 2024

Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning

Jun 06, 2024

Are Large Language Models Table-based Fact-Checkers?

Feb 04, 2024

Revisiting the Knowledge Injection Frameworks

Nov 02, 2023

Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering

Oct 26, 2022

Question-Interlocutor Scope Realized Graph Modeling over Key Utterances for Dialogue Reading Comprehension

Oct 26, 2022

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

Oct 11, 2022