Martin Kuo

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models

Apr 03, 2024

DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining

Nov 08, 2023

Towards Building the Federated GPT: Federated Instruction Tuning

May 09, 2023

Tag and Correct: Question aware Open Information Extraction with Two-stage Decoding

Sep 16, 2020