Shahriar Golchin

Memorization In In-Context Learning

Aug 21, 2024

Data Contamination Report from the 2024 CONDA Shared Task

Jul 31, 2024

Grading Massive Open Online Courses Using Large Language Models

Jun 16, 2024

Large Language Models As MOOCs Graders

Feb 10, 2024

Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models

Nov 20, 2023

Time Travel in LLMs: Tracing Data Contamination in Large Language Models

Aug 16, 2023

Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords

Jul 14, 2023

A Compact Pretraining Approach for Neural Language Models

Aug 29, 2022