Yucheng Li

SCBench: A KV Cache-Centric Analysis of Long-Context Methods
Dec 13, 2024

On the Rigour of Scientific Writing: Criteria, Analysis, and Insights
Oct 07, 2024

Data Contamination Report from the 2024 CONDA Shared Task
Jul 31, 2024

Fluorescence Diffraction Tomography using Explicit Neural Fields
Jul 23, 2024

MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention
Jul 02, 2024

Evaluating Large Language Models for Generalization and Robustness via Data Compression
Feb 04, 2024

Finding Challenging Metaphors that Confuse Pretrained Language Models
Jan 29, 2024

LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction
Dec 26, 2023

An Open Source Data Contamination Report for Llama Series Models
Oct 26, 2023

Compressing Context to Enhance Inference Efficiency of Large Language Models
Oct 09, 2023