
Hongye Jin

Gradient Rewiring for Editable Graph Neural Network Training

Oct 21, 2024

Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion

Oct 06, 2024

KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches

Jul 01, 2024

KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache

Feb 05, 2024

LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

Jan 02, 2024

Towards Mitigating Dimensional Collapse of Representations in Collaborative Filtering

Dec 29, 2023

GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length

Oct 01, 2023

Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond

Apr 27, 2023

Weight Perturbation Can Help Fairness under Distribution Shift

Mar 06, 2023

Retiring $\Delta$DP: New Distribution-Level Metrics for Demographic Parity

Jan 31, 2023