
Jiayi Yuan

Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models

Mar 20, 2025

Interpreting and Steering LLMs with Mutual Information-based Explanations on Sparse Autoencoders

Feb 21, 2025

Robot Learning with Super-Linear Scaling

Dec 02, 2024

InvestESG: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma

Nov 15, 2024

Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion

Oct 06, 2024

DHP Benchmark: Are LLMs Good NLG Evaluators?

Aug 25, 2024

KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches

Jul 01, 2024

Understanding Different Design Choices in Training Large Time Series Models

Jun 20, 2024

LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario

Feb 29, 2024

KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache

Feb 05, 2024