Jae-Joon Kim

Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection
Feb 03, 2026

LRAgent: Efficient KV Cache Sharing for Multi-LoRA LLM Agents
Feb 01, 2026

LiteStage: Latency-aware Layer Skipping for Multi-stage Reasoning
Oct 16, 2025

Retrospective Sparse Attention for Efficient Long-Context Generation
Aug 12, 2025

Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning
May 20, 2025

FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation
Feb 03, 2025

COMPASS: A Compiler Framework for Resource-Constrained Crossbar-Array Based In-Memory Deep Learning Accelerators
Jan 12, 2025

Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models
Jun 18, 2024

SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks
Feb 14, 2024

Squeezing Large-Scale Diffusion Models for Mobile
Jul 03, 2023