
Yingyan Celine Lin

LongMamba: Enhancing Mamba's Long Context Capabilities via Training-Free Receptive Field Enlargement

Apr 22, 2025

Scaling Laws of Graph Neural Networks for Atomistic Materials Modeling

Apr 10, 2025

Uni-Render: A Unified Accelerator for Real-Time Rendering Across Diverse Neural Renderers

Mar 31, 2025

GauRast: Enhancing GPU Triangle Rasterizers to Accelerate 3D Gaussian Splatting

Mar 20, 2025

MixGCN: Scalable GCN Training by Mixture of Parallelism and Mixture of Accelerators

Jan 06, 2025

Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers

Dec 22, 2024

AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment

Nov 15, 2024

Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration

Jun 22, 2024

EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting

Jun 22, 2024