Hemanth Saratchandran

The Inlet Rank Collapse in Implicit Neural Representations: Diagnosis and Unified Remedy
Feb 02, 2026

Procedural Pretraining: Warming Up Language Models with Abstract Data
Jan 29, 2026

Can You Learn to See Without Images? Procedural Warm-Up for Vision Transformers
Nov 17, 2025

Data Denoising and Derivative Estimation for Data-Driven Modeling of Nonlinear Dynamical Systems
Sep 17, 2025

Towards Higher Effective Rank in Parameter-efficient Fine-tuning using Khatri-Rao Product
Aug 01, 2025

Compressing Sine-Activated Low-Rank Adapters through Post-Training Quantization
May 28, 2025

Transformers Pretrained on Procedural Data Contain Modular Structures for Algorithmic Reasoning
May 28, 2025

Leaner Transformers: More Heads, Less Depth
May 27, 2025

Structured Initialization for Vision Transformers
May 26, 2025

Enhancing Transformers Through Conditioned Embedded Tokens
May 19, 2025