
Sarath Chandar

CoPeP: Benchmarking Continual Pretraining for Protein Language Models

Mar 03, 2026

The Expressive Limits of Diagonal SSMs for State-Tracking

Mar 02, 2026

Squeezing More from the Stream: Learning Representation Online for Streaming Reinforcement Learning

Feb 10, 2026

LLMs Can't Play Hangman: On the Necessity of a Private Working Memory for Language Agents

Jan 11, 2026

Investigating the Multilingual Calibration Effects of Language Model Instruction-Tuning

Jan 04, 2026

Effect of Document Packing on the Latent Multi-Hop Reasoning Capabilities of Large Language Models

Dec 16, 2025

Just-in-time Episodic Feedback Hinter: Leveraging Offline Knowledge to Improve LLM Agents Adaptation

Oct 05, 2025

Parity Requires Unified Input Dependence and Negative Eigenvalues in SSMs

Aug 10, 2025

Optimizers Qualitatively Alter Solutions And We Should Leverage This

Jul 16, 2025

Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models

Jun 12, 2025