Anshumali Shrivastava

I3S: Importance Sampling Subspace Selection for Low-Rank Optimization in LLM Pretraining

Feb 09, 2025

SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching

Oct 08, 2024

LeanQuant: Accurate Large Language Model Quantization with Loss-Error-Aware Grid

Jul 14, 2024

IDentity with Locality: An ideal hash for gene sequence search

Jun 21, 2024

KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization

May 07, 2024

NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention

Mar 02, 2024

Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model

Feb 27, 2024

Learning Scalable Structural Representations for Link Prediction with Bloom Signatures

Dec 28, 2023

Contractive error feedback for gradient compression

Dec 13, 2023

Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies

Nov 22, 2023