Tulika Mitra

HALO: Hardware-aware quantization with low critical-path-delay weights for LLM acceleration
Feb 27, 2025

Condensed Sample-Guided Model Inversion for Knowledge Distillation
Aug 25, 2024

Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures
Jul 22, 2024

SWAT: Scalable and Efficient Window Attention-based Transformers Acceleration on FPGAs
May 27, 2024

CRISP: Hybrid Structured Sparsity for Class-aware Model Pruning
Nov 24, 2023

Post-Training Quantization with Low-precision Minifloats and Integers on FPGAs
Nov 21, 2023

InkStream: Real-time GNN Inference on Streaming Graphs via Incremental Update
Sep 20, 2023

Accelerating Edge AI with Morpher: An Integrated Design, Compilation and Simulation Framework for CGRAs
Sep 12, 2023

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
Jan 09, 2022

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
Aug 11, 2021