Ayan Chakrabarti

A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs

Oct 24, 2024

SpacTor-T5: Pre-training T5 Models with Span Corruption and Replaced Token Detection

Jan 24, 2024

SPEGTI: Structured Prediction for Efficient Generative Text-to-Image Models

Aug 14, 2023

Substance or Style: What Does Your Image Embedding Know?

Jul 10, 2023

Benchmarking Robustness to Adversarial Image Obfuscations

Jan 30, 2023

Adaptive Edge Offloading for Image Classification Under Rate Limit

Jul 31, 2022

PROVES: Establishing Image Provenance using Semantic Signatures

Oct 21, 2021

Leveraging redundancy in attention with Reuse Transformers

Oct 13, 2021

Eigen Analysis of Self-Attention and its Reconstruction from Partial Computation

Jun 16, 2021

Understanding Robustness of Transformers for Image Classification

Mar 26, 2021