Amir Yazdanbakhsh

CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming

Oct 27, 2024

When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models

Jun 11, 2024

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization

Jun 11, 2024

Effective Interplay between Sparsity and Quantization: From Theory to Practice

May 31, 2024

SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs

May 25, 2024

Tao: Re-Thinking DL-based Microarchitecture Simulation

Apr 16, 2024

DaCapo: Accelerating Continuous Learning in Autonomous Systems for Video Analytics

Mar 21, 2024

Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers

Feb 07, 2024

USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models

Jan 03, 2024

JaxPruner: A concise library for sparsity research

May 02, 2023