
Amir Yazdanbakhsh


QuArch: A Question-Answering Dataset for AI Agents in Computer Architecture

Jan 06, 2025

CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming

Oct 27, 2024

When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models

Jun 11, 2024

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization

Jun 11, 2024

Effective Interplay between Sparsity and Quantization: From Theory to Practice

May 31, 2024

SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs

May 25, 2024

Tao: Re-Thinking DL-based Microarchitecture Simulation

Apr 16, 2024

DaCapo: Accelerating Continuous Learning in Autonomous Systems for Video Analytics

Mar 21, 2024

Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers

Feb 07, 2024

USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models

Jan 03, 2024