
Ashkan Khakzar

Effortless Efficiency: Low-Cost Pruning of Diffusion Models

Dec 03, 2024

Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models

Nov 09, 2024

Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models

Oct 09, 2024

The Cognitive Revolution in Interpretability: From Explaining Behavior to Interpreting Representations and Algorithms

Aug 11, 2024

Learning Visual Prompts for Guiding the Attention of Vision Transformers

Jun 05, 2024

Latent Guard: a Safety Framework for Text-to-image Generation

Apr 11, 2024

On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions

Jan 01, 2024

A Survey on Transferability of Adversarial Examples across Deep Neural Networks

Oct 26, 2023

AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments

Oct 10, 2023

A Dual-Perspective Approach to Evaluating Feature Attribution Methods

Aug 17, 2023