Hila Chefer

Still-Moving: Customized Video Generation without Customized Video Data

Jul 11, 2024

Lumiere: A Space-Time Diffusion Model for Video Generation

Feb 05, 2024

The Hidden Language of Diffusion Models

Jun 06, 2023

Discriminative Class Tokens for Text-to-Image Diffusion Models

Mar 30, 2023

Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

Jan 31, 2023

Optimizing Relevance Maps of Vision Transformers Improves Robustness

Jun 02, 2022

No Token Left Behind: Explainability-Aided Image Classification and Generation

Apr 11, 2022

Image-Based CLIP-Guided Essence Transfer

Oct 26, 2021

Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers

Mar 29, 2021

Transformer Interpretability Beyond Attention Visualization

Dec 17, 2020