DiTFastAttnV2: Head-wise Attention Compression for Multi-Modality Diffusion Transformers

Mar 28, 2025
Figures 1–4: DiTFastAttnV2: Head-wise Attention Compression for Multi-Modality Diffusion Transformers (see the paper for the figures).


View paper on arXiv
