Hongjie Wang

Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models
May 08, 2024

Exploring Communication Technologies, Standards, and Challenges in Electrified Vehicle Charging
Mar 25, 2024

Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers
May 27, 2023

A New MRAM-based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision
Mar 02, 2020