
Yushu Wu

Fast and Memory-Efficient Video Diffusion Using Streamlined Inference

Nov 02, 2024

Rethinking Token Reduction for State Space Models

Oct 16, 2024

Exploring Token Pruning in Vision State Space Models

Sep 27, 2024

Search for Efficient Large Language Models

Sep 25, 2024

Digital Avatars: Framework Development and Their Evaluation

Aug 07, 2024

SF-V: Single Forward Video Generation Model

Jun 06, 2024

"It is okay to be uncommon": Quantizing Sound Event Detection Networks on Hardware Accelerators with Uncommon Sub-Byte Support

Apr 05, 2024

All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

Dec 09, 2022

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

Jul 25, 2022

Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search

Aug 18, 2021