Junyi Wu

EMO-X: Efficient Multi-Person Pose and Shape Estimation in One-Stage

Apr 11, 2025

SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting

Apr 11, 2025

X-Field: A Physically Grounded Representation for 3D X-ray Reconstruction

Mar 11, 2025

QuantCache: Adaptive Importance-Guided Quantization with Hierarchical Latent and Layer Caching for Video Generation

Mar 09, 2025

Dataset Quantization with Active Learning based Adaptive Sampling

Jul 09, 2024

Visual Grounding with Attention-Driven Constraint Balancing

Jul 03, 2024

HASS: Hardware-Aware Sparsity Search for Dataflow DNN Accelerator

Jun 05, 2024

PTQ4DiT: Post-training Quantization for Diffusion Transformers

May 25, 2024

On the Faithfulness of Vision Transformer Explanations

Apr 01, 2024

Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer

Mar 21, 2024