
Boxun Xu

Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search

Dec 07, 2024

Towards 3D Acceleration for low-power Mixture-of-Experts and Multi-Head Attention Spiking Transformers

Dec 07, 2024

Spiking Transformer Hardware Accelerators in 3D Integration

Nov 11, 2024

ADO-LLM: Analog Design Bayesian Optimization with In-Context Learning of Large Language Models

Jun 26, 2024

DISTA: Denoising Spiking Transformer with intrinsic plasticity and spatiotemporal attention

Nov 15, 2023

UPAR: A Kantian-Inspired Prompting Framework for Enhancing Large Language Model Capabilities

Sep 30, 2023