
Zifan He

InTAR: Inter-Task Auto-Reconfigurable Accelerator Design for High Data Volume Variation in DNNs

Feb 12, 2025

Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference

Sep 25, 2024

Multi-Token Joint Speculative Decoding for Accelerating Large Language Model Inference

Jul 12, 2024

HMT: Hierarchical Memory Transformer for Long Context Language Processing

May 09, 2024