Neha Prakriya

Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference

Sep 25, 2024

Accelerating Large Language Model Pretraining via LFR Pedagogy: Learn, Focus, and Review

Sep 10, 2024

Multi-Token Joint Speculative Decoding for Accelerating Large Language Model Inference

Jul 12, 2024

HMT: Hierarchical Memory Transformer for Long Context Language Processing

May 09, 2024