Yilong Zhao

BlendServe: Optimizing Offline Inference for Auto-regressive Large Models with Resource-aware Batching
Nov 25, 2024

XGrammar: Flexible and Efficient Structured Generation Engine for Large Language Models
Nov 22, 2024

A Large Language Model-based Framework for Semi-Structured Tender Document Retrieval-Augmented Generation
Oct 04, 2024

Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference
Jun 16, 2024

Atom: Low-bit Quantization for Efficient and Accurate LLM Serving
Nov 07, 2023

Neural-PIM: Efficient Processing-In-Memory with Neural Approximation of Peripherals
Jan 30, 2022

SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network
Mar 02, 2021

An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM
Aug 29, 2019