Mao Yang

RedStone: Curating General, Code, Math, and QA Data for Large Language Models

Dec 04, 2024

SPFresh: Incremental In-Place Update for Billion-Scale Vector Search

Oct 18, 2024

SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs

Oct 17, 2024

VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models

Sep 25, 2024

LUT Tensor Core: Lookup Table Enables Efficient Low-Bit LLM Inference Acceleration

Aug 12, 2024

Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers

Aug 12, 2024

T-MAC: CPU Renaissance via Table Lookup for Low-Bit LLM Deployment on Edge

Jun 25, 2024

MS MARCO Web Search: a Large-scale Information-rich Web Dataset with Millions of Real Click Labels

May 13, 2024

LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens

Feb 21, 2024

Boosting LLM Reasoning: Push the Limits of Few-shot Learning with Reinforced In-Context Pruning

Dec 26, 2023