Zhanpeng Zeng

Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models

Jun 13, 2024

LookupFFN: Making Transformers Compute-lite for CPU inference

Mar 12, 2024

IM-Unpack: Training and Inference with Arbitrarily Low Precision Integers

Mar 12, 2024

FrameQuant: Flexible Low-Bit Quantization for Transformers

Mar 10, 2024

Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens

May 07, 2023

Multi Resolution Analysis (MRA) for Approximate Self-Attention

Jul 21, 2022

You Only Sample Once: Linear Cost Self-Attention Via Bernoulli Sampling

Nov 18, 2021

Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention

Mar 05, 2021