Yerlan Idelbayev

SnapGen: Taming High-Resolution Text-to-Image Models for Mobile Devices with Efficient Architectures and Training

Dec 12, 2024

Efficient Training with Denoised Neural Weights

Jul 16, 2024

BitsFusion: 1.99 bits Weight Quantization of Diffusion Model

Jun 06, 2024

TextCraftor: Your Text Encoder Can be Image Quality Controller

Mar 27, 2024

E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Jan 11, 2024

Model compression as constrained optimization, with application to neural nets. Part V: combining compressions

Jul 09, 2021

A flexible, extensible software framework for model compression based on the LC algorithm

May 15, 2020

Structured Multi-Hashing for Model Compression

Nov 25, 2019

Model compression as constrained optimization, with application to neural nets. Part II: quantization

Jul 13, 2017