Qing Jin

E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation
Jan 11, 2024

SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds
Jun 03, 2023

PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems
Sep 18, 2022

F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
Feb 10, 2022

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Oct 26, 2021

CAP-RAM: A Charge-Domain In-Memory Computing 6T-SRAM for Accurate and Precision-Programmable CNN Inference
Jul 06, 2021

Teachers Do More Than Teach: Compressing Image-to-Image Models
Mar 05, 2021

Lottery Ticket Implies Accuracy Degradation, Is It a Desirable Phenomenon?
Feb 19, 2021

6.7ms on Mobile with over 78% ImageNet Accuracy: Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration
Dec 01, 2020

FracBits: Mixed Precision Quantization via Fractional Bit-Widths
Jul 04, 2020