Jihoon Yang

RAMiT: Reciprocal Attention Mixing Transformer for Lightweight Image Restoration
May 22, 2023

Exploration of Lightweight Single Image Denoising with Transformers and Truly Fair Training
Apr 04, 2023

N-Gram in Swin Transformers for Efficient Lightweight Image Super-Resolution
Nov 21, 2022

StatAssist & GradBoost: A Study on Optimal INT8 Quantization-aware Training from Scratch

Add code
Jun 17, 2020
Figure 1 for StatAssist & GradBoost: A Study on Optimal INT8 Quantization-aware Training from Scratch
Figure 2 for StatAssist & GradBoost: A Study on Optimal INT8 Quantization-aware Training from Scratch
Figure 3 for StatAssist & GradBoost: A Study on Optimal INT8 Quantization-aware Training from Scratch
Figure 4 for StatAssist & GradBoost: A Study on Optimal INT8 Quantization-aware Training from Scratch
Viaarxiv icon

Abstractive Text Classification Using Sequence-to-convolution Neural Networks
Jun 24, 2018