
Jihun Yun

LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding

Oct 04, 2024

TEDDY: Trimming Edges with Degree-based Discrimination strategY

Feb 02, 2024

Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss

Sep 05, 2021

A General Family of Stochastic Proximal Gradient Methods for Deep Learning

Jul 15, 2020

Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization

Nov 29, 2019

Stochastic Gradient Methods with Block Diagonal Matrix Adaptation

May 26, 2019