
Lukasz Lew

Robust Training of Neural Networks at Arbitrary Precision and Sparsity

Sep 14, 2024

Custom Gradient Estimators are Straight-Through Estimators in Disguise

May 08, 2024

PikeLPN: Mitigating Overlooked Inefficiencies of Low-Precision Neural Networks

Mar 29, 2024

4-bit Conformer with Native Quantization Aware Training for Speech Recognition

Mar 29, 2022

PokeBNN: A Binary Pursuit of Lightweight Accuracy

Nov 30, 2021

Pareto-Optimal Quantized ResNet Is Mostly 4-bit

May 07, 2021