Itay Hubara

Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks

Oct 02, 2024
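
For context on what "merging" means here: the simplest baseline is elementwise weight averaging, which only behaves sensibly for models fine-tuned from a shared initialization. A minimal numpy sketch of that baseline follows; it is illustrative only, and the different-initialization setting this paper targets is exactly where it breaks down.

```python
import numpy as np

def average_merge(state_dicts):
    """Naive merging baseline: elementwise average of parameters.
    Reasonable only for models sharing an initialization; it does NOT
    solve the different-initialization case the paper addresses."""
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}

m1 = {"w": np.ones((2, 2)), "b": np.zeros(2)}
m2 = {"w": 3 * np.ones((2, 2)), "b": np.ones(2)}
print(average_merge([m1, m2]))  # w -> all 2.0, b -> all 0.5
```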

Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators

Jan 25, 2024
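
The title refers to the accumulator used inside matrix products: int8 multiplies are normally summed in int32, and shrinking that accumulator risks overflow. A toy numpy sketch, assuming chunked accumulation as the overflow-avoidance strategy (an illustration of the problem, not the paper's scheme):

```python
import numpy as np

def dot_low_acc(a, b, chunk=16):
    """Dot product of int8 vectors using an int16 partial accumulator,
    spilling to int32 every `chunk` products so the int16 never overflows
    (safe here because |a|, |b| < 8, so 16 products fit in int16)."""
    total = np.int32(0)
    for i in range(0, len(a), chunk):
        partial = np.int16(0)
        for x, y in zip(a[i:i + chunk], b[i:i + chunk]):
            partial = np.int16(partial + np.int16(x) * np.int16(y))
        total += np.int32(partial)
    return total

a = np.random.randint(-8, 8, 64, dtype=np.int8)
b = np.random.randint(-8, 8, 64, dtype=np.int8)
print(dot_low_acc(a, b), int(a.astype(np.int64) @ b.astype(np.int64)))  # equal
```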

Optimal Fine-Grained N:M sparsity for Activations and Neural Gradients

Mar 21, 2022
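
An N:M pattern keeps at most N non-zeros in every group of M consecutive elements (e.g. 2:4 on sparse tensor cores). A minimal magnitude-based 2:4 pruning sketch in numpy, illustrating the pattern only, not the paper's treatment of activations and neural gradients:

```python
import numpy as np

def nm_prune(x, n=2, m=4):
    """Keep the n largest-magnitude entries in every group of m
    consecutive elements; zero the rest. Assumes x.size % m == 0."""
    groups = x.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries per group
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (groups * mask).reshape(x.shape)

w = np.random.randn(2, 8)
print(nm_prune(w))  # every 4 consecutive weights hold at most 2 non-zeros
```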

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

Feb 16, 2021
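
A transposable N:M mask satisfies the N:M constraint along both rows and columns, so one mask accelerates both the forward pass (W) and the backward pass (Wᵀ). The paper's contribution is an efficient method to *find* such masks; the sketch below only checks the property:

```python
import numpy as np

def is_transposable_nm(mask, n=2, m=4):
    """True if every run of m consecutive entries, along both rows and
    columns, contains at most n non-zeros (dims must divide by m)."""
    def ok(a):
        g = a.reshape(a.shape[0], -1, m)   # groups of m along the last axis
        return (g.sum(axis=-1) <= n).all()
    return ok(mask) and ok(mask.T)

# a 4x4 block with exactly 2 non-zeros per row AND per column
block = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1]])
print(is_transposable_nm(block))  # True
```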

Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming

Jun 14, 2020
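
Layer-wise calibration generally means choosing each layer's quantization parameters to minimize a local reconstruction error. A toy per-layer scale search in numpy, as a stand-in for the paper's calibration step (its integer-programming bit allocation is not shown):

```python
import numpy as np

def quantize(x, scale, bits=4):
    """Uniform symmetric quantization at the given scale."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def calibrate_scale(w, bits=4, grid=100):
    """Pick the per-layer scale minimizing quantization MSE over a
    simple grid around the max-based scale (a toy calibration)."""
    base = np.abs(w).max() / (2 ** (bits - 1) - 1)
    scales = base * np.linspace(0.2, 1.2, grid)
    errs = [np.mean((w - quantize(w, s, bits)) ** 2) for s in scales]
    return scales[int(np.argmin(errs))]

w = np.random.randn(256, 256)
s = calibrate_scale(w)
print(s, np.mean((w - quantize(w, s)) ** 2))
```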

The Knowledge Within: Methods for Data-Free Model Compression

Dec 03, 2019
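
One common trick in this line of work is synthesizing inputs whose activation statistics match the statistics stored in BatchNorm layers, so compression can be calibrated without real data. A numpy sketch of that objective only (the optimization loop over inputs, and the paper's exact recipe, are omitted):

```python
import numpy as np

def bn_stat_loss(acts, bn_mean, bn_var):
    """Distance between a synthetic batch's per-channel activation
    statistics and stored BatchNorm statistics. Minimizing this over
    the INPUTS yields surrogate calibration data (sketch of the idea)."""
    mu = acts.mean(axis=0)
    var = acts.var(axis=0)
    return np.sum((mu - bn_mean) ** 2) + np.sum((var - bn_var) ** 2)

acts = np.random.randn(64, 32)                        # batch x channels
print(bn_stat_loss(acts, np.zeros(32), np.ones(32)))  # near 0 for N(0,1)
```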

MLPerf Inference Benchmark

Nov 06, 2019
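
Inference benchmarks of this kind report latency distributions, including tail percentiles, rather than a single mean. A toy measurement harness in that spirit (not MLPerf's LoadGen or its scenario rules):

```python
import time
import numpy as np

def measure_latency(fn, x, runs=100):
    """Time repeated calls to fn(x) and report mean and 99th-percentile
    latency in seconds (a toy stand-in for a benchmark harness)."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(x)
        times.append(time.perf_counter() - t0)
    return np.mean(times), np.percentile(times, 99)

print(measure_latency(np.sort, np.random.rand(100_000)))
```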

Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency

Aug 12, 2019
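
Mixed-size training draws a different input resolution at different training steps. A toy numpy step using nearest-neighbor resizing (a hypothetical schedule; the paper's sampling policy and normalization handling differ):

```python
import numpy as np

def resize_nn(imgs, size):
    """Nearest-neighbor resize of a (N, H, W) batch to (N, size, size)."""
    n, h, w = imgs.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return imgs[:, ys][:, :, xs]

def mixed_size_step(batch, sizes=(96, 128, 160), rng=None):
    """Pick a random resolution for this training step."""
    rng = rng or np.random.default_rng(0)
    return resize_nn(batch, int(rng.choice(sizes)))

batch = np.random.rand(4, 224, 224)
print(mixed_size_step(batch).shape)  # e.g. (4, 128, 128)
```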

Augment your batch: better training with larger batches

Jan 27, 2019
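
Batch augmentation replicates each sample several times within one batch, each copy under a different random transform, yielding a larger effective batch from the same examples. A minimal numpy sketch with hypothetical toy transforms:

```python
import numpy as np

def augment_batch(batch, transforms, repeats=4, rng=None):
    """Replicate every sample `repeats` times, each copy with an
    independently drawn transform from `transforms`."""
    rng = rng or np.random.default_rng(0)
    out = []
    for x in batch:
        for _ in range(repeats):
            t = transforms[rng.integers(len(transforms))]
            out.append(t(x))
    return np.stack(out)

flip = lambda x: x[:, ::-1]                     # horizontal flip
noise = lambda x: x + 0.01 * np.random.randn(*x.shape)
images = np.random.rand(8, 32, 32)              # a toy batch
big = augment_batch(images, [flip, noise, lambda x: x])
print(big.shape)                                # (32, 32, 32): 8 samples x 4 repeats
```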

Scalable Methods for 8-bit Training of Neural Networks

Jun 17, 2018
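
Low-precision training schemes typically fake-quantize tensors on the fly, and stochastic rounding is a standard ingredient for keeping gradient quantization unbiased. A toy int8 fake-quantizer with stochastic rounding (illustrative only; not the paper's exact scheme, which also covers range handling across layers):

```python
import numpy as np

def quant_int8(x, rng=None):
    """Fake-quantize to 8 bits with stochastic rounding:
    floor(y + u), u ~ U[0, 1), rounds y up with probability frac(y),
    so the rounding is unbiased in expectation."""
    rng = rng or np.random.default_rng(0)
    scale = np.abs(x).max() / 127 + 1e-12
    y = x / scale
    q = np.floor(y + rng.random(y.shape))
    return np.clip(q, -128, 127) * scale

g = np.random.randn(1024) * 0.01
print(np.abs(g - quant_int8(g)).max())  # elementwise error bounded by ~scale
```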