Victor Quétu

Memory-Optimized Once-For-All Network
Sep 05, 2024

LaCoOT: Layer Collapse through Optimal Transport
Jun 13, 2024

The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth
Apr 27, 2024

NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer
Apr 24, 2024

The Quest of Finding the Antidote to Sparse Double Descent
Aug 31, 2023

Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?
Aug 18, 2023

Sparse Double Descent in Vision Transformers: real or phantom threat?
Jul 26, 2023

Dodging the Sparse Double Descent
Mar 02, 2023

Can we avoid Double Descent in Deep Neural Networks?
Mar 02, 2023