Kamil Adamczewski

Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient
Feb 07, 2025

Shapley Pruning for Neural Network Compression
Jul 19, 2024

Joint or Disjoint: Mixing Training Regimes for Early-Exit Models
Jul 19, 2024

AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale
Apr 04, 2024

Scaling Laws for Fine-Grained Mixture of Experts
Feb 12, 2024

Pre-Pruning and Gradient-Dropping Improve Differentially Private Image Classification
Jun 19, 2023

Lidar Line Selection with Spatially-Aware Shapley Value for Cost-Efficient Depth Completion
Mar 21, 2023

Differential Privacy Meets Neural Network Pruning
Mar 08, 2023

Differentially Private Neural Tangent Kernels for Privacy-Preserving Data Generation
Mar 03, 2023

Revisiting Random Channel Pruning for Neural Network Compression
May 11, 2022