
Matthieu Arzel

IMT Atlantique - MEE, Lab-STICC_2AI

FLoCoRA: Federated learning compression with low-rank adaptation
Jun 20, 2024

PEFSL: A deployment Pipeline for Embedded Few-Shot Learning on a FPGA SoC
Apr 30, 2024

Federated learning compression designed for lightweight communications
Oct 23, 2023

Energy Consumption Analysis of pruned Semantic Segmentation Networks on an Embedded GPU
Jun 13, 2022

Leveraging Structured Pruning of Convolutional Neural Networks
Jun 13, 2022

Continuous Pruning of Deep Convolutional Networks Using Selective Weight Decay
Dec 22, 2020

Efficient Hardware Implementation of Incremental Learning and Inference on Chip
Nov 18, 2019

Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks
Dec 29, 2018

Transfer Incremental Learning using Data Augmentation
Oct 04, 2018