
Wei-Ming Chen

Tiny Machine Learning: Progress and Futures
Mar 29, 2024

PockEngine: Sparse and Efficient Fine-tuning in a Pocket
Oct 26, 2023

On-Device Training Under 256KB Memory
Jul 14, 2022

Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation
May 03, 2022

MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
Oct 28, 2021

MCUNet: Tiny Deep Learning on IoT Devices
Jul 20, 2020