Franck Cappello

Deep Optimizer States: Towards Scalable Training of Transformer Models Using Interleaved Offloading

Oct 26, 2024

FT K-Means: A High-Performance K-Means on GPU with Fault Tolerance

Aug 02, 2024

DataStates-LLM: Lazy Asynchronous Checkpointing for Large Language Models

Jun 15, 2024

Understanding The Effectiveness of Lossy Compression in Machine Learning Training Sets

Mar 23, 2024

SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks

Sep 07, 2023

Exploring Autoencoder-Based Error-Bounded Compression for Scientific Data

May 25, 2021

Algorithm-Based Fault Tolerance for Convolutional Neural Networks

Mar 27, 2020

DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

Jan 26, 2019