Jiadi Jiang

EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models

Dec 10, 2024

AGD: an Auto-switchable Optimizer using Stepwise Gradient Difference for Preconditioning Matrix

Dec 04, 2023

Sharpness-Aware Minimization Revisited: Weighted Sharpness as a Regularization Term

May 25, 2023