Zheng Chai

Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models

Jan 04, 2024

Staleness-Alleviated Distributed GNN Training via Online Dynamic-Embedding Prediction

Aug 25, 2023

Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization

May 31, 2022

LOF: Structure-Aware Line Tracking based on Optical Flow

Sep 17, 2021

Asynchronous Federated Learning for Sensor Data with Concept Drift

Sep 01, 2021

Method Towards CVPR 2021 Image Matching Challenge

Aug 11, 2021

Method Towards CVPR 2021 SimLocMatch Challenge

Aug 11, 2021

Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM framework

May 20, 2021

FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data

Oct 12, 2020

Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training

Sep 16, 2020