
Jingyang Zhu

Partial Knowledge Distillation for Alleviating the Inherent Inter-Class Discrepancy in Federated Learning

Nov 23, 2024

Hierarchical Learning and Computing over Space-Ground Integrated Networks

Aug 26, 2024

Satellite Federated Edge Learning: Architecture Design and Convergence Analysis

Apr 02, 2024

Over-the-Air Federated Learning and Optimization

Oct 16, 2023

Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation

Apr 03, 2021

Fast Convergence Algorithm for Analog Federated Learning

Oct 30, 2020

SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity

Nov 03, 2017