Zhezhi He

Obtaining Optimal Spiking Neural Network in Sequence Learning via CRNN-SNN Conversion
Aug 18, 2024

Scaling Virtual World with Delta-Engine
Aug 11, 2024

BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation
Jul 12, 2024

SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN
Jun 05, 2024

CLLMs: Consistency Large Language Models
Mar 08, 2024

Model Extraction Attacks on Split Federated Learning
Mar 13, 2023

ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning
May 09, 2022

CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction
Mar 09, 2022

N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores
Dec 15, 2021

SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network
Mar 02, 2021