
Xiaoxin Cui

A High Energy-Efficiency Multi-core Neuromorphic Architecture for Deep SNN Training

Dec 10, 2024

Towards Lossless ANN-SNN Conversion under Ultra-Low Latency with Dual-Phase Optimization

May 16, 2022
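The paper's dual-phase optimization is not reproduced here, but the premise of ANN-SNN conversion in general is that a rate-coded integrate-and-fire (IF) neuron's firing rate approximates a ReLU activation, so a trained ANN can be mapped to an SNN. A minimal illustrative sketch (the `if_neuron_rate` helper is hypothetical, not from the paper):

```python
def if_neuron_rate(x, threshold=1.0, timesteps=100):
    """Simulate an integrate-and-fire neuron driven by constant input x.

    Returns the average firing rate, which approximates
    max(0, x) / threshold -- the basis of rate-coded ANN-SNN conversion.
    """
    v = 0.0
    spikes = 0
    for _ in range(timesteps):
        v += x                   # integrate the input current
        if v >= threshold:
            spikes += 1
            v -= threshold       # soft reset (reset by subtraction)
    return spikes / timesteps

# The rate tracks ReLU: negative inputs never spike; positive inputs
# spike roughly x / threshold of the time. Fewer timesteps (lower
# latency) make this approximation coarser, which is the accuracy
# gap that conversion methods aim to close.
for x in (-0.5, 0.0, 0.3, 0.7):
    print(f"input {x:+.1f} -> rate {if_neuron_rate(x):.2f}")
```

The finite-timestep quantization of the rate is exactly why ultra-low-latency conversion is hard: with few timesteps, only a few discrete rates are representable.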

Integer-Only Neural Network Quantization Scheme Based on Shift-Batch-Normalization

May 28, 2021
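The paper's exact scheme is not shown here, but the general idea its title suggests — constraining batch-normalization scales to powers of two so that normalization reduces to bit shifts and inference stays integer-only — can be sketched as follows (`nearest_pow2_shift` and `shift_bn` are illustrative helpers, not the paper's API):

```python
import math

def nearest_pow2_shift(scale):
    """Round a positive float scale to the nearest power of two,
    returned as a shift amount: scale ~= 2 ** shift."""
    return round(math.log2(scale))

def shift_bn(x_int, shift):
    """Apply a power-of-two 'batch-norm' scale to an integer value
    using only bit shifts -- no floating-point multiply or divide."""
    if shift >= 0:
        return x_int << shift
    return x_int >> (-shift)    # arithmetic right shift for 2**-k scales

# e.g. a learned BN scale of 0.26 is approximated by 2**-2 = 0.25,
# so normalization becomes a right shift by 2.
s = nearest_pow2_shift(0.26)    # -2
print(shift_bn(100, s))         # 100 >> 2 = 25
```

Restricting scales this way trades a small approximation error for hardware-friendly arithmetic, since shifters are far cheaper than multipliers in integer-only datapaths.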