Zhengyu Ma

ETTFS: An Efficient Training Framework for Time-to-First-Spike Neuron

Oct 31, 2024

SVFormer: A Direct Training Spiking Transformer for Efficient Video Action Recognition

Jun 21, 2024

Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection

May 16, 2024

Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods

May 06, 2024

QKFormer: Hierarchical Spiking Transformer using Q-K Attention

Mar 25, 2024

Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder

Feb 28, 2024

Auto-Spikformer: Spikformer Architecture Search

Jun 01, 2023

Temporal Contrastive Learning for Spiking Neural Networks

May 23, 2023

Enhancing the Performance of Transformer-based Spiking Neural Networks by SNN-optimized Downsampling with Precise Gradient Backpropagation

May 19, 2023

Parallel Spiking Neurons with High Efficiency and Long-term Dependencies Learning Ability

Apr 25, 2023