
Chenxi Gu

SpikeBERT: A Language Spikformer Trained with Two-Stage Knowledge Distillation from BERT

Aug 30, 2023

Watermarking Pre-trained Language Models with Backdooring

Oct 14, 2022