Cenyuan Zhang

Enhancing the Capability and Robustness of Large Language Models through Reinforcement Learning-Driven Query Refinement (Jul 01, 2024)

Towards Biologically Plausible Computing: A Comprehensive Comparison (Jun 23, 2024)

Promoting Data and Model Privacy in Federated Learning through Quantized LoRA (Jun 16, 2024)

Decoding Continuous Character-based Language from Non-invasive Brain Recordings (Mar 19, 2024)

Advancing Parameter Efficiency in Fine-tuning via Representation Editing (Feb 28, 2024)

Aligning Large Language Models with Human Preferences through Representation Engineering (Dec 26, 2023)

SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network (Oct 12, 2023)

SpikeBERT: A Language Spikformer Trained with Two-Stage Knowledge Distillation from BERT (Aug 30, 2023)

Improving the Adversarial Robustness of NLP Models by Information Bottleneck (Jun 11, 2022)

Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models (Jun 01, 2021)