
Publications by Yongchang Hao

Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models
Dec 11, 2024

NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks
Oct 28, 2024

Ginger: An Efficient Curvature Approximation with Linear Complexity for General Neural Networks
Feb 05, 2024

Flora: Low-Rank Adapters Are Secretly Gradient Compressors
Feb 05, 2024
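
The Flora title above frames low-rank adapters as gradient compressors. As a rough illustration of that general idea only (not the paper's actual algorithm), the sketch below accumulates gradients through a random low-rank projection and reconstructs an approximation on demand; the matrix sizes, rank, and random seed are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch: compress a gradient matrix with a random low-rank
# projection, accumulate in the compressed space, then reconstruct an
# approximation. Shapes and rank are arbitrary placeholder choices.
rng = np.random.default_rng(0)
m, n, r = 512, 1024, 16                        # gradient is m x n, projection rank r

P = rng.standard_normal((n, r)) / np.sqrt(r)   # fixed random projection matrix

acc = np.zeros((m, r))                         # compressed accumulator (m x r, not m x n)
for step in range(8):
    grad = rng.standard_normal((m, n))         # stand-in for a real gradient
    acc += grad @ P                            # store only the projected gradient

approx = acc @ P.T                             # map the accumulator back to m x n
print(approx.shape)                            # (512, 1024), kept as m*r numbers per step
```

The memory saving in this toy version comes from keeping only the m x r accumulator instead of the full m x n gradient sum.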

Teacher Forcing Recovers Reward Functions for Text Generation
Oct 17, 2022
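
The entry above builds on standard teacher forcing, i.e. conditioning each next-token prediction on the ground-truth prefix during training. The snippet below is a minimal, generic PyTorch illustration of teacher forcing for a toy next-token model; the architecture, sizes, and data are placeholder assumptions and it does not implement the paper's reward-recovery result.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic teacher-forcing training loop for a toy next-token model.
# Vocabulary size, dimensions, and training data are placeholder assumptions.
vocab, dim, batch, length = 100, 32, 4, 10

embed = nn.Embedding(vocab, dim)
rnn = nn.GRU(dim, dim, batch_first=True)
head = nn.Linear(dim, vocab)
opt = torch.optim.Adam([*embed.parameters(), *rnn.parameters(), *head.parameters()])

tokens = torch.randint(0, vocab, (batch, length))    # stand-in training sequences

for _ in range(5):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # teacher forcing: every step is
    hidden, _ = rnn(embed(inputs))                   # conditioned on the ground-truth prefix
    logits = head(hidden)
    loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```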

An Equal-Size Hard EM Algorithm for Diverse Dialogue Generation
Sep 29, 2022

Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation
Mar 16, 2022

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation
Oct 24, 2020