Mingyi Hong

Safeguarding Text-to-Image Generation via Inference-Time Prompt-Noise Optimization

Dec 05, 2024

Downlink MIMO Channel Estimation from Bits: Recoverability and Algorithm

Nov 25, 2024

Unraveling the Gradient Descent Dynamics of Transformers

Nov 12, 2024

Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate

Oct 29, 2024

DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction

Oct 04, 2024

DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction

Aug 24, 2024

Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback

Jun 11, 2024

SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining

Jun 04, 2024

Tuning-Free Alignment of Diffusion Models with Direct Noise Optimization

May 29, 2024

Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment

May 29, 2024