Charlie Hou

PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs

Jun 05, 2024

On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?

Feb 29, 2024

Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity

Jul 31, 2023

Privately Customizing Prefinetuning to Better Match User Data in Federated Learning

Feb 23, 2023

Reducing the Communication Cost of Federated Learning through Multistage Optimization

Aug 16, 2021

Efficient Algorithms for Federated Saddle Point Optimization

Feb 12, 2021