
Chaosheng Dong

Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning

Apr 22, 2024

Towards Generalized Inverse Reinforcement Learning

Feb 11, 2024

Bandit Learning to Rank with Position-Based Click Models: Personalized and Equal Treatments

Nov 08, 2023

Federated Multi-Objective Learning

Oct 15, 2023

G-STO: Sequential Main Shopping Intention Detection via Graph-Regularized Stochastic Transformer

Jun 25, 2023

AdaSelection: Accelerating Deep Learning Training through Data Subsampling

Jun 19, 2023

Multi-Label Learning to Rank through Multi-Objective Optimization

Jul 08, 2022

Incentivized Bandit Learning with Self-Reinforcing User Preferences

May 31, 2021

One Backward from Ten Forward, Subsampling for Large-Scale Deep Learning

Apr 27, 2021

Learning Time Varying Risk Preferences from Investment Portfolios using Inverse Optimization with Applications on Mutual Funds

Oct 22, 2020