
Qiyuan Deng

Representation-based Reward Modeling for Efficient Safety Alignment of Large Language Model

Mar 13, 2025

XRoute Environment: A Novel Reinforcement Learning Environment for Routing

May 23, 2023

Fed-TDA: Federated Tabular Data Augmentation on Non-IID Data

Nov 22, 2022