Jiancan Wu

Unified Parameter-Efficient Unlearning for LLMs

Nov 30, 2024

RosePO: Aligning LLM-based Recommenders with Human Values

Oct 16, 2024

α-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs

Oct 14, 2024

Text-guided Diffusion Model for 3D Molecule Generation

Oct 04, 2024

Customizing Language Models with Instance-wise LoRA for Sequential Recommendation

Aug 19, 2024

Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization

Aug 03, 2024

Adaptive Self-supervised Robust Clustering for Unstructured Data with Unknown Cluster Number

Jul 29, 2024

Reinforced Prompt Personalization for Recommendation with Large Language Models

Jul 24, 2024

β-DPO: Direct Preference Optimization with Dynamic β

Jul 11, 2024

Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization

Jul 10, 2024