Jitao Sang

OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning

Dec 22, 2024

o1-Coder: an o1 Replication for Coding

Nov 29, 2024

Don't Command, Cultivate: An Exploratory Study of System-2 Alignment

Nov 26, 2024

VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding

Nov 24, 2024

Debiasing Vision-Language Models with Text-Only Training

Oct 12, 2024

AnyAttack: Towards Large-scale Self-supervised Generation of Targeted Adversarial Examples for Vision-Language Models

Oct 07, 2024

ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models

Sep 14, 2024

A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers

Aug 19, 2024

KG-FPQ: Evaluating Factuality Hallucination in LLMs with Knowledge Graph-based False Premise Questions

Jul 08, 2024

DenoiseReID: Denoising Model for Representation Learning of Person Re-Identification

Jun 13, 2024