Jiahui Gao

Contrast Similarity-Aware Dual-Pathway Mamba for Multivariate Time Series Node Classification

Nov 19, 2024

Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

Oct 22, 2024

Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning

Oct 18, 2024

ProReason: Multi-Modal Proactive Reasoning with Decoupled Eyesight and Wisdom

Oct 18, 2024

CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration

Sep 17, 2024

Jailbreaking as a Reward Misspecification Problem

Jun 20, 2024

Mixture of insighTful Experts: The Synergy of Thought Chains and Expert Mixtures in Self-Alignment

May 01, 2024

Learning From Correctness Without Prompting Makes LLM Efficient Reasoner

Mar 28, 2024

Learning to Edit: Aligning LLMs with Knowledge Editing

Feb 19, 2024

Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models

Feb 12, 2024