Xiting Wang

Controlling Large Language Models Through Concept Activation Vectors

Jan 10, 2025

Large Language Models show both individual and collective creativity comparable to humans

Dec 04, 2024

Thought Space Explorer: Navigating and Expanding Thought Space for Large Language Model Reasoning

Oct 31, 2024

BSharedRAG: Backbone Shared Retrieval-Augmented Generation for the E-commerce Domain

Sep 30, 2024

See or Guess: Counterfactually Regularized Image Captioning

Aug 29, 2024

RATT: A Thought Structure for Coherent and Correct LLM Reasoning

Jun 09, 2024

Prototypical Reward Network for Data-Efficient RLHF

Jun 06, 2024

RATT: A Thought Structure for Coherent and Correct LLM Reasoning

Jun 04, 2024

Evaluating Concept-based Explanations of Language Models: A Study on Faithfulness and Readability

Apr 30, 2024

Uncovering Safety Risks in Open-source LLMs through Concept Activation Vector

Apr 18, 2024