
Carol Chen

CoLLAP: Contrastive Long-form Language-Audio Pretraining with Musical Temporal Structure Augmentation

Oct 03, 2024

FUTGA: Towards Fine-grained Music Understanding through Temporally-enhanced Generative Augmentation

Jul 29, 2024

Question Decomposition Improves the Faithfulness of Model-Generated Reasoning

Jul 25, 2023

Towards Measuring the Representation of Subjective Global Opinions in Language Models

Jun 28, 2023

Constitutional AI: Harmlessness from AI Feedback

Dec 15, 2022

Toy Models of Superposition

Sep 21, 2022

Mitigating harm in language models with conditional-likelihood filtration

Sep 04, 2021