Taylor Sorensen

Value Profiles for Encoding Human Variation
Mar 19, 2025

Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models
Mar 15, 2025

Can Language Models Reason about Individualistic Human Values and Preferences?
Oct 04, 2024

Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration
Jun 22, 2024

A Roadmap to Pluralistic Alignment
Feb 07, 2024

NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Dec 10, 2023

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Sep 02, 2023

Towards Coding Social Science Datasets with Language Models
Jun 03, 2023

Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
May 26, 2023

Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models
Oct 06, 2022