
Taylor Sorensen

Can Language Models Reason about Individualistic Human Values and Preferences?

Oct 04, 2024

Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration

Jun 22, 2024

A Roadmap to Pluralistic Alignment

Feb 07, 2024

NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation

Dec 10, 2023

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties

Sep 02, 2023

Towards Coding Social Science Datasets with Language Models

Jun 03, 2023

Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing

May 26, 2023

Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models

Oct 06, 2022

An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels

Mar 21, 2022

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

Dec 06, 2021