David Wingate

Features that Make a Difference: Leveraging Gradients for Improved Dictionary Learning

Nov 15, 2024

Towards Coding Social Science Datasets with Language Models

Jun 03, 2023

AI Chat Assistants can Improve Conversations about Divisive Topics

Feb 21, 2023

Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models

Oct 06, 2022

Out of One, Many: Using Language Models to Simulate Human Samples

Sep 14, 2022

An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels

Mar 21, 2022

Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning

Oct 05, 2021

Towards Neural Programming Interfaces

Dec 10, 2020

Human-robot co-manipulation of extended objects: Data-driven models and control from analysis of human-human dyads

Jan 03, 2020

Using Logical Specifications of Objectives in Multi-Objective Reinforcement Learning

Oct 03, 2019