Joseph Jay Williams

Opportunities for Adaptive Experiments to Enable Continuous Improvement that Trades-off Instructor and Researcher Incentives

Oct 18, 2023

Using Adaptive Bandit Experiments to Increase and Investigate Engagement in Mental Health

Oct 13, 2023

Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception

Oct 13, 2023

ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models

Oct 10, 2023

Getting too personal(ized): The importance of feature choice in online adaptive algorithms

Sep 06, 2023

Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning

Nov 22, 2022

Using Adaptive Experiments to Rapidly Help Students

Aug 10, 2022

Increasing Students' Engagement to Reminder Emails Through Multi-Armed Bandits

Aug 10, 2022

Reinforcement Learning in Modern Biostatistics: Constructing Optimal Adaptive Interventions

Mar 04, 2022

Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments

Mar 26, 2021