Kush R. Varshney

Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI
Oct 20, 2024

Value Alignment from Unstructured Text
Aug 19, 2024

Contextual Moral Value Alignment Through Context-Based Aggregation
Mar 19, 2024

A resource-constrained stochastic scheduling algorithm for homeless street outreach and gleaning edible food
Mar 15, 2024

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations
Mar 09, 2024

Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations
Mar 08, 2024

Rethinking Machine Unlearning for Large Language Models
Feb 15, 2024

Empathy and the Right to Be an Exception: What LLMs Can and Cannot Do
Jan 25, 2024

Decolonial AI Alignment: Viśeṣadharma, Argument, and Artistic Expression
Sep 10, 2023

Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models
May 22, 2023