Swabha Swayamdipta

Every Language Model Has a Forgery-Resistant Signature
Oct 15, 2025

ELI-Why: Evaluating the Pedagogical Utility of Language Model Explanations
Jun 17, 2025

Teaching Models to Understand (but not Generate) High-risk Data
May 05, 2025

Improving LLM Personas via Rationalization with Psychological Scaffolds
Apr 25, 2025

Evaluating Evaluation Metrics -- The Mirage of Hallucination Detection
Apr 25, 2025

Evaluation Under Imperfect Benchmarks and Ratings: A Case Study in Text Simplification
Apr 15, 2025

Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge
Mar 06, 2025

Political-LLM: Large Language Models in Political Science
Dec 09, 2024

Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks?
Aug 26, 2024

Out-of-Distribution Detection through Soft Clustering with Non-Negative Kernel Regression
Jul 18, 2024