Shashank Sonkar

Do LLMs Make Mistakes Like Students? Exploring Natural Alignment between Language Models and Human Error Patterns

Feb 21, 2025

The Imitation Game for Educational AI

Feb 21, 2025

LLM-based Cognitive Models of Students with Misconceptions

Oct 17, 2024

MalAlgoQA: A Pedagogical Approach for Evaluating Counterfactual Reasoning Abilities

Jul 01, 2024

Many-Shot Regurgitation (MSR) Prompting

May 13, 2024

Regressive Side Effects of Training Language Models to Mimic Student Misconceptions

Apr 23, 2024

Marking: Visual Grading with Highlighting Errors and Annotating Missing Bits

Apr 22, 2024

Automated Long Answer Grading with RiceChem Dataset

Apr 22, 2024

Pedagogical Alignment of Large Language Models

Feb 07, 2024

Novice Learner and Expert Tutor: Evaluating Math Reasoning Abilities of Large Language Models with Misconceptions

Oct 03, 2023