Abstract: Claims about whether large language model (LLM) chatbots "reason" are typically debated using curated benchmarks and laboratory-style evaluation protocols. This paper offers a complementary perspective: a student-led field experiment embedded as a midterm project in UNIV 182 (AI4All) at George Mason University, a Mason Core course designed for undergraduates across disciplines with no expected prior STEM exposure. Student teams designed their own reasoning tasks, ran them on widely used consumer chatbots representative of current capabilities, and evaluated both (i) answer correctness and (ii) the validity of the chatbot's stated reasoning (for example, cases where an answer is correct but the explanation is not, or vice versa). Across the eight teams that reported standardized scores, students contributed 80 original reasoning prompts spanning six categories: pattern completion, transformation rules, spatial/visual reasoning, quantitative reasoning, relational/logic reasoning, and analogical reasoning. These prompts yielded 320 model responses plus follow-up explanations. Aggregating team-level results, OpenAI GPT-5 and Claude 4.5 achieved the highest mean answer accuracy (86.2% and 83.8%, respectively), followed by Grok 4 (82.5%) and Perplexity (73.1%); explanation validity showed a similar ordering (81.2%, 80.0%, 77.5%, and 66.2%). Qualitatively, teams converged on a consistent error signature: strong performance on short, structured math and pattern items but reduced reliability on spatial/visual reasoning and multi-step transformations, with frequent "sound right but reason wrong" explanations. The assignment's primary contribution is pedagogical: it operationalizes AI literacy as experimental practice (prompt design, measurement, rater disagreement, and interpretability/grounding) while producing a reusable, student-generated corpus of reasoning probes grounded in authentic end-user interaction.
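For concreteness, here is a minimal Python sketch of how per-model means like those quoted above could be aggregated from team-level scores. The data layout is hypothetical (the teams' actual scoring sheets are not reproduced here), and the numbers below are placeholders, not the study's data:

```python
# Illustrative sketch only (assumed data layout, not the paper's code):
# each team reports, for each chatbot, the fraction of its prompts
# answered correctly ("accuracy") and the fraction whose stated
# reasoning was judged valid ("validity"); per-model means are then
# simple averages over teams.
from statistics import mean

# Hypothetical scores for two of the eight teams (fractions in [0, 1]).
reports = [
    {"model": "GPT-5",      "accuracy": 0.9, "validity": 0.8},
    {"model": "GPT-5",      "accuracy": 0.8, "validity": 0.9},
    {"model": "Claude 4.5", "accuracy": 0.9, "validity": 0.7},
    {"model": "Claude 4.5", "accuracy": 0.8, "validity": 0.9},
]

def aggregate(reports, metric):
    """Mean of `metric` over all team reports, grouped by model."""
    models = {r["model"] for r in reports}
    return {
        m: mean(r[metric] for r in reports if r["model"] == m)
        for m in models
    }

print(aggregate(reports, "accuracy"))   # e.g. {'GPT-5': 0.85, ...}
print(aggregate(reports, "validity"))
```

Separating the two metrics in this way is what surfaces the "correct answer, invalid explanation" cases the abstract highlights.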
Abstract: We consider the problem of inverting the artifacts associated with scanning a page from an open book, i.e., "xeroxing." The process typically leads to a non-uniform combination of distortion, blurring, and darkening because the page is bound to a stiff spine, which causes the sheet of paper to bend inhomogeneously. Complementing purely data-driven approaches, we use knowledge of the geometry and elasticity of the curved sheet to pose and solve a minimal, physically consistent inverse problem that reconstructs the image. Our results rely on three dimensionless parameters, all of which can be measured for a given scanner, and show that we can improve on purely data-driven approaches. More broadly, our results might serve as a "textbook" example and tutorial on how knowledge of generative mechanisms can speed up the solution of inverse problems.
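As a toy illustration of the general strategy, and not the paper's actual model, the following Python sketch fits a parameterized generative mechanism and divides it out. The assumptions here are entirely ours: a one-dimensional page, an exponential lift-off profile near the spine, and exponential shading with height; the paper's three dimensionless parameters and its elastic-sheet geometry are not reproduced:

```python
# Toy 1-D sketch of a knowledge-based inverse problem (assumptions ours):
# a page lifted off the scanner glass near the spine darkens the scan;
# if the lift-off mechanism is known up to a few parameters, those
# parameters can be fit and the darkening inverted.
import numpy as np

x = np.linspace(0.0, 1.0, 512)                  # position across the page
true_page = (np.sin(40 * np.pi * x) > 0) * 1.0  # toy "text": equal stripes

def forward(page, a, d):
    """Assumed generative model: height a*exp(-d*x) above the glass
    attenuates the scanned intensity exponentially."""
    height = a * np.exp(-d * x)
    return page * np.exp(-height / 0.1)

scan = forward(true_page, a=0.2, d=5.0)         # synthetic distorted scan

# Inverse step: choose (a, d) so the unshaded image has a flat local
# mean (constant background illumination), then divide the shading out.
kernel = np.ones(64) / 64.0
def flatness(params):
    unshaded = scan / forward(np.ones_like(x), *params)
    local_mean = np.convolve(unshaded, kernel, mode="valid")
    return np.var(local_mean)

grid = [(a, d) for a in np.linspace(0.05, 0.4, 20)
               for d in np.linspace(1.0, 10.0, 20)]
a_hat, d_hat = min(grid, key=flatness)
recovered = scan / forward(np.ones_like(x), a_hat, d_hat)
```

The point of the sketch is the abstract's broader claim: a low-dimensional generative mechanism shrinks the inverse problem to a small parameter search, rather than a fit over all pixels.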