
Stefano Palminteri

Evolving choice hysteresis in reinforcement learning: comparing the adaptive value of positivity bias and gradual perseveration

Oct 25, 2024

The Moral Turing Test: Evaluating Human-LLM Alignment in Moral Decision-Making

Oct 09, 2024

Assessing Contamination in Large Language Models: Introducing the LogProber method

Aug 26, 2024

Large Language Models are Biased Reinforcement Learners

May 19, 2024

Inferring the Phylogeny of Large Language Models and Predicting their Performances in Benchmarks

Apr 06, 2024

Modelling crypto markets by multi-agent reinforcement learning

Feb 16, 2024

Relative Value Biases in Large Language Models

Jan 25, 2024

Studying and improving reasoning in humans and machines

Sep 21, 2023