Mehdi Khamassi

ISIR

Morality is Contextual: Learning Interpretable Moral Contexts from Human Data with Probabilistic Clustering and Large Language Models

Dec 24, 2025

Semantic Deception: When Reasoning Models Can't Compute an Addition

Dec 23, 2025

Strong and weak alignment of large language models with human values

Aug 05, 2024

Purpose for Open-Ended Learning Robots: A Computational Taxonomy, Definition, and Operationalisation

Mar 04, 2024

DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics

May 13, 2020

Coping with the variability in humans reward during simulated human-robot interactions through the coordination of multiple learning strategies

May 06, 2020

How to reduce computation time while sparing performance during robot navigation? A neuro-inspired architecture for autonomous shifting between model-based and model-free learning

Apr 30, 2020

A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention task

Dec 01, 2018

Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays

Aug 13, 2018

Adaptive coordination of working-memory and reinforcement learning in non-human primates performing a trial-and-error problem solving task

Nov 02, 2017