Abstract: The increasing deployment of AI systems calls for personalized explanations of their behavior, since different stakeholders bring different knowledge and backgrounds. In general, a conversation between an explainer and an explainee not only allows the explainer to learn the explainee's background, but also allows the explainee to better understand the explanations. In this paper, we propose an approach by which an explainer communicates personalized explanations to an explainee through a series of consecutive conversations. We prove that the conversation terminates with the explainee justifying the initial claim whenever there exists an explanation of that claim which the explainee understands and of which the explainer is aware.
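As a hedged illustration only (not the paper's algorithm): the conversational procedure can be pictured as a loop in which the explainer offers candidate explanations it is aware of, and the explainee either accepts one it understands or the dialogue moves on. All names below (explain_dialogue, explainee_understands) are hypothetical, and the acceptance test is a stand-in for the explainee's background knowledge.

# Hypothetical sketch of the explainer-explainee dialogue loop.
# The names and the acceptance predicate are assumptions for
# illustration, not the paper's actual protocol.

def explain_dialogue(claim, explainer_explanations, explainee_understands):
    """Offer explanations for `claim` until one is understood or none remain.

    explainer_explanations: candidate explanations the explainer is aware of.
    explainee_understands: predicate modeling the explainee's background;
        returns True if the explainee understands the given explanation.
    """
    for explanation in explainer_explanations:
        if explainee_understands(explanation):
            # The explainee can now justify the initial claim: terminate.
            return explanation
        # Otherwise the (modeled) feedback lets the explainer try the next
        # candidate; a richer model would update its beliefs here.
    return None  # No mutually intelligible explanation exists.

# Toy usage: the termination guarantee in the abstract corresponds to the
# loop halting with a non-None result whenever some candidate explanation
# both exists and is understood by the explainee.
result = explain_dialogue(
    "the system denied the loan",
    ["low credit score", "insufficient income history"],
    lambda e: e == "insufficient income history",
)
assert result == "insufficient income history"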
Abstract: When we say "I know why he was late", we know not only the fact that he was late but also an explanation of that fact. We propose a logical framework of "knowing why" inspired by existing formal studies of why-questions, scientific explanation, and justification logic. We introduce the operator Ky_i into the language of epistemic logic to express "agent i knows why phi", and propose a Kripke-style semantics for such expressions in terms of knowing an explanation of phi. We obtain two sound and complete axiomatizations w.r.t. two different model classes, corresponding to different assumptions about introspection.
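As an illustrative sketch only (the paper's exact semantic clauses may differ): one natural Kripke-style truth condition for Ky_i evaluates models that, in addition to accessibility relations R_i, carry an assumed assignment E(v, phi) of admissible explanations of phi at each world v. Knowing why then amounts to having a single explanation that works across all epistemically possible worlds:

\[
\mathcal{M}, w \models \mathit{Ky}_i\,\varphi
\quad\text{iff}\quad
\exists t\; \forall v \,\bigl( w R_i v \;\Rightarrow\; t \in \mathcal{E}(v,\varphi) \text{ and } \mathcal{M}, v \models \varphi \bigr)
\]

Here t ranges over explanation terms in the style of justification logic, and \mathcal{E} is the assumed explanation-assignment function; the quantifier order matters, since the same t must explain phi at every accessible world, which is what distinguishes knowing why phi from merely knowing that phi.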