Abstract: We present an extension-based approach for computing and verifying preferences in an abstract argumentation system. Although numerous argumentation semantics have been developed for identifying acceptable sets of arguments from an argumentation framework, the acceptability they prescribe lacks justification in terms of the implicit preferences over arguments. Preference-based argumentation frameworks allow one to determine which arguments are justified given a set of preferences. Our research considers the inverse of the standard reasoning problem: given an abstract argumentation framework and a set of justified arguments, we compute the possible preferences over arguments. Furthermore, there is a need to verify (i.e., assess) that the computed preferences would indeed lead to the acceptable sets of arguments. This paper presents a novel approach and algorithm for exhaustively computing and enumerating all possible sets of preferences (restricted to three identified cases) for a conflict-free set of arguments in an abstract argumentation framework. We prove the soundness, completeness and termination of the algorithm. In our extension-based approach, preferences are determined after the evaluation phase (i.e., once the acceptability of arguments has been established) rather than being stated beforehand. We focus on grounded, preferred and stable semantics. We show that the complexity of computing sets of preferences is exponential in the number of arguments, and therefore describe an approximate approach and algorithm for computing the preferences. Furthermore, we present novel algorithms for verifying (i.e., assessing) the computed preferences. We provide details of the implementation of the algorithms (the source code is publicly available), the experiments performed to evaluate them, and an analysis of the results.
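To make the inverse reasoning problem concrete, the sketch below is a minimal brute-force illustration, not the paper's algorithm (which enumerates preference sets directly, restricted to the three identified cases). It assumes the common preference-based criterion that an attack from a to b fails when b is strictly preferred to a, restricts candidate preferences to strict total orders, and uses grounded semantics; the example framework and all names are hypothetical.

```python
from itertools import permutations

def grounded_extension(args, attacks):
    """Least fixpoint of the characteristic function: repeatedly take the set of
    arguments defended by the current set, starting from the empty set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((z, b) in attacks for z in ext) for b in attackers[a])}
        if defended == ext:
            return ext
        ext = defended

def defeats(attacks, rank):
    """PAF-style criterion: attack (a, b) succeeds unless b is strictly preferred
    to a (smaller rank = more preferred)."""
    return {(a, b) for (a, b) in attacks if not rank[b] < rank[a]}

def preference_orders_yielding(args, attacks, target):
    """Brute force: strict total orders over the arguments whose induced defeat
    graph has exactly `target` as its grounded extension (illustration only)."""
    for order in permutations(sorted(args)):
        rank = {a: i for i, a in enumerate(order)}
        if grounded_extension(args, defeats(attacks, rank)) == set(target):
            yield order

# Hypothetical example: a and b attack each other, and c attacks b.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("c", "b")}
for order in preference_orders_yielding(args, attacks, {"a", "c"}):
    print(order)  # each qualifying order is printed most-preferred first
```

Enumerating all strict total orders already requires inspecting n! candidates for n arguments, which is one way to see why the exact computation of preference sets is exponential and an approximate algorithm is of interest.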
Abstract: Artificial Intelligence (AI) is increasingly being deployed in practical applications. However, there is a major concern about whether AI systems will be trusted by humans. To establish trust in AI systems, users need to understand the reasoning behind their solutions; therefore, systems should be able to explain and justify their output. In this paper, we propose an argument scheme-based approach to providing explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.
Abstract: Artificial Intelligence (AI) is increasingly being used to develop systems that produce intelligent solutions. However, there is a major concern about whether such systems will be trusted by humans. To establish trust in AI systems, the user needs to understand the reasoning behind their solutions; therefore, the system should be able to explain and justify its output. In this paper, we use argumentation to provide explanations in the domain of AI planning. We present argument schemes to create arguments that explain a plan and its components, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Finally, we present some properties of the plan arguments.
Abstract: Various structured argumentation frameworks utilize preferences as part of their standard inference procedure to enable reasoning with preferences. In this paper, we consider an inverse of the standard reasoning problem, seeking to identify what preferences over assumptions could lead to a given set of conclusions being drawn. We ground our work in the Assumption-Based Argumentation (ABA) framework and present an algorithm which computes and enumerates all possible sets of preferences over the assumptions in the system from which a desired conflict-free set of conclusions can be obtained under a given semantics. After describing our algorithm, we establish its soundness, completeness and complexity.
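To illustrate the ABA objects the algorithm searches over, the following is a minimal sketch of a flat ABA framework: assumptions with contraries, rules, the conclusions derivable from a set of assumptions, and a conflict-freeness check. The toy framework and all names are hypothetical; candidate preference orderings over the assumptions could then be enumerated analogously to the sketch after the first abstract, but this is not the paper's algorithm.

```python
# Toy flat ABA framework (illustrative only, not taken from the paper).
assumptions = {"a", "b", "c"}
contrary = {"a": "p", "b": "q", "c": "r"}   # contrary of each assumption
rules = [("p", {"b"}),                       # p <- b
         ("q", {"c"}),                       # q <- c
         ("s", {"a", "c"})]                  # s <- a, c

def conclusions(asms):
    """Close a set of assumptions under the rules (forward chaining)."""
    closed = set(asms)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= closed and head not in closed:
                closed.add(head)
                changed = True
    return closed

def conflict_free(asms):
    """A set of assumptions is conflict-free if it derives no contrary of its own members."""
    closed = conclusions(asms)
    return all(contrary[a] not in closed for a in asms)

print(conclusions({"a", "c"}))    # {'a', 'c', 'q', 's'}
print(conflict_free({"a", "c"}))  # True
print(conflict_free({"b", "c"}))  # False: {b, c} derives q, the contrary of b
```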