Abstract: This volume contains revised versions of the papers selected for the second volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
Abstract: In this paper, we extend previous work on distributed reasoning with Contextual Defeasible Logic (CDL), which enables decentralised reasoning over a distributed knowledge base in which knowledge from different sources may conflict. However, many use case scenarios cannot be represented in this model. One kind of scenario requires agents to share and reason with relevant knowledge when issuing a query to others. Another kind comprises scenarios in which the bindings among the agents (defined by means of mapping rules) are not static, such as knowledge-intensive and dynamic environments. This work presents a multi-agent model based on CDL that allows agents not only to reason with their local knowledge bases and mapping rules, but also to reason about relevant knowledge (focus), not known to the agents a priori, in the context of a specific query. We present a use case scenario, formalisations of the proposed model, and an initial implementation based on the BDI (Belief-Desire-Intention) agent model.
Abstract: We consider rational agents and focus on the problem of selecting goals out of a set of incompatible ones. We consider three forms of incompatibility introduced by Castelfranchi and Paglieri, namely the terminal, the instrumental (or resource-based), and the superfluity. We represent the agent's plans by means of structured arguments whose premises are pervaded with uncertainty, and we measure the strength of these arguments in order to determine the set of compatible goals. We propose two novel ways of calculating the strength of these arguments, depending on the kind of incompatibility that exists between them. The first is the logical strength value, denoted by a three-dimensional vector calculated from a probabilistic interval associated with each argument. The vector represents the precision of the interval, its location, and the combination of precision and location. This type of representation and treatment of the strength of a structured argument has not previously been defined in the state of the art. The second way of calculating the strength of an argument is based on the cost of the plans (in terms of the necessary resources) and the preference of the goals associated with the plans. Building on this novel approach for measuring the strength of structured arguments, we propose a semantics for the selection of plans and goals based on Dung's abstract argumentation theory. Finally, we give a theoretical evaluation of our proposal.
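The abstract above describes a three-dimensional strength vector derived from a probabilistic interval, but does not give the formulas. The sketch below is therefore only an illustration of the general idea: precision is taken as the interval's narrowness, location as its midpoint, and the third component as their product. These definitions are assumptions of this sketch, not the authors' definitions.

```python
def strength_vector(lower, upper):
    """Illustrative three-dimensional strength vector for an argument
    whose probability lies in the interval [lower, upper].

    The choices below (narrowness, midpoint, product) are hypothetical
    stand-ins for the paper's precision/location/combination measures.
    """
    assert 0.0 <= lower <= upper <= 1.0
    precision = 1.0 - (upper - lower)   # narrower interval => more precise
    location = (lower + upper) / 2.0    # where the interval sits in [0, 1]
    combined = precision * location     # one way to combine both aspects
    return (precision, location, combined)
```

For instance, an argument with interval [0.6, 0.8] gets precision 0.8 and location 0.7 under these illustrative definitions.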
Abstract:Some abstract argumentation approaches consider that arguments have a degree of uncertainty, which impacts on the degree of uncertainty of the extensions obtained from a abstract argumentation framework (AAF) under a semantics. In these approaches, both the uncertainty of the arguments and of the extensions are modeled by means of precise probability values. However, in many real life situations the exact probabilities values are unknown and sometimes there is a need for aggregating the probability values of different sources. In this paper, we tackle the problem of calculating the degree of uncertainty of the extensions considering that the probability values of the arguments are imprecise. We use credal sets to model the uncertainty values of arguments and from these credal sets, we calculate the lower and upper bounds of the extensions. We study some properties of the suggested approach and illustrate it with an scenario of decision making.
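To make the idea of lower and upper bounds from credal sets concrete, here is a minimal Python sketch. It assumes finite credal sets of candidate probabilities per argument and independence between arguments (the so-called constellations reading); the paper's actual model may differ on both counts.

```python
from itertools import product

def extension_probability_bounds(args, credal, extension):
    """Lower/upper probability that exactly the arguments in `extension`
    hold, when each argument's probability is only known to lie in a
    finite credal set of candidate values.

    Illustrative only: assumes independent arguments and finite credal
    sets, enumerating every combination of candidate probabilities.
    """
    values = []
    for assignment in product(*(credal[a] for a in args)):
        p = 1.0
        for a, pa in zip(args, assignment):
            # an argument in the extension contributes pa, one outside 1 - pa
            p *= pa if a in extension else (1.0 - pa)
        values.append(p)
    return min(values), max(values)
```

With credal sets {0.5, 0.7} for argument a and {0.2, 0.4} for b, the probability that a alone holds ranges between 0.3 and 0.56 under these assumptions.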
Abstract: During the first step of practical reasoning, i.e. deliberation or goal selection, an intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieve. Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain their internal decisions. In the context of goal selection, agents should be able to explain the reasoning path that leads them to select (or not) a certain goal. In this article, we use an argumentation-based approach for generating explanations about that reasoning path. Besides, we aim to enrich the explanations with information about the conflicts that emerge during the selection process and how such conflicts were resolved. We propose two types of explanations, a partial one and a complete one, along with a set of explanatory schemes for generating pseudo-natural explanations. Finally, we apply our proposal to the cleaner world scenario.
Abstract: An intelligent agent may in general pursue multiple procedural goals simultaneously, which may give rise to conflicts (incompatibilities) among them. In this paper, we focus on the incompatibilities that emerge due to resource limitations. The contribution of this article is twofold. On the one hand, we give an algorithm for identifying resource incompatibilities in a set of pursued goals; on the other hand, we propose two ways of selecting the goals that will continue to be pursued: (i) the first is based on abstract argumentation theory, and (ii) the second is based on two algorithms developed by us. We illustrate our proposal with examples throughout the article.
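A simplified reading of resource incompatibility can be sketched in a few lines of Python: two goals conflict when their combined demand for some resource exceeds what is available. This pairwise check is an assumption of the sketch, not the paper's actual algorithm, and the goal and resource names are invented for illustration.

```python
def resource_incompatibilities(demands, available):
    """Find pairs of goals whose combined resource demands exceed
    the available amount of some resource.

    `demands` maps goal -> {resource: amount}; `available` maps
    resource -> amount. A simplified pairwise notion of resource
    incompatibility, given for illustration only.
    """
    goals = sorted(demands)
    conflicts = []
    for i, g in enumerate(goals):
        for h in goals[i + 1:]:
            for r, amount in available.items():
                need = demands[g].get(r, 0) + demands[h].get(r, 0)
                if need > amount:
                    conflicts.append((g, h))
                    break  # one over-demanded resource suffices
    return conflicts
```

For example, with a 100-unit battery, a cleaning goal needing 60 units and a patrol goal needing 50 units are pairwise incompatible, while either is compatible with a zero-cost charging goal.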
Abstract: Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain their internal decisions, behaviours, and the reasoning that produces their choices to the humans (or other systems) with which they interact. In this paper, we focus on how an extended model of BDI (Belief-Desire-Intention) agents can generate explanations about their reasoning, specifically about the goals an agent decides to commit to. Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision, and we use argumentation semantics to determine the acceptable arguments (reasons). We propose two types of explanations: a partial one and a complete one. We apply our proposal to a rescue robot scenario.
Abstract: During the first step of practical reasoning, i.e. deliberation, an intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieve. An intelligent agent may in general generate multiple pursuable goals, which may be incompatible with one another. In this paper, we focus on the definition, identification and resolution of these incompatibilities. The suggested approach considers the three forms of incompatibility introduced by Castelfranchi and Paglieri, namely the terminal incompatibility, the instrumental (or resource) incompatibility and the superfluity. We characterise these forms of incompatibility computationally by means of arguments that represent the plans that allow an agent to achieve its goals. Thus, incompatibility among goals is defined in terms of the conflicts among their plans, which are represented by means of attacks in an argumentation framework. We also address the problem of goal selection, and propose to deal with it using abstract argumentation theory, i.e. by applying argumentation semantics. We use a modified version of the "cleaner world" scenario in order to illustrate the performance of our proposal.
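Several of the abstracts above resolve conflicts by applying Dung-style argumentation semantics to a framework of arguments and attacks. As background, here is a standard computation of the grounded extension as the least fixed point of the characteristic function; it is framework-generic and is not a reconstruction of any specific paper's selection mechanism.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework
    (arguments, attacks), computed as the least fixed point of the
    characteristic function F(S) = {a : S defends a}.

    `attacks` is a set of (attacker, attacked) pairs.
    """
    def defended(s):
        # a is defended by s when every attacker of a is attacked by some member of s
        return {a for a in arguments
                if all(any((c, b) in attacks for c in s)
                       for b in arguments if (b, a) in attacks)}

    current = set()
    while True:
        nxt = defended(current)
        if nxt == current:      # fixed point reached
            return current
        current = nxt
```

On the chain a attacks b, b attacks c, the grounded extension accepts a (unattacked) and c (defended by a), rejecting b.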