Abstract: In recent decades, the rate of poverty reduction has slowed, suggesting that traditional redistributive approaches to poverty mitigation could be losing effectiveness and that alternative insights are needed to advance the first UN Sustainable Development Goal. The criminalization of poor people has been denounced by several NGOs, and an increasing number of voices suggest that discrimination against the poor (a phenomenon known as \emph{aporophobia}) could be an impediment to mitigating poverty. In this paper, we present the novel Aporophobia Agent-Based Model (AABM) to provide computational evidence of the correlation between aporophobia and poverty. We present our use case, built with real-world demographic data and poverty-mitigation public policies (either enforced or under parliamentary discussion) for the city of Barcelona. With the support of specialized NGOs, we classify policies as discriminatory or non-discriminatory against the poor, and we observe their impact on wealth inequality in the AABM. The simulation provides evidence of the relationship between aporophobia and increased wealth inequality, paving the way for a new generation of poverty-reduction policies that act on discrimination and tackle poverty as a societal problem (not only a problem of the poor).
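As a rough illustration of the kind of mechanism such a model studies (not the AABM's actual design: the agent rules, the policy encoding, and all parameters below are invented assumptions), the following minimal Python sketch simulates agents earning noisy incomes under two hypothetical policy regimes, one that fines agents below a poverty line and one that grants them a benefit, and compares the resulting Gini coefficients:

\begin{verbatim}
# Toy sketch only: agent rules, policy encoding and parameters are
# illustrative assumptions, not the AABM's actual design.
import random

def gini(wealth):
    """Gini coefficient of a wealth distribution (0 = perfect equality)."""
    w = sorted(wealth)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

def step(wealth, discriminatory, poverty_line=10.0, amount=1.0):
    """One round: all agents earn a noisy income; below the poverty line,
    a hypothetical discriminatory policy levies a fine (criminalizing
    poverty), while a non-discriminatory one grants a benefit instead."""
    for i in range(len(wealth)):
        wealth[i] += random.gauss(1.0, 0.5)   # baseline income shock
        if wealth[i] < poverty_line:
            wealth[i] += -amount if discriminatory else amount
        wealth[i] = max(wealth[i], 0.0)       # wealth floor at zero

def run(discriminatory, n_agents=1000, rounds=200, seed=42):
    random.seed(seed)
    wealth = [random.uniform(5.0, 50.0) for _ in range(n_agents)]
    for _ in range(rounds):
        step(wealth, discriminatory)
    return gini(wealth)

print("Gini, non-discriminatory policy:", round(run(False), 3))
print("Gini, discriminatory policy:   ", round(run(True), 3))
\end{verbatim}

In this toy, the discriminatory configuration keeps agents trapped below the poverty line and yields a higher final Gini coefficient, mirroring the direction of the effect the abstract reports.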
Abstract: Machine Ethics (ME) is concerned with the design of Artificial Moral Agents (AMAs), i.e., autonomous agents capable of reasoning and behaving according to moral values. Previous approaches have treated values as labels attached to some actions or states of the world, rather than as integral components of agent reasoning. It is also common to disregard that a value-guided agent operates alongside other value-guided agents in an environment governed by norms, thus omitting the social dimension of AMAs. In this blue-sky paper, we propose a new AMA paradigm grounded in moral and social psychology, where values are instilled into agents as context-dependent goals. These goals connect values at the individual level to norms at the collective level by evaluating the outcomes most incentivized by the norms in place. We argue that this type of normative reasoning, where agents are endowed with an understanding of norms' moral implications, leads to value-awareness in autonomous agents. Additionally, this capability paves the way for agents to align the norms enforced in their societies with the human values instilled in them, by complementing value-based reasoning on norms with agreement mechanisms through which agents collectively settle on the set of norms that best suits their values. Overall, our agent model goes beyond the treatment of values as inert labels by connecting them to normative reasoning and to the social functionalities needed to integrate value-aware agents into our modern hybrid human-computer societies.
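Under heavy assumptions (the Norm structure, value weights, and scoring functions below are invented for illustration and do not come from the paper), this Python toy sketches the core loop the abstract describes: values act as scoring functions over the outcomes a norm incentivizes, and a simple agreement mechanism selects the norm that best suits the agents' values collectively.

\begin{verbatim}
# Illustrative sketch only: all structures here (Norm, value weights,
# outcomes) are hypothetical stand-ins for the paper's richer paradigm.
from dataclasses import dataclass
from typing import Dict, List

Outcome = Dict[str, float]  # e.g. {"equality": 0.7, "freedom": 0.4}

@dataclass
class Norm:
    name: str
    incentivized_outcomes: List[Outcome]  # outcomes the norm most rewards

def value_alignment(outcome: Outcome, values: Dict[str, float]) -> float:
    """Score an outcome by how well it promotes the agent's weighted values."""
    return sum(w * outcome.get(v, 0.0) for v, w in values.items())

def evaluate_norm(norm: Norm, values: Dict[str, float]) -> float:
    """A value-aware agent judges a norm by the outcomes it incentivizes."""
    scores = [value_alignment(o, values) for o in norm.incentivized_outcomes]
    return sum(scores) / len(scores)

def agree_on_norms(norms: List[Norm], agents: List[Dict[str, float]]) -> Norm:
    """Toy agreement mechanism: pick the norm with the highest total score
    across all agents' value systems (a stand-in for richer mechanisms)."""
    return max(norms, key=lambda n: sum(evaluate_norm(n, a) for a in agents))

norms = [
    Norm("progressive_tax", [{"equality": 0.9, "freedom": 0.5}]),
    Norm("flat_tax", [{"equality": 0.4, "freedom": 0.8}]),
]
agents = [{"equality": 0.7, "freedom": 0.3}, {"equality": 0.5, "freedom": 0.5}]
print("Agreed norm:", agree_on_norms(norms, agents).name)
\end{verbatim}

The design choice to score norms through the outcomes they incentivize, rather than labeling the norms themselves, is what makes the toy agent "value-aware" in the sense the abstract describes.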
Abstract: The Institutional Analysis and Development (IAD) framework is a conceptual toolbox put forward by Elinor Ostrom and colleagues in an effort to identify and delineate the universal common variables that structure the immense variety of human interactions. The framework identifies rules as one of the core concepts determining the structure of interactions, and acknowledges their potential to steer a community towards more beneficial and socially desirable outcomes. This work presents the first attempt to turn the IAD framework into a computational model that allows communities of agents to formally perform what-if analysis on a given rule configuration. To do so, we define the Action Situation Language -- or ASL -- whose syntax is highly tailored to the components of the IAD framework and which we use to write descriptions of social interactions. ASL is complemented by a game engine that generates its semantics as an extensive-form game. These game models can then be analyzed with the standard tools of game theory to predict which outcomes are most incentivized, and evaluated according to their socially relevant properties.
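As a hedged approximation of the pipeline the abstract describes (ASL's actual syntax and game engine are not reproduced here; the game structure, actions, and payoffs are invented), the Python sketch below builds a tiny extensive-form game from a rule configuration and solves it by backward induction to see which outcome the rules incentivize:

\begin{verbatim}
# Hedged sketch of an ASL-style what-if analysis: a rule configuration
# (here, a fine for hoarding) generates an extensive-form game, which is
# solved to predict the incentivized outcome. Structures are illustrative.
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    player: Optional[int] = None              # None marks a terminal node
    payoffs: Tuple[float, ...] = ()
    children: Dict[str, "Node"] = field(default_factory=dict)

def backward_induction(node: Node) -> Tuple[float, ...]:
    """Solve the game by backward induction; returns equilibrium payoffs."""
    if node.player is None:
        return node.payoffs
    best = max(node.children.values(),
               key=lambda c: backward_induction(c)[node.player])
    return backward_induction(best)

def build_game(fine: float) -> Node:
    """Hypothetical action situation: agent 0 chooses to share or hoard a
    resource; the rule configuration sets the fine for hoarding."""
    return Node(player=0, children={
        "share": Node(payoffs=(2.0, 2.0)),
        "hoard": Node(payoffs=(4.0 - fine, 0.0)),
    })

# What-if analysis over two rule configurations.
for fine in (0.0, 3.0):
    print(f"fine={fine}: equilibrium payoffs",
          backward_induction(build_game(fine)))
\end{verbatim}

With no fine, hoarding is the incentivized outcome; raising the fine to 3.0 flips the equilibrium to sharing, illustrating on a toy scale how a community could compare rule configurations by their socially relevant properties.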