Abstract:Understanding the emergence of prosocial behaviours among self-interested individuals is an important problem in many scientific disciplines. Various mechanisms have been proposed to explain the evolution of such behaviours, primarily seeking the conditions under which a given mechanism can induce the highest levels of cooperation. As these mechanisms usually involve costs that alter individual payoffs, it is however possible that aiming for the highest levels of cooperation might be detrimental to social welfare -- the latter broadly defined as the total population payoff, taking into account all costs involved in inducing increased prosocial behaviours. Herein, by comparatively analysing the social welfare and cooperation levels obtained from stochastic evolutionary models of two well-established mechanisms of prosocial behaviour, namely peer and institutional incentives, we demonstrate exactly that. We show that the objective of maximising cooperation levels and the objective of maximising social welfare are often misaligned. We argue for the need to adopt social welfare as the main optimisation objective when designing and implementing evolutionary mechanisms for social and collective goods.
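A minimal sketch of how the two objectives can be compared, for the peer-incentive (peer punishment) case only and under assumed, illustrative parameters (population size Z, benefit b, cost c, selection strength beta, mutation rate mu, fine delta, punishment cost gamma); this is not the paper's exact model. It computes, from the stationary distribution of a pairwise-comparison (Fermi) birth-death process, both the cooperation level and the social welfare, the latter defined as the total population payoff net of all punishment-related losses, so that one can check, for each incentive strength, where each quantity is maximised.

```python
# Sketch (illustrative parameters, not the paper's model): two strategies,
# punishing cooperator (C) and defector (D), play a one-shot donation game
# (benefit b, cost c) in a finite well-mixed population.  Each C pays gamma
# to fine each D it meets, and each D loses delta per punisher.  Evolution
# follows pairwise-comparison (Fermi) dynamics with mutation; the stationary
# distribution of the resulting birth-death chain gives both the cooperation
# level and the social welfare (punishment costs and fines are already
# included in the individual payoffs).
import numpy as np

Z, b, c, beta, mu = 50, 2.0, 1.0, 1.0, 0.05

def payoffs(k, gamma, delta):
    """Average payoffs of a C and a D player when k out of Z cooperate."""
    pi_C = ((k - 1) * (b - c) + (Z - k) * (-c - gamma)) / (Z - 1)
    pi_D = (k * (b - delta)) / (Z - 1)
    return pi_C, pi_D

def stationary(gamma, delta):
    """Stationary distribution over k = 0..Z cooperators."""
    def T(k, up):
        pi_C, pi_D = payoffs(k, gamma, delta)
        if up:    # one more cooperator: a D imitates a C, or a D mutates
            imit = (k / Z) * ((Z - k) / Z) / (1 + np.exp(-beta * (pi_C - pi_D)))
            return (1 - mu) * imit + mu * (Z - k) / Z
        else:     # one fewer cooperator
            imit = (k / Z) * ((Z - k) / Z) / (1 + np.exp(-beta * (pi_D - pi_C)))
            return (1 - mu) * imit + mu * k / Z
    w = np.ones(Z + 1)
    for k in range(1, Z + 1):
        w[k] = w[k - 1] * T(k - 1, True) / T(k, False)
    return w / w.sum()

def coop_and_welfare(gamma, delta):
    s = stationary(gamma, delta)
    ks = np.arange(Z + 1)
    coop = np.dot(s, ks) / Z
    welfare = sum(s[k] * (k * payoffs(k, gamma, delta)[0] +
                          (Z - k) * payoffs(k, gamma, delta)[1]) for k in ks)
    return coop, welfare

for delta in [0.0, 1.0, 2.0, 4.0, 8.0]:
    coop, welfare = coop_and_welfare(gamma=delta / 2, delta=delta)
    print(f"fine={delta:4.1f}  cooperation={coop:.3f}  welfare={welfare:8.2f}")
```

Sweeping the fine (with the punishment cost tied to it here purely for illustration) makes it easy to inspect whether the incentive strength that maximises cooperation is also the one that maximises welfare.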
Abstract:There is general agreement that some form of regulation is necessary both for AI creators to be incentivised to develop trustworthy systems, and for users to actually trust those systems. But there is much debate about what form these regulations should take and how they should be implemented. Most work in this area has been qualitative, and has not been able to make formal predictions. Here, we propose that evolutionary game theory can be used to quantitatively model the dilemmas faced by users, AI creators, and regulators, and to provide insights into the possible effects of different regulatory regimes. We show that creating trustworthy AI and eliciting user trust requires regulators to be incentivised to regulate effectively. We demonstrate the effectiveness of two mechanisms that can achieve this. The first is for governments to recognise and reward regulators that do a good job. In that case, if the AI system is not too risky for users, then some level of trustworthy development and user trust evolves. We then consider an alternative solution, where users can condition their trust decisions on the effectiveness of the regulators. This leads to effective regulation, and consequently to the development of trustworthy AI and user trust, provided that the cost of implementing regulations is not too high. Our findings highlight the importance of considering the effects of different regulatory regimes from an evolutionary game-theoretic perspective.
Abstract:This brief discusses evolutionary game theory as a powerful and unified mathematical tool to study the evolution of collective behaviours. It summarises some of my recent research directions using evolutionary game theory methods, which include i) the analysis of statistical properties of the number of (stable) equilibria in a random evolutionary game, and ii) the modelling of the evolution of safety behaviours and of the risks posed by advanced Artificial Intelligence technologies in a technology development race. Finally, it includes an outlook and some suggestions for future researchers.
Abstract:Joint commitment was argued to "make our social world" (Gilbert, 2014) and to separate us from other primates. 'Joint' entails that neither of us promises anything unless the other promises as well. When we need to coordinate for the best mutual outcome, any commitment is beneficial. However, when we are tempted to free-ride (i.e. in social dilemmas), commitment serves no obvious purpose. We show that a reputation system, which judges action in social dilemmas only after a joint commitment has been made, can prevent free-riding. Keeping commitments builds trust. We can selectively enter joint commitments with trustworthy individuals to ensure their cooperation (since they will now be judged). We simply do not commit to cooperate with those we do not trust, and hence can freely defect without losing the trust of others. This principle might be the reason for pointedly public joint commitments, such as marriage. It is especially relevant to our evolutionary past, in which no mechanisms existed to enforce commitments reliably and impartially (e.g. via a powerful and accountable government). Much research in anthropology, philosophy and psychology has assumed that past collaborations were mutually beneficial and offered little opportunity to free-ride, an assumption for which there is little support. Our evolutionary game theory approach shows that this assumption is not necessary, because free-riding could have been dealt with through joint commitments and reputation.
Abstract:As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence shapes our behaviour, decision-making, and social interactions. Existing theoretical research has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. In this paper, resorting to methods from evolutionary game theory, we study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game in both well-mixed and structured populations. We find that Samaritan AI agents, which help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI agents, which only help those considered worthy or cooperative, especially in slow-moving societies where change is viewed with caution or resistance (small intensities of selection). More intuitively, in fast-moving societies (high intensities of selection), Discriminatory AIs promote higher levels of cooperation than Samaritan AIs.
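As a rough sketch of the kind of calculation involved (the parameter values and the way AI agents are embedded are assumptions for illustration, not the paper's setup, and the toy model is not expected to reproduce the paper's findings): Z humans in a well-mixed population play the one-shot donation game with each other and with A AI agents, where a Samaritan AI donates to everyone and a Discriminatory AI donates only to human cooperators; human strategies evolve by the Fermi rule, and in the small-mutation limit the cooperation level follows from the two fixation probabilities, which can then be compared across intensities of selection beta.

```python
# Sketch with assumed parameters: humans play the donation game (benefit b,
# cost c) with the other Z-1 humans and with A AI agents.  A "Samaritan" AI
# donates to everyone; a "Discriminatory" AI donates only to human
# cooperators.  Cooperation level = time spent in the all-C state under
# pairwise-comparison (Fermi) dynamics in the small-mutation limit.
import numpy as np

Z, A, b, c = 50, 10, 3.0, 1.0      # humans, AI agents, benefit, cost

def payoffs(k, samaritan):
    """Average payoffs of a human C and a human D when k humans cooperate."""
    n = Z - 1 + A                                  # co-players per human
    pi_C = ((k - 1) * (b - c) + (Z - k) * (-c) + A * (b - c)) / n
    ai_to_D = A * b if samaritan else 0.0          # Samaritan also helps defectors
    pi_D = (k * b + ai_to_D) / n
    return pi_C, pi_D

def fixation(invader_is_C, samaritan, beta):
    """Fixation probability of a single mutant under the Fermi rule."""
    total, prod = 1.0, 1.0
    for i in range(1, Z):
        k = i if invader_is_C else Z - i           # number of cooperators
        pi_C, pi_D = payoffs(k, samaritan)
        diff = (pi_C - pi_D) if invader_is_C else (pi_D - pi_C)
        prod *= np.exp(-beta * diff)
        total += prod
    return 1.0 / total

def cooperation_level(samaritan, beta):
    rho_C = fixation(True, samaritan, beta)        # C invading all-D
    rho_D = fixation(False, samaritan, beta)       # D invading all-C
    return rho_C / (rho_C + rho_D)

for beta in [0.005, 0.05, 0.5, 5.0]:               # slow- to fast-moving societies
    sam = cooperation_level(samaritan=True, beta=beta)
    dis = cooperation_level(samaritan=False, beta=beta)
    print(f"beta={beta:6.3f}  Samaritan AI: {sam:.3f}   Discriminatory AI: {dis:.3f}")
```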
Abstract:In the context of rapid discoveries by leaders in AI, governments must consider how to design regulation that matches the increasing pace of new AI capabilities. Regulatory Markets for AI is a proposal designed with adaptability in mind. It involves governments setting outcome-based targets for AI companies to achieve, which they can demonstrate by purchasing services from a market of private regulators. We use an evolutionary game theory model to explore the role governments can play in building a Regulatory Market for AI systems that deters reckless behaviour. We warn that it is alarmingly easy to stumble upon incentives which would prevent Regulatory Markets from achieving this goal. These 'Bounty Incentives' reward private regulators only for catching unsafe behaviour. We argue that AI companies will likely learn to tailor their behaviour to how much effort regulators invest, discouraging regulators from innovating. Instead, we recommend that governments always reward regulators, except when they find that those regulators failed to detect unsafe behaviour that they should have caught. These 'Vigilant Incentives' could encourage private regulators to find innovative ways to evaluate cutting-edge AI systems.
Abstract:Building ethical machines may involve bestowing upon them the emotional capacity to self-evaluate and repent of their actions. While reparative measures, such as apologies, are often considered as possible strategic interactions, the explicit evolution of the emotion of guilt as a behavioural phenotype is not yet well understood. Here, we study the co-evolution of social and non-social guilt in homogeneous and heterogeneous populations, including well-mixed, lattice and scale-free networks. Socially aware guilt comes at a cost, as it requires agents to make demanding efforts to observe and understand the internal state and behaviour of others, while non-social guilt only requires awareness of the agents' own state and hence incurs no social cost. Those choosing to be non-social are, however, more vulnerable to exploitation by other agents due to their social unawareness. Resorting to methods from evolutionary game theory, we study analytically, and through extensive numerical and agent-based simulations, whether and how such social and non-social guilt can evolve and be deployed, depending on the underlying structure of the populations, or systems, of agents. The results show that, in both lattice and scale-free networks, guilt-prone strategies are dominant for a larger range of the guilt and social costs incurred than in the well-mixed population setting, therefore leading to significantly higher levels of cooperation for a wider range of the costs. In structured population settings, both social and non-social guilt can evolve and spread through clustering of emotionally prone strategies, which protects them from exploiters, especially in the case of non-social (less costly) strategies. Overall, our findings provide important insights into the design and engineering of self-organised and distributed cooperative multi-agent systems.
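A heavily simplified agent-based sketch of the spatial part of such a study (the strategy space, the guilt mechanism and all parameters below are illustrative assumptions, not the paper's model): agents on a lattice play the one-shot donation game with their neighbours, a guilt-prone strategy behaves like a cooperator but bears an extra emotional/monitoring cost, and strategies spread by asynchronous Fermi imitation, which is the setting in which clustering can shield costly prosocial strategies from exploitation.

```python
# Sketch (illustrative only): agents on an L x L lattice play the donation
# game (benefit b, cost c) with their four neighbours.  Besides plain
# cooperators (C) and defectors (D), a guilt-prone strategy (G) cooperates
# but pays an extra emotional/monitoring cost g per interaction.  Strategies
# spread by asynchronous Fermi imitation.
import numpy as np

rng = np.random.default_rng(0)
L, b, c, g, beta, steps = 30, 3.0, 1.0, 0.2, 1.0, 100_000
C, D, G = 0, 1, 2
grid = rng.integers(0, 3, size=(L, L))         # random initial strategies

def neighbours(i, j):
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(i, j):
    """Accumulated payoff of agent (i, j) against its four neighbours."""
    s, total = grid[i, j], 0.0
    for ni, nj in neighbours(i, j):
        ns = grid[ni, nj]
        gives = s in (C, G)                    # C and G both donate
        receives = ns in (C, G)
        total += (b if receives else 0.0) - (c if gives else 0.0)
        if s == G:
            total -= g                         # guilt/monitoring cost per interaction
    return total

for _ in range(steps):                         # asynchronous Fermi updates
    i, j = rng.integers(L), rng.integers(L)
    ni, nj = neighbours(i, j)[rng.integers(4)]
    p_imitate = 1.0 / (1.0 + np.exp(-beta * (payoff(ni, nj) - payoff(i, j))))
    if rng.random() < p_imitate:
        grid[i, j] = grid[ni, nj]

for name, s in [("C", C), ("D", D), ("G", G)]:
    print(name, np.mean(grid == s))
```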
Abstract:Institutions and investors are constantly faced with the challenge of appropriately distributing endowments. No budget is limitless, and optimising overall spending without sacrificing positive outcomes has been approached and resolved using several heuristics. To date, however, prior work has failed to consider how to encourage fairness in populations where social diversity is ubiquitous and investors can only partially observe the population. Herein, by incorporating social diversity in the Ultimatum game through heterogeneous graphs, we investigate the effects of several interference mechanisms which assume incomplete information and flexible standards of fairness. We quantify the role of diversity and show how it reduces the need for information gathering, allowing us to relax a strict, costly interference process. Furthermore, we find that the influence of certain individuals, expressed through different network centrality measures, can be exploited to further reduce spending if minimal fairness requirements are lowered. Our results indicate that diversity changes the mechanisms available to institutions wishing to promote fairness and opens up novel ones. Overall, our analysis provides novel insights to guide institutional policies in socially diverse complex systems.
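A simplified sketch of one such interference mechanism under assumed parameters (the network model, observation fraction, fairness threshold, investment amount and centrality rule below are illustrative choices, not the paper's exact protocol): agents on a scale-free network play the Ultimatum game with their neighbours, an institution that observes only a fraction of the population invests in the most central observed players whose offers meet a minimal fairness requirement, and strategies then spread by Fermi imitation.

```python
# Sketch (assumed parameters and interference rule): agents on a
# Barabasi-Albert network hold a proposal p and an acceptance threshold q
# and play the Ultimatum game with each neighbour in both roles.  An
# institution observes a random fraction of the population and invests
# theta in the highest-degree observed players whose offers are at least
# the fairness threshold; strategies spread by Fermi imitation.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
N, beta, theta, fair, obs_frac, generations = 500, 1.0, 0.5, 0.4, 0.3, 200
net = nx.barabasi_albert_graph(N, m=2, seed=1)
deg = dict(net.degree())
p = rng.random(N)          # proposals (share offered)
q = rng.random(N)          # acceptance thresholds

def payoffs(invested):
    pay = np.zeros(N)
    for i, j in net.edges():
        # each plays proposer once per edge; a deal happens if offer >= threshold
        if p[i] >= q[j]:
            pay[i] += 1 - p[i]; pay[j] += p[i]
        if p[j] >= q[i]:
            pay[j] += 1 - p[j]; pay[i] += p[j]
    pay[list(invested)] += theta               # institutional endowment
    return pay

for _ in range(generations):
    observed = rng.choice(N, size=int(obs_frac * N), replace=False)
    central = sorted(observed, key=lambda i: deg[i], reverse=True)[: N // 20]
    invested = {i for i in central if p[i] >= fair}   # reward fair, central proposers
    pay = payoffs(invested)
    new_p, new_q = p.copy(), q.copy()
    for i in range(N):                         # Fermi imitation of a random neighbour
        j = rng.choice(list(net.neighbors(i)))
        if rng.random() < 1 / (1 + np.exp(-beta * (pay[j] - pay[i]))):
            new_p[i], new_q[i] = p[j], q[j]
    p, q = new_p, new_q

print("mean offer:", p.mean(), "  mean threshold:", q.mean())
```

Targeting only the observed, most connected nodes is what lets a partially informed institution economise on both information gathering and spending in a heterogeneous population.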
Abstract:The mechanisms of emergence and evolution of collective behaviours in dynamical Multi-Agent Systems (MAS) composed of multiple interacting agents, with diverse behavioural strategies in co-presence, have been undergoing mathematical study via Evolutionary Game Theory (EGT). Their systematic study also resorts to agent-based modelling and simulation (ABM) techniques, thus enabling the study of the aforesaid mechanisms under a variety of conditions, parameters, and alternative virtual games. This paper summarises some main research directions and challenges tackled in our group, using methods from EGT and ABM. These range from the introduction of cognitive and emotional mechanisms into agents' implementation in an evolving MAS, to cost-efficient interference for promoting prosocial behaviours in complex networks, to the regulation and governance of the AI safety development ecology, and to the equilibrium analysis of random evolutionary multi-player games. This brief aims to sensitise the reader to EGT-based issues, results and prospects, which are accruing in importance for the modelling of minds with machines and the engineering of prosocial behaviours in dynamical MAS, with impact on our understanding of the emergence and stability of collective behaviours. In all cases, important open problems in MAS research, as viewed or prioritised by the group, are described.
Abstract:With the introduction of Artificial Intelligence (AI) and related technologies into our daily lives, fear and anxiety about their misuse, as well as about the hidden biases in their creation, have led to a demand for regulation to address such issues. Yet blindly regulating an innovation process that is not well understood may stifle this process and reduce the benefits that society could gain from the generated technology, even under the best intentions. In this paper, starting from a baseline model that captures the fundamental dynamics of a race for domain supremacy using AI technology, we demonstrate how socially unwanted outcomes may be produced when sanctioning is applied unconditionally to risk-taking, i.e. potentially unsafe, behaviours. As an alternative that resolves the detrimental effect of over-regulation, we propose a voluntary commitment approach wherein technologists have the freedom of choice between independently pursuing their course of actions or establishing binding agreements to act safely, with sanctioning of those that do not abide by what they pledged. Overall, this work reveals for the first time how voluntary commitments, with sanctions either by peers or by an institution, lead to socially beneficial outcomes in all scenarios envisageable in a short-term race towards domain supremacy through AI technology. These results are directly relevant for the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.