Abstract: Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, mounting evidence shows that AI is also used for offensive purposes. Prior works have revealed various use cases in which the deployment of AI can lead to violations of security and privacy objectives. No extant work, however, has drawn a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the groundwork for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular, we (i) account for AI risks to both humans and systems, while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, as well as laypeople -- all of which are valuable sources of information on offensive AI. To enable the alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of these criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses to a user study (N=549) involving individuals with diverse backgrounds and expertise; and the opinions of 12 experts. Our contributions not only reveal concerning ways (some of which were overlooked by prior work) in which AI can be offensively used today, but also represent a foothold for addressing this threat in the years to come.
Abstract: Throughout their history, Homo sapiens have used technologies to better satisfy their needs. The relationship between needs and technology is so fundamental that the US National Research Council defined the distinguishing characteristic of technology as its goal "to make modifications in the world to meet human needs". Artificial intelligence (AI) is one of the most promising emerging technologies of our time. Like other technologies, AI is expected "to meet [human] needs". In this article, we reflect on the relationship between needs and AI, and call for the realisation of needs-aware AI systems. We argue that re-thinking needs for, through, and by AI can be a very useful means towards the development of realistic approaches for Sustainable, Human-centric, Accountable, Lawful, and Ethical (HALE) AI systems. We discuss some of the most critical gaps, barriers, enablers, and drivers of co-creating future AI-based socio-technical systems in which [human] needs are well considered and met. Finally, we provide an overview of potential threats and HALE considerations that should be carefully taken into account, and we call for joint, immediate, and interdisciplinary efforts and collaborations.
Abstract: Many boundaries shape, and will continue to shape, the future of Artificial Intelligence (AI). We push on these boundaries in order to make progress, but they are both pliable and resilient -- always creating new boundaries of what AI can (or should) achieve. Among these are technical boundaries (such as processing capacity), psychological boundaries (such as human trust in AI systems), ethical boundaries (such as with AI weapons), and conceptual boundaries (such as the AI people can imagine). It is within this final category (while it can play a fundamental role in all other boundaries) that we find the construct of needs and the limitations that our current concept of need places on the future of AI.