Abstract: Our information and communication environment has fallen short of the ideals that networked global communication might have served. Identifying all the causes of its pathologies is difficult, but existing recommender systems very likely play a contributing role. In this paper, which draws on the normative tools of philosophy of computing, informed by empirical and technical insights from natural language processing and recommender systems, we make the moral case for an alternative approach. We argue that existing recommenders incentivise mass surveillance, concentrate power, fall prey to narrow behaviourism, and compromise user agency. Rather than trying to avoid algorithms entirely, or making only incremental improvements to the current paradigm, researchers and engineers should explore an alternative paradigm: the use of language model (LM) agents to source and curate content that matches users' preferences and values, expressed in natural language. The use of LM agents for recommendation poses its own challenges, including those related to candidate generation, computational efficiency, preference modelling, and prompt injection. Nonetheless, if implemented successfully, LM agents could guide us through the digital public sphere without relying on mass surveillance; shift power away from platforms towards users; optimise for what matters, not just behavioural proxies; and scaffold our agency instead of undermining it.
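To make the proposed paradigm concrete, here is a minimal sketch of one shape such a recommender could take, not the paper's implementation: the `call_lm` hook, the prompt format, and the JSON ranking protocol are all illustrative assumptions, and candidate generation is assumed to happen upstream. The defensive parsing reflects one of the challenges the abstract names: candidate text is an obvious vector for prompt injection, so the LM's output must be treated as untrusted.

```python
"""Illustrative sketch of an LM-agent curation step (assumptions labelled below)."""
import json
from typing import Callable, List


def rank_with_lm(
    call_lm: Callable[[str], str],   # hypothetical hook wrapping any chat-LM endpoint
    preference_profile: str,          # user's preferences and values, in natural language
    candidates: List[str],            # pre-fetched candidate items (titles or snippets)
    k: int = 5,
) -> List[str]:
    """Ask the LM to curate candidates against the user's stated preferences."""
    numbered = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
    prompt = (
        "You are a content curator. The user's stated preferences and values:\n"
        f"{preference_profile}\n\n"
        f"Candidate items:\n{numbered}\n\n"
        f"Return a JSON list of the indices of the {k} items that best match "
        "these preferences, best first. Return only the JSON list."
    )
    # The reply is untrusted model output (candidate text may attempt prompt
    # injection), so parse defensively and keep only valid in-range indices.
    try:
        parsed = json.loads(call_lm(prompt))
    except (json.JSONDecodeError, TypeError):
        parsed = []
    if not isinstance(parsed, list):
        parsed = []
    indices = [i for i in parsed if isinstance(i, int) and 0 <= i < len(candidates)]
    return [candidates[i] for i in indices[:k]]
```

Note that because the preference profile is a plain-language string supplied by the user, this step needs no behavioural surveillance data, which is the structural contrast the abstract draws with engagement-optimising recommenders.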
Abstract: Some have criticised generative AI systems for replicating the familiar pathologies of already widely deployed AI systems. Other critics highlight how they foreshadow vastly more powerful future systems, which might threaten humanity's survival. The first group says there is nothing new here; the other looks through the present to a perhaps distant horizon. In this paper, I instead pay attention to what makes these particular systems distinctive: both their remarkable scientific achievement and the most likely and consequential ways in which they will change society over the next five to ten years. In particular, I explore the potential societal impacts and normative questions raised by the looming prospect of 'Generative Agents', in which multimodal large language models (LLMs) form the executive centre of complex, tool-using AI systems that can take unsupervised sequences of actions towards some goal.
Abstract: As rapid advances in Artificial Intelligence and the rise of some of history's most potent corporations meet the diminished neoliberal state, people are increasingly subject to power exercised by means of automated systems. Machine learning and related computational technologies now underpin vital government services. They connect consumers and producers in new algorithmic markets. They determine how we find out about everything from how to vote to where to get vaccinated, and whose speech is amplified, reduced, or restricted. And a new wave of products based on Large Language Models (LLMs) will further transform our economic and political lives. Automatic Authorities are automated computational systems used to exercise power over us by determining what we may know, what we may have, and what our options will be. In response to their rise, scholars working on the societal impacts of AI and related technologies have advocated shifting attention from how to make AI systems beneficial or fair towards a critical analysis of these new power relations. But power is everywhere, and is not necessarily bad. On what basis should we object to new or intensified power relations, and what can be done to justify them? This paper introduces the philosophical materials with which to formulate these questions, and offers preliminary answers. It starts by pinning down the concept of power, focusing on the ability that some agents have to shape others' lives. It then explores how AI enables and intensifies the exercise of power so understood, and sketches three problems with power and three ways to solve those problems. It emphasises, in particular, that justifying power requires more than satisfying substantive justificatory criteria; standards of proper authority and procedural legitimacy must also be met. We need to know not only what power may be used for, but how it may be used, and by whom.
Abstract: Foundation models are powerful technologies: how they are released to the public directly shapes their societal impact. In this position paper, we focus on open foundation models, defined here as those with broadly available model weights (e.g. Llama 2, Stable Diffusion XL). We identify five distinctive properties (e.g. greater customizability, poor monitoring) of open foundation models that lead to both their benefits and risks. Open foundation models present significant benefits, with some caveats, that span innovation, competition, the distribution of decision-making power, and transparency. To understand their risks of misuse, we design a risk assessment framework for analyzing their marginal risk. Across several misuse vectors (e.g. cyberattacks, bioweapons), we find that current research is insufficient to effectively characterize the marginal risk of open foundation models relative to pre-existing technologies. The framework helps explain why the marginal risk is low in some cases, clarifies disagreements about misuse risks by revealing that past work has focused on different subsets of the framework with different assumptions, and articulates a way forward for more constructive debate. Overall, our work helps support a more grounded assessment of the societal impact of open foundation models by outlining what research is needed to empirically validate their theoretical benefits and risks.