Abstract: This paper presents a community-centered study of the cultural limitations of text-to-image (T2I) models in the South Asian context. We theorize these failures using scholarship on dominant media regimes of representation and locate them within participants' reports of their existing social marginalizations. We thus show how generative AI can reproduce an outsider's gaze on South Asian cultures, shaped by global and regional power inequities. By centering communities as experts and soliciting their perspectives on T2I limitations, our study adds rich nuance to existing evaluative frameworks and deepens our understanding of the culturally specific ways AI technologies can fail in non-Western and Global South settings. We distill lessons for the responsible development of T2I models, recommending concrete pathways forward that allow for recognition of structural inequalities.
Abstract: Work integrating conversations around AI and Disability is vital and valued, particularly when done through a lens of fairness. Yet analyzing the ethical implications of AI for disabled people solely through a singular idea of "fairness" risks reinforcing existing power dynamics, either through reinforcing the position of existing medical gatekeepers, or through promoting tools and techniques that benefit otherwise-privileged disabled people while harming those who are rendered outliers in multiple ways. In this paper we present two case studies from within computer vision - a subdiscipline of AI focused on training algorithms that can "see" - of technologies putatively intended to help disabled people but that, through failures to consider structural injustices in their design, are likely to result in harms not addressed by a "fairness" framing of ethics. Drawing on disability studies and critical data science, we call on researchers in AI ethics and disability to move beyond simplistic notions of fairness and towards notions of justice.