Abstract: We investigate the behaviour and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions containing demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to give correct answers to such closed-ended questions. However, qualitative insights gathered from domain experts show that concerns remain regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation to human expertise rather than an autonomous solution.
Abstract: Given the many technical and ethical issues with artificial intelligence (AI) that involve marginalized communities, there is growing interest in design methods used with marginalized people that may be transferable to the design of AI technologies. Participatory design (PD) is a design method often used with marginalized communities to design solutions for social development, policy, IT, and other matters. However, current PD has shortcomings that raise concerns when it is applied to the design of technologies, including AI technologies. This paper argues for the use of PD in the design of AI technologies and proposes a new form of PD, which we call agile participatory design, that not only can be used for the design of AI and data-driven technologies, but also overcomes the issues surrounding current PD and its use in the design of such technologies.
Abstract: Tracking sexual violence is a challenging task. In this paper, we present a supervised learning-based automated sexual violence report tracking model that is more scalable and reliable than its crowdsource-based counterparts. We define the sexual violence report tracking problem by considering victim and perpetrator contexts and the nature of the violence. We find that our model can identify sexual violence reports with a precision and recall of 80.4% and 83.4%, respectively. Moreover, we applied the model during and after the #MeToo movement and discovered several interesting findings that are not easily identifiable from a shallow analysis.