Abstract: A consensus report produced for the Air Force Research Laboratory by the National Academies of Sciences, Engineering, and Medicine documented a prevalent and increasing desire to support human-Artificial Intelligence (AI) teaming across military service branches. Sonalysts has begun an internal initiative to explore the training of human-AI teams. The first step in this effort is to develop a Synthetic Task Environment (STE) capable of facilitating research on human-AI teams. We chose Joint All-Domain Command and Control (JADC2) as the focus for developing the STE because the volume of sensor inputs and decision options within the JADC2 concept likely requires AI systems to enable timely decisions. Given this focus, we engaged a number of Subject-Matter Experts (SMEs) with Command and Control experience to gain insight into developing a STE that embodied the teaming challenges associated with JADC2. This report documents our initial engagement with those stakeholders. The research team identified thirteen Sonalysts employees with military backgrounds and Command and Control experience and invited them to participate in a survey; twelve completed it. The team then analyzed the responses to identify emergent themes and topics that would benefit from further analysis. The results indicated that our SMEs were amenable to research using tasks analogous to those encountered in military environments, as long as those tasks required teams to process a great deal of incoming data to arrive at complex decisions. The SMEs felt that the testbed should support "teams of teams" representing a matrixed organization, and that it should support a robust array of spoken, text-based, and face-to-face communications.
Abstract: A consensus report produced for the Air Force Research Laboratory (AFRL) by the National Academies of Sciences, Engineering, and Medicine documented a prevalent and increasing desire to support human-Artificial Intelligence (AI) teaming across military service branches. Sonalysts has begun an internal initiative to explore the training of human-AI teams. The first step in this effort is to develop a Synthetic Task Environment (STE) capable of facilitating research on human-AI teams. Our goal is to create a STE that offers a task environment able to support the breadth of research that stakeholders plan to perform within this domain. As a result, we wanted to sample the priorities of the relevant research community broadly, and the effort documented in this report is our initial attempt to do so. We created a survey that featured two types of questions. The first asked respondents to report their agreement with STE features that we anticipated might be important. The second consisted of open-ended questions that asked respondents to specify their priorities within several dimensions of the anticipated STE. The research team invited nineteen researchers from academic and Government labs to participate, and eleven were able to complete the survey. The team analyzed their responses to identify emergent themes and topics that would benefit from further analysis. The most significant finding of the survey was that a number of researchers felt that open-source STEs meeting our needs already exist. Researchers also emphasized the need for automated transcription and coding tools to ease the burden of assessing inter-team communications; the importance of robust data capture and export capabilities; and the desirability of extensive flexibility across many aspects of the tool.