Abstract: Artificial intelligence (AI) and machine learning (ML) have become increasingly vital in the development of novel defense and intelligence capabilities across all domains of warfare. Adversarial AI (A2I) and adversarial ML (AML) attacks seek to deceive and manipulate AI/ML models. It is imperative that AI/ML models can defend against these attacks. A2I/AML defenses will help provide the necessary assurance for advanced capabilities that rely on AI/ML models. The A2I Working Group (A2IWG) seeks to advance the research and development of assured AI/ML capabilities via new A2I/AML defenses by fostering a collaborative environment across the U.S. Department of Defense and U.S. Intelligence Community. The A2IWG aims to identify specific challenges that it can help solve or address more directly, with an initial focus on three topics: AI Trusted Robustness, AI System Security, and AI/ML Architecture Vulnerabilities.
Abstract: There has been considerable and growing interest in applying machine learning to cyber defenses. One promising approach has been to apply natural language processing techniques to analyze log data for suspicious behavior. A natural question is how robust these systems are to adversarial attacks. Defense against sophisticated attacks is of particular concern for cyber defenses. In this paper, we develop a testing framework to evaluate the adversarial robustness of machine learning cyber defenses, particularly those focused on log data. Our framework uses techniques from deep reinforcement learning and adversarial natural language processing. We validate our framework using a publicly available dataset and demonstrate that our adversarial attack does succeed against the target systems, revealing a potential vulnerability. We apply our framework to analyze the influence of different levels of dropout regularization and find that higher dropout levels increase robustness. Moreover, a 90% dropout probability exhibited the highest robustness by a significant margin, suggesting that unusually high dropout may be necessary to properly protect against adversarial attacks.
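The second abstract's two technical ingredients, token-level adversarial perturbation of log data and a classifier with configurable dropout, can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation: the paper crafts perturbations with deep reinforcement learning, whereas this sketch substitutes a simple greedy token-substitution search to show the attack idea; all names (LogClassifier, greedy_token_attack) and parameters are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative constants; the real framework operates on tokenized log lines.
VOCAB_SIZE = 500
SEQ_LEN = 16

class LogClassifier(nn.Module):
    """Toy log-line classifier: embedding -> mean pool -> dropout -> linear."""
    def __init__(self, dropout_p=0.9):  # 0.9 mirrors the abstract's most robust setting
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 64)
        self.dropout = nn.Dropout(dropout_p)
        self.fc = nn.Linear(64, 2)  # benign vs. suspicious

    def forward(self, tokens):
        x = self.embed(tokens).mean(dim=1)      # (batch, 64)
        return self.fc(self.dropout(x))          # (batch, 2) logits

def greedy_token_attack(model, tokens, true_label, candidates, max_swaps=3):
    """Greedy stand-in for the paper's RL-based attack: at each step, try
    substituting every position with every candidate token and keep the swap
    that most reduces the model's confidence in the true label."""
    model.eval()  # dropout is a training-time regularizer; inference is deterministic
    tokens = tokens.clone()
    for _ in range(max_swaps):
        best_drop, best_pos, best_tok = 0.0, None, None
        with torch.no_grad():
            base = torch.softmax(model(tokens.unsqueeze(0)), dim=1)[0, true_label]
            for pos in range(tokens.size(0)):
                original = tokens[pos].item()
                for cand in candidates:
                    tokens[pos] = cand
                    p = torch.softmax(model(tokens.unsqueeze(0)), dim=1)[0, true_label]
                    if base - p > best_drop:
                        best_drop, best_pos, best_tok = base - p, pos, cand
                tokens[pos] = original  # restore before probing the next position
        if best_pos is None:
            break  # no single swap lowers confidence further
        tokens[best_pos] = best_tok
    return tokens

if __name__ == "__main__":
    model = LogClassifier(dropout_p=0.9)
    log_line = torch.randint(0, VOCAB_SIZE, (SEQ_LEN,))
    adversarial = greedy_token_attack(model, log_line, true_label=1,
                                      candidates=range(50))
    print("changed positions:",
          (adversarial != log_line).nonzero().flatten().tolist())
```

Training copies of this model at different dropout_p values (say 0.1 versus 0.9) and counting how many token swaps the attack needs to flip each model's prediction would reproduce, in miniature, the robustness comparison the abstract describes.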