Abstract: Developers and quality assurance testers often rely on manual testing to evaluate accessibility features throughout the product lifecycle. Unfortunately, manual testing can be tedious, often has an overwhelming scope, and can be difficult to schedule amongst other development milestones. Recently, Large Language Models (LLMs) have been used for a variety of tasks, including UI automation; however, to our knowledge, their use in controlling assistive technologies to support accessibility testing has not yet been explored. In this paper, we examine the requirements of a natural-language-based accessibility testing workflow, starting with a formative study. Building on its findings, we develop a system that takes a manual accessibility test as input (e.g., ``Search for a show in VoiceOver'') and uses an LLM combined with pixel-based UI understanding models to execute the test and produce a chaptered, navigable video. To help QA testers, we apply heuristics to each video to detect and flag accessibility issues (e.g., text size not increasing with Large Text enabled, VoiceOver navigation loops). We evaluate this system through a 10-participant user study with accessibility QA professionals, who indicated that the tool would be very useful in their current work and that it performed tests similarly to how they would manually test the features. The study also reveals insights for future work on using LLMs for accessibility testing.
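To make one of the flagged issues concrete, the sketch below shows one way a heuristic could detect a VoiceOver navigation loop from the sequence of focused elements recorded during an automated test run. This is an illustrative example only, not the paper's implementation; the function name and trace format are assumptions.

```python
# Illustrative sketch (not the paper's implementation): flag a VoiceOver
# navigation loop from the ordered list of focused elements captured after
# each "move to next element" action during a test run.

def detect_navigation_loop(focused_elements, min_cycle_len=2):
    """Return True if the focus sequence repeats the same short cycle of elements."""
    seen_positions = {}
    for index, element in enumerate(focused_elements):
        if element in seen_positions:
            cycle = focused_elements[seen_positions[element]:index]
            # A revisited element followed by the same short cycle suggests
            # swipe navigation is stuck looping instead of reaching new content.
            if len(cycle) >= min_cycle_len and focused_elements[index:index + len(cycle)] == cycle:
                return True
        seen_positions[element] = index
    return False


# Example: focus keeps bouncing between the same two controls.
trace = ["Search field", "Play button", "Share button", "Play button", "Share button"]
print(detect_navigation_loop(trace))  # True
```

A real system would likely combine such traces with screen-content checks, but even this simple cycle test captures the kind of issue a QA tester would otherwise have to spot by replaying the video.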
Abstract: As the role of information and communication technologies gradually increases in our lives, source code security becomes a significant issue in protecting against malicious attempts. Furthermore, with the advent of data-driven techniques, there is now a growing interest in leveraging machine learning and natural language processing as source code assurance methods to build trustworthy systems. Training our future software developers to write secure source code is therefore in high demand. In this thesis, we propose a framework comprising learning modules and hands-on labs to guide future IT professionals toward developing secure programming habits and mitigating source code vulnerabilities at the early stages of the software development lifecycle. Our goal is to design learning modules with a set of hands-on labs that introduce students to secure programming practices, using source code and log file analysis tools to predict and identify vulnerabilities. Within a Secure Coding Education framework, we will (1) improve students' skills in and awareness of source code vulnerability detection tools and mitigation techniques, (2) integrate concepts of source code vulnerabilities from the function, API, and library level to bad programming habits and practices, and (3) leverage deep learning, NLP, and static analysis tools for log file analysis to introduce the root causes of source code vulnerabilities.
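As an illustration of the kind of function- and API-level check a hands-on lab might ask students to build, the sketch below scans C source files for calls to functions commonly associated with buffer-overflow vulnerabilities. It is a minimal assumption-based example, not part of the proposed framework; the script name, rule set, and advice strings are hypothetical.

```python
# Illustrative lab-style static check (not the thesis framework): flag calls
# to C functions frequently linked to buffer-overflow vulnerabilities.

import re
import sys

RISKY_CALLS = {
    "gets":    "no bounds checking; prefer fgets",
    "strcpy":  "unbounded copy; prefer strncpy or strlcpy",
    "sprintf": "unbounded format; prefer snprintf",
}

CALL_PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def scan_file(path):
    """Yield (line number, function name, advice) for each risky call found."""
    with open(path, encoding="utf-8", errors="replace") as source:
        for lineno, line in enumerate(source, start=1):
            for match in CALL_PATTERN.finditer(line):
                name = match.group(1)
                yield lineno, name, RISKY_CALLS[name]

if __name__ == "__main__":
    # Usage: python risky_calls.py file1.c file2.c ...
    for file_path in sys.argv[1:]:
        for lineno, name, advice in scan_file(file_path):
            print(f"{file_path}:{lineno}: call to {name}() -- {advice}")
```

A lab built around such a checker could then ask students to compare its output with a production static analysis tool and discuss false positives and safer API alternatives.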