Maynooth University
Abstract: Extensive research on formal verification of machine learning (ML) systems indicates that learning from data alone often fails to capture underlying background knowledge. A variety of verifiers have been developed to ensure that a machine-learnt model satisfies correctness and safety properties; however, these verifiers typically assume a trained network with fixed weights. ML-enabled autonomous systems must not only detect incorrect predictions, but should also be able to self-correct, continuously improving and adapting. A promising approach for creating ML models that inherently satisfy constraints is to encode background knowledge as logical constraints that guide the learning process via so-called differentiable logics. In this research preview, we compare and evaluate various logics from the literature in weakly-supervised contexts, presenting our findings and highlighting open problems for future work. Our experimental results are broadly consistent with results previously reported in the literature; however, learning with differentiable logics introduces a new hyperparameter that is difficult to tune and has a significant influence on the effectiveness of the logics.
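To make the idea of a differentiable logic concrete, the following minimal Python sketch shows one common style of relaxation: logical connectives are interpreted over truth values in [0, 1] using fuzzy operators (here a product t-norm and the Reichenbach implication), and a constraint's truth value is turned into a loss term weighted by a coefficient lam. The example constraint, the operator choices, and all function names are illustrative assumptions rather than the specific logics evaluated in the preview; lam plays the role of the additional hyperparameter mentioned above.

```python
# Illustrative sketch of a differentiable (fuzzy) logic as a loss term.
# The constraint and all names are assumptions made for illustration.

def t_and(a, b):
    # Product t-norm: differentiable relaxation of conjunction.
    return a * b

def t_not(a):
    # Standard fuzzy negation.
    return 1.0 - a

def t_implies(a, b):
    # Reichenbach implication: 1 - a + a*b.
    return 1.0 - a + a * b

def constraint_loss(p_car, p_vehicle):
    # Hypothetical background knowledge: "car implies vehicle", applied to
    # the model's predicted class probabilities. The loss is small when the
    # relaxed formula is close to true.
    return 1.0 - t_implies(p_car, p_vehicle)

def total_loss(task_loss, p_car, p_vehicle, lam=0.5):
    # lam trades data fit against constraint satisfaction; it is the extra
    # hyperparameter the abstract notes is difficult to tune.
    return task_loss + lam * constraint_loss(p_car, p_vehicle)

# Example: a task loss of 0.3 with p(car)=0.9 but p(vehicle)=0.2 is
# penalised because the constraint is badly violated.
print(total_loss(0.3, 0.9, 0.2))
```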
Abstract: Software verification is an important tool in establishing the reliability of critical systems. One potential area of application is robotics, as robots take on more tasks in both day-to-day settings and highly specialised domains. Robots are usually given a plan to follow; if this plan contains errors, the robot will not perform reliably. The capability to check plans for errors in advance could prevent this. Python is a popular programming language in the robotics domain, through the use of the Robot Operating System (ROS) and various other libraries. Python's Turtle package provides a mobile agent, which we formally model here using Communicating Sequential Processes (CSP). Our interactive toolchain, CSP2Turtle, with its CSP model and Python components, enables Turtle plans to be verified in CSP before being executed in Python. This means that certain classes of errors can be avoided, and it provides a starting point for more detailed verification of Turtle programs and of more complex robotic systems. We illustrate our approach with examples of robot navigation and obstacle avoidance in a 2D grid-world.
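As a rough illustration of the kind of property such pre-execution checking targets, the toy Python function below walks a Turtle-style plan over a 2D grid-world and rejects it if it leaves the grid or enters an obstacle cell. It is a plain-Python stand-in, not the CSP2Turtle toolchain itself; the plan format, the move names, and the checker interface are assumptions made for illustration.

```python
# Toy plan checker (illustrative only): mimics the sort of safety property a
# CSP model can verify before a Turtle plan is executed.

MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def plan_is_safe(plan, start, obstacles, width, height):
    """Return True if following `plan` from `start` stays on the grid and
    never enters an obstacle cell."""
    x, y = start
    for step in plan:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        if not (0 <= x < width and 0 <= y < height) or (x, y) in obstacles:
            return False
    return True

# Example: check a plan before handing it to the Turtle agent for execution.
if plan_is_safe(["east", "east", "north"], (0, 0), {(1, 0)}, 5, 5):
    print("plan verified: safe to execute")
else:
    print("plan rejected: it leaves the grid or hits an obstacle")
```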
Abstract: The Why3 IDE and verification system facilitates the use of a wide range of Satisfiability Modulo Theories (SMT) solvers through a driver-based architecture. We present Where4: a portfolio-based approach to discharging Why3 proof obligations. We use data analysis and machine learning techniques on static metrics derived from program source code. Our approach benefits software engineers by providing a single utility that delegates proof obligations to the solvers most likely to return a useful result. It does this in a time-efficient way, using existing Why3 and solver installations, without requiring low-level knowledge of SMT solver operation from the user.
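The sketch below illustrates the portfolio idea in Python: solvers are ranked by a score predicted from static metrics of the proof obligation and then tried in order within a time budget. It is not Where4's actual code or interface; `model.predict` and `run` are hypothetical stand-ins for the learnt ranking model and for invoking a solver through an existing Why3 driver.

```python
# Illustrative portfolio scheduler (not Where4's implementation or API).

def rank_solvers(metrics, model, solvers):
    """Order solvers by predicted usefulness on this proof obligation.
    `model.predict(solver, metrics)` is a hypothetical interface returning a
    score such as expected solving time (lower is better)."""
    return sorted(solvers, key=lambda solver: model.predict(solver, metrics))

def discharge(obligation, metrics, model, solvers, run, timeout=5.0):
    """Try solvers in predicted order until one gives a conclusive answer.
    `run(solver, obligation, timeout)` is a hypothetical wrapper around an
    existing Why3 driver for that solver."""
    for solver in rank_solvers(metrics, model, solvers):
        result = run(solver, obligation, timeout)
        if result in ("Valid", "Invalid"):  # conclusive answer found
            return solver, result
    return None, "Unknown"
```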