FoMLAS2023: Papers with Abstracts

Papers
Abstract. Machine learning components, such as neural networks, gradually make their way into software; and, when the software is safety-critical, the machine learning components must be verifiably safe. This gives rise to the problem of neural network verification. The community has been making rapid progress in developing methods for incorporating logical specifications into neural networks, both in training and verification. However, to truly unlock the ability to verify real-world neural network-enhanced systems, we believe the following is necessary:
1. The specification should be written once and should automatically work with both training and verification tools.
2. The specification should be written in a manner independent of any particular neural network training/inference platform.
3. The specification should be expressible as a high-level property over the problem space, rather than as a property over the input space (of the neural network).
4. After verification, the specification should be exportable to general interactive theorem provers so that its proof can be incorporated into proofs about the larger systems around the neural network.
In this tutorial we presented Vehicle, a tool that allows users to do all of this. We provided an introduction to the Vehicle specification language, and then walked attendees through using it to express a variety of famous and not-so-famous specifications. Subsequently, we demonstrated how a specification can be i) compiled down to queries for network verifiers, ii) compiled to TensorFlow graphs to be used as loss functions during training, and iii) cross-compiled to Agda, a mainstream interactive theorem prover. Finally, we discussed some of the technical challenges in the implementation as well as some of the outstanding problems.
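To make requirement 3 concrete, here is a minimal Python sketch (not Vehicle's own specification language) of the difference between a property stated over the problem space and the input-space box it must be translated into before a verifier can consume it. The normalisation constants and the `to_input_space` helper are hypothetical stand-ins for whatever preprocessing the network was trained with.

```python
import numpy as np

# Hypothetical normalisation used at training time (per-channel mean/std).
MEAN, STD = 0.1307, 0.3081

def to_input_space(pixels: np.ndarray) -> np.ndarray:
    """Map problem-space pixel values in [0, 1] to the network's input space."""
    return (pixels - MEAN) / STD

# Problem-space property: "the classification is stable under a brightness
# change of at most 0.05 on every pixel of this image".
image = np.random.rand(28 * 28)
eps = 0.05

# What a verifier back-end actually receives: a box over *normalised* inputs.
lower = to_input_space(np.clip(image - eps, 0.0, 1.0))
upper = to_input_space(np.clip(image + eps, 0.0, 1.0))
```

Stating the property once over raw pixel values and letting the tool perform this translation is what keeps the specification independent of any particular training or inference platform.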
Abstract. Owing to their remarkable learning capabilities and performance in real-world applications, the use of machine learning systems based on Deep Neural Networks (DNNs) has been continuously increasing. However, various case studies and empirical findings in the literature suggest that slight variations to DNN inputs can lead to erroneous and undesirable DNN behavior. This has led to considerable interest in their formal analysis, aiming to provide guarantees regarding a given DNN’s behavior. Existing frameworks provide robustness and/or safety guarantees for trained DNNs using satisfiability solving and linear programming. We proposed FANNet, the first model checking-based framework for analyzing a broader range of DNN properties. However, the state-space explosion associated with model checking entails a scalability problem, making FANNet applicable only to small DNNs. This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal DNN analysis. Compared to the state-of-the-art FANNet, this enables our new model checking-based framework to reduce the verification's timing overhead by a factor of up to 8000, making the framework applicable even to DNNs with approximately 80 times more network parameters. This in turn allows the analysis of DNN safety properties using the new framework, in addition to all the DNN properties already supported by FANNet. The framework is shown to efficiently analyze properties of DNNs trained on healthcare datasets as well as the well-known ACAS Xu networks.
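The input segmentation idea can be sketched independently of the framework's internals: rather than handing a model checker one large input region, the region is split into segments that are analysed separately (and can be parallelised). In the Python sketch below, the toy `network` and the sampling-based `check_segment` are hypothetical placeholders for the network under analysis and for a sound model-checking call.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3)), rng.normal(size=2)   # toy stand-in for a DNN

def network(x):
    return W @ x + b

def segment_box(lower, upper, splits=4):
    """Split an input hyper-rectangle into splits**dim smaller segments."""
    edges = [np.linspace(lo, hi, splits + 1) for lo, hi in zip(lower, upper)]
    for cell in itertools.product(range(splits), repeat=len(lower)):
        yield (np.array([edges[d][i] for d, i in enumerate(cell)]),
               np.array([edges[d][i + 1] for d, i in enumerate(cell)]))

def check_segment(lo, hi, samples=256):
    """Hypothetical stand-in for a model-checker call on one segment.

    Here it merely samples the segment and tests the property
    "output 0 stays above output 1"; a real framework would return a
    sound verdict for the whole segment rather than a sampled one.
    """
    xs = rng.uniform(lo, hi, size=(samples, len(lo)))
    return all(network(x)[0] >= network(x)[1] for x in xs)

lower, upper = np.zeros(3), np.ones(3)
# Segments are independent, so they can be checked (and parallelised) separately.
print(all(check_segment(lo, hi) for lo, hi in segment_box(lower, upper)))
```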
Abstract. In this paper, we propose an innovative approach that incorporates formal verification methods into the training process of stochastic Reinforcement Learning (RL) agents. Our method allows for the analysis and improvement of the learning process of these agents. Specifically, we demonstrate the capability to evaluate RL policies (prediction) and optimize them (control) using different model checkers. The integration of formal verification tools with stochastic RL agents strengthens the applicability and potential of our approach, paving the way for more robust and reliable learning systems.
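The abstract does not fix a particular encoding, but "prediction" with a model checker can be illustrated as follows: a stochastic policy applied to an MDP induces a Markov chain, and the probability of eventually reaching a goal state is obtained by solving a linear system, which is essentially what probabilistic model checkers such as PRISM or Storm automate. The toy MDP, policy, and goal state below are made up for the example.

```python
import numpy as np

# Toy MDP: 4 states, 2 actions. P[a, s, s'] is the transition probability.
P = np.zeros((2, 4, 4))
P[0] = [[0.8, 0.2, 0.0, 0.0],
        [0.0, 0.5, 0.5, 0.0],
        [0.0, 0.0, 0.1, 0.9],
        [0.0, 0.0, 0.0, 1.0]]
P[1] = [[0.2, 0.8, 0.0, 0.0],
        [0.0, 0.1, 0.9, 0.0],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]
GOAL = 3  # absorbing goal state

# Stochastic policy pi[s, a]: probability of taking action a in state s.
pi = np.array([[0.3, 0.7],
               [0.5, 0.5],
               [0.9, 0.1],
               [1.0, 0.0]])

# "Prediction": the policy induces a Markov chain M[s, s'].
M = np.einsum("sa,ast->st", pi, P)

# Reachability probabilities x_s = P(eventually reach GOAL from s) satisfy
# x_GOAL = 1 and x_s = sum_s' M[s, s'] * x_s' for the remaining states.
other = [s for s in range(4) if s != GOAL]
A = np.eye(len(other)) - M[np.ix_(other, other)]
b = M[np.ix_(other, [GOAL])].sum(axis=1)
x = np.linalg.solve(A, b)
print(dict(zip(other, x)))   # value of the policy for the reachability objective
```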
Abstract. Private inference on neural networks requires running all the computation on encrypted data. Unfortunately, neural networks contain a large number of non-arithmetic operations, such as ReLU activation functions and max pooling layers, which incur a high latency cost in their encrypted form. To address this issue, the majority of private inference methods replace some or all of the non-arithmetic operations with a polynomial approximation. This step introduces approximation errors that can substantially alter the output of the neural network and decrease its predictive performance. In this paper, we propose a Lipschitz-Guided Abstraction Refinement method (LiGAR), which provides strong guarantees on the global approximation error. Our method is iterative, and leverages state-of-the-art Lipschitz constant estimation techniques to produce increasingly tighter bounds on the worst-case error. At each iteration, LiGAR designs the least expensive polynomial approximation by solving the dual of the corresponding optimization problem. Our preliminary experiments show that LiGAR can easily converge to the optimum on medium-sized neural networks.
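As a rough, simplified illustration of Lipschitz-guided refinement (not the paper's dual-based selection of approximations), the sketch below bounds how much a polynomial approximation of ReLU at one layer can perturb the output: the local approximation error is amplified by at most a Lipschitz bound on the downstream layers, and the layer contributing most to the global bound receives a higher-degree approximation until a budget is met or a degree cap is reached. All constants and helper names are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def poly_relu_error(degree, radius=4.0, grid=10_001):
    """Worst-case error of a least-squares polynomial fit to ReLU on [-r, r]."""
    xs = np.linspace(-radius, radius, grid)
    coeffs = np.polyfit(xs, relu(xs), degree)
    return float(np.max(np.abs(np.polyval(coeffs, xs) - relu(xs))))

def refine(weight_mats, budget=0.5, max_degree=8):
    """Assign a polynomial degree to each activation layer.

    The product of spectral norms of the *downstream* weight matrices is a
    (crude) Lipschitz upper bound on how much a local approximation error
    can perturb the network output; refinement keeps raising the degree of
    the worst offender until the global bound meets the budget.
    """
    degrees = [2] * len(weight_mats)
    lipschitz_after = [
        np.prod([np.linalg.norm(W, 2) for W in weight_mats[i + 1:]])
        for i in range(len(weight_mats))
    ]
    while True:
        amplified = [poly_relu_error(d) * L for d, L in zip(degrees, lipschitz_after)]
        if sum(amplified) <= budget or max(degrees) >= max_degree:
            return degrees, sum(amplified)
        degrees[int(np.argmax(amplified))] += 1   # refine the dominant term

rng = np.random.default_rng(0)
layers = [rng.normal(scale=0.3, size=(16, 16)) for _ in range(3)]
print(refine(layers))
```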
Abstract. The interest in the verification of neural networks has been growing steadily in recent years, and there have been several advancements in theory, algorithms and tools for the verification of neural networks. Also propelled by VNNCOMP, the annual competition of tools for the verification of neural networks, the community is making steady progress to close the gap with practical applications. In this scenario, we believe that researchers and practitioners should rely on a commonly accepted standard to describe (trained) networks and their properties, as well as a toolset to visualize and to convert from common formats to such a standard. The purpose of VNN-LIB and CoCoNet is precisely to provide such a standard and toolset, respectively. In this paper we briefly describe the principles and design choices behind the current version of the VNN-LIB standard, and we give an overview of the current and planned capabilities of CoCoNet.
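For readers unfamiliar with the format, a VNN-LIB property file is essentially a small SMT-LIB-style text constraining the network's inputs (X_i) and outputs (Y_j). The Python sketch below emits a toy local-robustness query of that shape; it only illustrates the flavour of the standard, is not output produced by CoCoNet, and the exact grammar accepted by individual verifiers may be stricter.

```python
import numpy as np

def write_vnnlib_box(path, center, eps, target_label, n_outputs):
    """Write a VNN-LIB-style query: inputs constrained to an eps-box around
    `center`, plus the negated property that some other class scores at
    least as high as the target class (a counterexample condition)."""
    lines = []
    for i in range(len(center)):
        lines.append(f"(declare-const X_{i} Real)")
    for j in range(n_outputs):
        lines.append(f"(declare-const Y_{j} Real)")
    for i, c in enumerate(center):
        lines.append(f"(assert (>= X_{i} {c - eps:.6f}))")
        lines.append(f"(assert (<= X_{i} {c + eps:.6f}))")
    disjuncts = " ".join(
        f"(>= Y_{j} Y_{target_label})" for j in range(n_outputs) if j != target_label
    )
    lines.append(f"(assert (or {disjuncts}))")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_vnnlib_box("toy_query.vnnlib", center=np.array([0.1, 0.2, 0.3]), eps=0.05,
                 target_label=0, n_outputs=5)
```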
Abstract. Verification of machine learning models used in Natural Language Processing (NLP) is known to be a hard problem. In particular, many known neural network verification methods that work for computer vision and other numeric datasets do not work for NLP. Here, we study the technical reasons that underlie this problem. Based on this analysis, we propose practical methods and heuristics for preparing NLP datasets and models in a way that renders them amenable to state-of-the-art verification methods. We implement these methods as a Python library called ANTONIO that links to the neural network verifiers ERAN and Marabou. We evaluate the tool on the R-U-A-Robot NLP dataset, which has been suggested as a benchmark for verifying legally critical NLP applications. We hope that, thanks to its general applicability, this work will open novel possibilities for including NLP verification problems in neural network verification competitions, and will popularise NLP problems within this community.
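One common recipe for making NLP inputs amenable to geometric verifiers, and roughly the kind of preparation such a pipeline automates, is to map sentences into a continuous embedding space and enclose a sentence together with its perturbations in a geometric region that tools like ERAN or Marabou can reason about. The sketch below is generic and does not use ANTONIO's actual API; `embed` is a hypothetical stand-in for a real sentence encoder.

```python
import numpy as np

def embed(sentence: str) -> np.ndarray:
    """Hypothetical stand-in for a sentence encoder (e.g. a transformer);
    here it merely hashes characters into a fixed-size vector."""
    vec = np.zeros(16)
    for i, ch in enumerate(sentence.lower()):
        vec[i % 16] += ord(ch) / 1000.0
    return vec

# A sentence and some semantically equivalent perturbations (typos, rephrasings).
variants = [
    "are you a robot",
    "are you a rob0t",
    "r u a robot",
    "are you actually a robot",
]
points = np.stack([embed(s) for s in variants])

# Hyper-rectangle enclosing all variants in embedding space: this region,
# rather than the discrete sentences, is what a geometric verifier analyses.
lower, upper = points.min(axis=0), points.max(axis=0)
print(lower, upper)
```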
Abstract. Current approaches to neural network verification focus on specifications that target small regions around known input data points, such as local robustness. Thus, using these approaches, we cannot obtain guarantees for inputs that are not close to known inputs. Yet, it is highly likely that a neural network will encounter such truly unseen inputs during its application. We study global specifications that, when satisfied, provide guarantees for all potential inputs. We introduce a hyperproperty formalism that allows for expressing global specifications such as monotonicity, Lipschitz continuity, global robustness, and dependency fairness. Our formalism enables verifying global specifications using existing neural network verification approaches by leveraging capabilities for verifying general computational graphs. Thereby, we extend the scope of guarantees that can be provided using existing methods. Recent success in verifying specific global specifications shows that attaining strong guarantees for all potential data points is feasible.
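The key idea that makes such hyperproperties checkable with existing tools can be illustrated by self-composition: a property over pairs of executions, such as monotonicity in one input feature, becomes an ordinary input-output property of a graph containing two copies of the network. The numpy sketch below is a generic illustration under that assumption, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def f(x):
    """Toy network whose monotonicity in feature 0 we want to check."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def composed(x_pair):
    """Self-composition: one graph over a pair (x, x') that outputs f(x') - f(x).

    Monotonicity in feature 0 is the hyperproperty
        x'_0 >= x_0  and  x'_i = x_i for i != 0   implies   f(x') >= f(x),
    which over this composed graph becomes the ordinary property "output >= 0"
    on a constrained input region, so a verifier that handles general
    computational graphs can check it directly.
    """
    x, x_prime = x_pair[:3], x_pair[3:]
    return f(x_prime) - f(x)

# Sampling-based sanity check of the encoding (a verifier would do this soundly).
for _ in range(5):
    x = rng.normal(size=3)
    x_prime = x.copy()
    x_prime[0] += abs(rng.normal())
    print(float(composed(np.concatenate([x, x_prime]))))
```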