VSL 2014: VIENNA SUMMER OF LOGIC 2014
CAV ON SUNDAY, JULY 20TH, 2014

08:45-10:15 Session 123: FLoC Plenary Talk (joint with 9 other meetings)
Location: FH, Hörsaal 1
08:45
FLoC Plenary Talk: Electronic voting: how logic can help?

ABSTRACT. Electronic voting should offer at least the same guarantees as traditional paper-based voting systems. In order to achieve this, electronic voting protocols make use of cryptographic primitives, as in the more traditional cases of authentication or key exchange protocols. All these protocols are notoriously difficult to design, and flaws may be found years after their first release. Formal models, such as process algebra, Horn clauses, or constraint systems, have been successfully applied to automatically analyze traditional protocols and discover flaws. Electronic voting protocols, however, significantly increase the difficulty of the analysis task: they involve, for example, new and sophisticated cryptographic primitives, new dedicated security properties, and new execution structures.

After an introduction to electronic voting, we will describe the current techniques for e-voting protocols analysis and review the key challenges towards a fully automated verification.

10:15-10:45 Coffee Break
10:15-18:00 Session 126A: FLoC Olympic Games Big Screen: CASC Automated Theorem Proving System Competition (CASC-J7) (joint with 9 other meetings)
Location: FH, 2nd floor
10:15
FLoC Olympic Games Big Screen: 7th IJCAR ATP System Competition (CASC-J7)

ABSTRACT. The CADE and IJCAR conferences are the major forums for the presentation of new research in all aspects of automated deduction. In order to stimulate ATP research and system development, and to expose ATP systems within and beyond the ATP community, the CADE ATP System Competition (CASC) is held at each CADE and IJCAR conference. CASC-J7 will be held on 20th July 2014, during the 7th International Joint Conference on Automated Reasoning (IJCAR, which incorporates CADE). CASC evaluates the performance of sound, fully automatic ATP systems. The evaluation is in terms of:

  • the number of problems solved,
  • the number of problems solved with a solution output, and
  • the average runtime for problems solved;

in the context of:

  • a bounded number of eligible problems,
  • chosen from the TPTP Problem Library, and
  • specified time limits on solution attempts.
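
To make the precedence of these criteria concrete, here is a minimal Python sketch of such a ranking. It is illustrative only: the system names and numbers are invented, and the exact tie-breaking order is an assumption based on the list above, not the official CASC scoring rules.

    # Hypothetical ranking by the three criteria above, in the order listed:
    # more problems solved first, more solutions output next, and lower
    # average runtime over solved problems as the final tie-breaker.
    from dataclasses import dataclass

    @dataclass
    class Result:
        name: str
        solved: int          # problems solved
        with_solution: int   # problems solved with a solution output
        avg_runtime: float   # average runtime over solved problems (seconds)

    def rank(results):
        return sorted(results,
                      key=lambda r: (-r.solved, -r.with_solution, r.avg_runtime))

    systems = [Result("ProverA", 310, 290, 12.4),   # invented data
               Result("ProverB", 310, 301, 15.9),
               Result("ProverC", 287, 287, 3.1)]
    for place, r in enumerate(rank(systems), start=1):
        print(place, r.name)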

The competition organizer is Geoff Sutcliffe. The competition is overseen by a panel of knowledgeable researchers who are not participating in the event. The panel members are Bernhard Beckert, Maria Paola Bonacina, and Aart Middeldorp.

10:15-18:00 Session 126B: FLoC Olympic Games Big Screen: Termination Competition (termComp) (joint with 9 other meetings)
Location: FH, 2nd floor
10:15
FLoC Olympic Games Big Screen: Termination Competition (termCOMP 2014)
SPEAKER: Albert Rubio

ABSTRACT. The termination competition focuses on automated termination analysis for all kinds of programming paradigms, including categories for term rewriting, imperative programming, logic programming, and functional programming. There are also categories for automated complexity analysis. In all categories, participation of tools providing certified proofs is welcome.

In every category, all participating tools of that category are run on a randomly selected subset of the available problems. The goal of the termination competition is to demonstrate the power of the leading tools in each of these areas.
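
As a toy illustration of the kind of certificate such termination tools search for, the Python sketch below tests (it does not prove) that a candidate ranking function is bounded below and strictly decreases on every step of an invented integer loop. The loop, the ranking function, and the sample range are assumptions for illustration only.

    # A ranking function certifies termination if it is bounded below and
    # strictly decreases on every transition enabled by the loop guard.

    def transition(x):
        # Toy integer loop body: while x > 0: x = x - 2 if even else x - 1
        return x - 2 if x % 2 == 0 else x - 1

    def check_ranking_function(rank, guard, step, samples):
        """Test (on samples, not prove) that `rank` certifies termination."""
        for x in samples:
            if guard(x):
                x2 = step(x)
                assert rank(x) >= 0, "ranking function must be bounded below"
                assert rank(x2) < rank(x), "must strictly decrease each step"
        return True

    print(check_ranking_function(rank=lambda x: x,
                                 guard=lambda x: x > 0,
                                 step=transition,
                                 samples=range(0, 1000)))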

The 2014 edition of the competition is organized by Johannes Waldmann and Stefan von der Krone. The Steering Committee consists of Jürgen Giesl, Frederic Mesnard, Albert Rubio (Chair), René Thiemann, and Johannes Waldmann.

10:45-13:05 Session 127C: Model Checking and Testing
Location: FH, Hörsaal 1
10:45
Engineering a Static Verification Tool for GPU Kernels

ABSTRACT. We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for GPU kernels written in OpenCL and CUDA. We have published papers on the top-level research ideas underpinning GPUVerify, but embedding these ideas in a tool that can be applied to real-world examples with a reasonable degree of efficiency and automation has required significant optimisation effort and a number of important engineering decisions. We describe these decisions and optimisations, and demonstrate their effectiveness using experimental data gathered for a set of more than 500 benchmarks, encompassing all the major publicly available benchmark suites and a commercial benchmark suite. Our experiments plot the progress of GPUVerify, starting from a sound concept that did not scale well in practice, through a series of improvements and optimisations, to a fully functional and relatively efficient analysis tool. We also report on our efforts to encourage uptake of the tool by industry through features to reduce the rate of false positives and provide clear error messages when verification fails. Finally, we describe our engagement with industry to promote usage of the tool beyond academia and the fruitful feedback this has brought. Our hope is that this engineering and experience report will serve the verification community by helping to inform future tooling efforts.

11:05
Lazy Annotation Revisited

ABSTRACT. Lazy Annotation is a method of software model checking that performs a backtracking search for a symbolic counterexample. When the search backtracks, the program is annotated with a learned fact that constrains future search. In this sense, the method is closely analogous to conflict-driven clause learning in Boolean satisfiability solvers. In this paper, we develop several improvements to the basic lazy annotation approach. These include a decision heuristic, a form of non-chronological backtracking, and an adaptation to large-block encodings that exploits the power of modern satisfiability modulo theories solvers.

The resulting algorithm is compared both theoretically and experimentally to two approaches that rely on similar principles but use different learning strategies: unfolding-based bounded model checking and property-driven reachability. The lazy annotation approach is seen to be significantly more robust over a collection of benchmarks from the SV-COMP 2013 software model checking competition and the Microsoft Static Driver Verifier regression suite.
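
The learning idea can be pictured with a much-simplified, explicit-state analogy (this is not Lazy Annotation itself, which works symbolically over program annotations): a backtracking search for a counterexample that, on backtrack, records the fact "no error state is reachable from s within k steps" and uses it to prune later visits, much as learned clauses prune a CDCL search. The toy transition system and all names below are invented.

    # Backtracking counterexample search with facts learned on backtrack.

    def find_cex(succ, s, error, k, learned):
        if s in error:
            return [s]
        if k == 0 or (s, k) in learned:
            return None
        for t in succ(s):
            path = find_cex(succ, t, error, k - 1, learned)
            if path is not None:
                return [s] + path
        learned.add((s, k))   # annotation learned on backtrack
        return None

    # Toy transition system: states 0..5, edges i -> i+1 and i -> 0.
    succ = lambda s: [x for x in (s + 1, 0) if x <= 5]
    print(find_cex(succ, 0, error={5}, k=6, learned=set()))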

11:25
Interpolating Property Directed Reachability
SPEAKER: Yakir Vizel

ABSTRACT. Current SAT-based Model Checking is based on two major approaches: Interpolation-based Model Checking (IMC) (global, with unrollings) and Property Directed Reachability/IC3 (PDR) (local, without unrollings). IMC generates candidate invariants using interpolation over an unrolling of a system, without putting any restrictions on the SAT solver's search. PDR generates candidate invariants by a local search over a single instantiation of the transition relation, effectively guiding the SAT solver's search. The two techniques are considered to be orthogonal and have different strengths and limitations. In this paper, we present a new technique, called AVY, that effectively combines the key insights of the two approaches. Like IMC, it uses unrollings and interpolants to construct an initial candidate invariant, and, like PDR, it uses local inductive generalization to keep the invariants in compact clausal form. On the one hand, AVY is an incremental IMC extended with a local search for CNF interpolants. On the other, it is PDR extended with a global search for bounded counterexamples. We implemented the technique using ABC and have evaluated it on the HWMCC'12 and HWMCC'13 benchmark suites. Our results show that the prototype significantly outperforms PDR and McMillan's interpolation algorithm (as implemented in ABC) on a large number of test cases.

11:45
Verifying Relative Error Bounds using Symbolic Simulation
SPEAKER: unknown

ABSTRACT. In this paper we consider the problem of formally verifying hardware that is specified to compute reciprocal, reciprocal square root, and power-of-two functions on floating point numbers to within a given relative error. Such specifications differ from the common case in which any given input is specified to have exactly one correct output. Our approach is based on symbolic simulation with binary decision diagrams, and involves two distinct steps. First, we prove a lemma that reduces the relative error specification to several inequalities that involve reasoning about natural numbers only. The most complex of these inequalities asserts that the product of several naturals is less-than/greater-than another natural. Second, we invoke one of several customized algorithms that decides the inequality, without performing the expensive symbolic multiplications directly. We demonstrate the effectiveness of our approach on a next-generation Intel processor design and report encouraging time and space metrics for these proofs.
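
The flavor of the reduction can be seen in a toy fixed-point setting; the encoding, the parameters K and P, and the rounding scheme below are assumptions for illustration, not the paper's. A relative error bound for a reciprocal becomes a single inequality comparing a product of naturals against another natural, which is exactly the shape of constraint the abstract mentions.

    # Fixed-point value x with K fractional bits is stored as X = x * 2^K.
    # The spec |r - 1/x| <= 2^-P * (1/x), multiplied through by X * 2^K > 0,
    # becomes the integer inequality |R*X - 2^(2K)| <= 2^(2K-P).

    K = 16      # fractional bits of the fixed-point format (assumed)
    P = 12      # required relative accuracy (assumed)

    def reciprocal_hw(X):
        # Stand-in for the hardware under test: round-to-nearest reciprocal.
        return (2 ** (2 * K) + X // 2) // X

    def meets_relative_error(X, R):
        # Pure natural-number reasoning: product of naturals vs. a natural.
        return abs(R * X - 2 ** (2 * K)) <= 2 ** (2 * K - P)

    assert all(meets_relative_error(X, reciprocal_hw(X))
               for X in range(1, 1 << K))
    print("relative error bound holds on all sampled inputs")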

12:05
Regression Test Selection for Distributed Software Histories

ABSTRACT. Regression test selection analyzes incremental changes to a codebase and chooses to run only those tests whose behavior may be affected by the latest changes in the code. By focusing on a small subset of all the tests, the testing process runs faster and can be more tightly integrated into the development process. Existing techniques for regression test selection consider two versions of the code at a time, effectively assuming a development process where changes to the code occur in a linear sequence.

Modern development processes that use distributed version-control systems are more complex. Software version histories are generally modeled as directed graphs; in addition to version changes occurring linearly, multiple versions can be related by other commands, e.g., branch, merge, rebase, cherry-pick, revert, etc. This paper describes a regression test-selection technique for software developed using modern distributed version-control systems. By modeling different branch or merge commands directly in our technique, it computes safe test sets that can be substantially smaller than applying previous techniques to a linearization of the software history.

We evaluate our technique on software histories of several large open-source projects. The results are encouraging: our technique obtained an average of 10.89× reduction in the number of tests over an existing technique while still selecting all tests whose behavior may differ.
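
A deliberately coarse sketch of graph-aware selection follows; it is not the paper's technique (which computes substantially smaller safe sets), and the commit names, file names, and test-dependency map are invented.

    # Select tests whose dependencies were touched on any path from the last
    # fully tested commit to the head, walking the DAG, not a linearization.
    from collections import deque

    def changed_since(parents, changes, head, tested):
        """Union of files changed on any path from a tested commit to head."""
        seen, files, todo = set(), set(), deque([head])
        while todo:
            c = todo.popleft()
            if c in seen or c in tested:
                continue
            seen.add(c)
            files |= changes.get(c, set())
            todo.extend(parents.get(c, []))
        return files

    # Toy history: two branches off "base" merged at "m".
    parents = {"m": ["b1", "b2"], "b1": ["base"], "b2": ["base"]}
    changes = {"b1": {"parser.c"}, "b2": {"ui.c"}, "m": set()}
    deps = {"test_parser": {"parser.c"}, "test_ui": {"ui.c"},
            "test_db": {"db.c"}}

    touched = changed_since(parents, changes, head="m", tested={"base"})
    selected = [t for t, fs in deps.items() if fs & touched]
    print(sorted(selected))   # ['test_parser', 'test_ui']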

12:25
GPU-Based Graph Decomposition into Strongly Connected and Maximal End Components
SPEAKER: Anton Wijs

ABSTRACT. This paper presents parallel algorithms for component decomposition of graph structures on General Purpose Graphics Processing Units (GPUs). In particular, we consider the problem of decomposing sparse graphs into strongly connected components, and decomposing stochastic games (such as MDPs) into maximal end components. These problems are key ingredients of many (probabilistic) model-checking algorithms. We explain the main rationales behind our GPU algorithms, and show a significant speed-up over the sequential counterparts in several case studies.
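
For intuition, below is a sequential Python sketch of the classic forward-backward (FB) decomposition that many parallel and GPU SCC algorithms build on; the paper's GPU algorithms are considerably more refined. The SCC of a pivot is the intersection of its forward- and backward-reachable sets, and the three remaining regions can be processed independently, which is what makes the scheme amenable to parallelization.

    # Forward-backward SCC decomposition on adjacency dicts (fwd and bwd).

    def reach(adj, sources, universe):
        seen, stack = set(), list(sources)
        while stack:
            v = stack.pop()
            if v in seen or v not in universe:
                continue
            seen.add(v)
            stack.extend(adj.get(v, []))
        return seen

    def fb_scc(fwd, bwd, universe):
        if not universe:
            return []
        pivot = next(iter(universe))
        F = reach(fwd, [pivot], universe)   # forward-reachable from pivot
        B = reach(bwd, [pivot], universe)   # backward-reachable from pivot
        scc = F & B                          # the pivot's SCC
        return ([scc]
                + fb_scc(fwd, bwd, F - scc)            # three independent
                + fb_scc(fwd, bwd, B - scc)            # subproblems, each
                + fb_scc(fwd, bwd, universe - F - B))  # parallelizable

    fwd = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
    bwd = {2: [1], 3: [2], 1: [3], 4: [3, 5], 5: [4]}
    print(fb_scc(fwd, bwd, universe={1, 2, 3, 4, 5}))  # {1,2,3} and {4,5}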

12:45
Software Verification in the Google App-Engine Cloud

ABSTRACT. Software verification often requires a large amount of computing resources. In the last years, cloud services have emerged as an inexpensive, flexible, and energy-efficient source of computing power. We have investigated whether such cloud resources can be used effectively for verification. We chose the platform-as-a-service offering Google App Engine and ported the open-source verification framework CPAchecker to it. We provide our new verification service as a web front-end to users who wish to solve single verification tasks (tutorial usage), and as an API for integrating the service into existing verification infrastructures (massively parallel bulk usage). We experimentally evaluate the effectiveness of this service and show that it can be successfully used to offload verification work to the cloud, considerably sparing local verification resources.

12:55
The nuXmv Symbolic Model Checker
SPEAKER: Marco Roveri

ABSTRACT. This paper describes the nuXmv symbolic model checker for finite- and infinite-state synchronous transition systems. nuXmv is the evolution of the NuSMV open source model checker. It builds on and extends NuSMV along two main directions. For finite-state systems it complements the basic verification techniques of NuSMV with state-of-the-art verification algorithms. For infinite-state systems, it extends the NuSMV language with new data types, namely Integers and Reals, and it provides advanced SMT-based model checking techniques.

Besides extended functionalities, nuXmv has been optimized in terms of performance to be competitive with the state of the art. nuXmv has been used in several industrial projects as a verification back-end, and it is the basis for several extensions to cope with requirements analysis, contract-based design, model checking of hybrid systems, safety assessment, and software model checking.

13:00-14:30 Lunch Break
14:30-16:00 Session 129C: Biology and Hybrid Systems
Location: FH, Hörsaal 1
14:30
Hardware Model Checking Competition 2014 CAV Edition
SPEAKER: Armin Biere

ABSTRACT. Presentation of the results of the 7th International Hardware Model Checking Competition in 2014, CAV Edition.

14:40
Analyzing and Synthesizing Genomic Logic Functions

ABSTRACT. Deciphering the developmental program of an embryo is a fundamental question in biology. Landmark papers by Peter et al. [Peter and Davidson, Nature, 2011; Peter, Faure and Davidson, PNAS, 2012] have recently shown how computational models of gene regulatory networks provide system-level causal understanding of the developmental processes of the sea urchin, and enable powerful predictive capabilities. A crucial aspect of their work is empirically deriving plausible models that explain all the known experimental data, a task that becomes infeasible in practice due to the inherent complexity of the biological systems. We present a generic SMT-based approach to analyze and synthesize real models. We apply our approach to the sea urchin embryo, and successfully improve on the state of the art by providing biologists, for the first time, with models that perfectly explain all known data. Furthermore, we show how infeasibility results on certain classes of models can be used to drive new biological insights.
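
The synthesis question has a simple brute-force analogue, sketched below for one gene; this is illustrative only (the paper uses SMT solving over real sea urchin data, and its models are far richer than a single truth table): find a Boolean regulatory function consistent with every observed experiment, or report infeasibility.

    # Enumerate all Boolean functions of n regulators and keep one that
    # explains every observation; None means the model class is infeasible.
    from itertools import product

    def synthesize(observations, n_inputs):
        """observations: list of ((i1, ..., in), out) pairs of 0/1 values."""
        for table in product([0, 1], repeat=2 ** n_inputs):
            def f(ins, table=table):
                return table[int("".join(map(str, ins)), 2)]
            if all(f(ins) == out for ins, out in observations):
                return table     # a truth table explaining all the data
        return None              # infeasibility result

    # Hypothetical experiments for a gene with two regulators A and B:
    obs = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(synthesize(obs, n_inputs=2))   # (0, 0, 0, 1) -- logical AND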

15:00
Finding instability in biological models

ABSTRACT. The stability of biological models is an important test for establishing their soundness and accuracy. Stability in biological systems represents the ability of a robust system to always return to homeostasis. In recent work, modular approaches for proving stability have been found to be swift and scalable. If stability is not proved, however, the currently available techniques apply an exhaustive search through the unstable state space to find loops. This search is frequently prohibitively expensive, limiting its usefulness. Here we present a new modular approach that eliminates the need for an exhaustive search for loops. Using models of biological systems, we show that the technique finds loops significantly faster than brute-force approaches. Furthermore, for a subset of stable systems that are resistant to modular proofs, we observe a speed-up of up to three orders of magnitude as the exhaustive searches for loops that cause instability are avoided. With our new procedure we are able to prove instability and stability in a number of realistic biological models, including adaptation in bacterial chemotaxis, the lambda phage lysogeny/lysis switch, voltage-gated channel opening, and cAMP oscillations in the slime mold Dictyostelium discoideum. This new approach will support the development of new clinically relevant tools for industrial biomedicine.
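
The role of loops is easy to see in a deterministic finite toy model; the sketch below is not the paper's modular method (which avoids exhaustive search), and the three-gene network is invented. Any trajectory eventually repeats a state, and a repeating segment longer than one state is an oscillation, i.e. a witness of instability rather than a return to a single homeostatic fixpoint.

    # Follow a trajectory until a state repeats; the repeated segment is
    # either a fixpoint (stable) or a longer loop (instability witness).

    def find_loop(step, init):
        trace, seen, s = [], {}, init
        while s not in seen:
            seen[s] = len(trace)
            trace.append(s)
            s = step(s)
        return trace[seen[s]:]   # length 1 = fixpoint, > 1 = oscillation

    # Toy 3-gene Boolean network: each gene represses another, cyclically.
    step = lambda s: (1 - s[2], 1 - s[0], 1 - s[1])
    loop = find_loop(step, init=(0, 0, 0))
    print(len(loop), loop)       # a 2-state oscillation, hence unstable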

15:20
Invariant verification of nonlinear hybrid automata networks of cardiac cells
SPEAKER: Zhenqi Huang

ABSTRACT. Verification algorithms for networks of nonlinear hybrid automata (HA) can aid our understanding of, and our ability to control, biological processes such as cardiac arrhythmia, memory formation, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. It uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A to construct a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29^5 locations), typically in less than 15 minutes for reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.
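
A minimal sketch of simulation-driven reach tube construction follows, with illustrative assumptions throughout: scalar dynamics, a Gronwall-style discrepancy bound |x(t) - y(t)| <= |x(0) - y(0)| * e^(L*t), and the numerical error of the center simulation ignored. The paper instead computes input-to-state discrepancy functions automatically for networks of automata.

    # Simulate one trajectory from the center of the initial set and bloat
    # it by the discrepancy bound to over-approximate the reach set.
    import math

    L = 2.0          # assumed Lipschitz constant of the dynamics
    def f(x):        # toy scalar dynamics dx/dt = -2x
        return -2.0 * x

    def reach_tube(x0, delta, horizon, dt=0.01):
        """Interval tube covering all runs from [x0-delta, x0+delta]."""
        tube, x, t = [], x0, 0.0
        while t <= horizon:
            r = delta * math.exp(L * t)   # discrepancy bound at time t
            tube.append((t, x - r, x + r))
            x += dt * f(x)                # Euler step of the center trace
            t += dt
        return tube

    for t, lo, hi in reach_tube(x0=1.0, delta=0.1, horizon=0.5)[::10]:
        print(f"t={t:.2f}  reach within [{lo:.3f}, {hi:.3f}]")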

15:40
Diamonds are a Girl's Best Friend: Partial Order Reduction for Timed Automata With Abstractions
SPEAKER: Henri Hansen

ABSTRACT. A major obstacle for using partial order reduction in the context of real-time verification is that the presence of clocks and clock constraints breaks the usual diamond structure of otherwise independent transitions. This is especially true when information about the relative values of clocks is preserved in the form of diagonal constraints. However, when diagonal constraints are relaxed by a suitable abstraction, some diamond structure is preserved in the zone graph. In this article, we introduce a variant of the stubborn set method for reducing an abstracted zone graph. Our method works with all abstractions, but is especially suited to situations where one abstract execution can simulate several permutations of the corresponding concrete execution, even though it might not be able to simulate the permutations of the abstract execution. We define independence relations with respect to this "hidden" diamond structure, and define stubborn sets using these relations. We provide a reference implementation for verifying timed language inclusion, to demonstrate the effectiveness of our method.

14:30-16:00 Session 129G: FLoC Olympic Games: Answer Set Programming Modeling Competition 2014 (joint with 9 other meetings)
Location: FH, Hörsaal 2
14:30
FLoC Olympic Games: Answer Set Programming Modeling Competition 2014
SPEAKER: unknown

ABSTRACT. Answer Set Programming (ASP) is a well-established paradigm of declarative programming that has been developed in the field of logic programming and nonmonotonic reasoning. The ASP Modeling Competition 2014, held on-site, collocated with the 30th International Conference on Logic Programming (ICLP), follows the spirit of the Prolog Programming Contests from previous ICLP editions. That is, complex problems of various kinds are waiting to be modeled, using the ASP-Core-2 standard input language of ASP systems. Dauntless programmers, novices and experts, are encouraged to team up, take the challenge and the glory.

16:00-16:30 Coffee Break
16:30-17:10 Session 130C: Hybrid Systems
Location: FH, Hörsaal 1
16:30
Reachability Analysis of Hybrid Systems Using Symbolic Orthogonal Projections

ABSTRACT. This talk deals with reachability analysis of hybrid systems with continuous dynamics described by linear differential inclusions and arbitrary linear maps for discrete updates. The invariants, guards, and sets of reachable states are given as convex polyhedra. Our reachability algorithm is based on a novel representation class for convex polyhedra, the symbolic orthogonal projections (sops), on which various geometric operations, including convex hulls, Minkowski sums, linear maps, and intersections, can be performed efficiently and exactly. The capability to represent intersections of convex polyhedra exactly is superior to support function-based approaches like the LGG-algorithm of Le Guernic and Girard.

Accompanied by some simple examples, we address the problem of the monotonic growth of the exact representation and propose a combination of our reachability algorithm with the LGG-algorithm. This results in an efficient method with better accuracy than the LGG-algorithm and its production implementation in SpaceEx.
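
The point about exact intersections can be recalled with ordinary H-representations (this is only the familiar constraint-based baseline, not the sop representation itself; the example polyhedra are invented): intersecting two constraint-represented polyhedra amounts to stacking their constraints, an operation that support-function representations can only approximate.

    # Polyhedra as (A, b) with P = {x | A x <= b}; intersection is exact.
    import numpy as np

    def intersect(p1, p2):
        (A1, b1), (A2, b2) = p1, p2
        return np.vstack([A1, A2]), np.concatenate([b1, b2])

    def contains(poly, x):
        A, b = poly
        return bool(np.all(A @ x <= b + 1e-9))

    unit_box = (np.array([[1., 0], [-1, 0], [0, 1], [0, -1]]),
                np.array([1., 1, 1, 1]))
    halfplane = (np.array([[1., 1]]), np.array([0.5]))   # x + y <= 0.5

    P = intersect(unit_box, halfplane)
    print(contains(P, np.array([0.2, 0.2])))   # True:  inside both
    print(contains(P, np.array([0.9, 0.9])))   # False: violates x + y <= 0.5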

16:50
Verifying LTL properties of hybrid systems with K-liveness

ABSTRACT. The verification of liveness properties is an important challenge in the design of real-time and hybrid systems. In contrast to the verification of safety properties, for which several solutions are available, very few tools support liveness properties such as general LTL formulas for real-time systems, even in the case of timed automata. In the context of finite-state model checking, K-Liveness is a recently proposed algorithm that tackles the problem by proving that an accepting condition can be visited at most K times. K-Liveness has been shown to be very efficient, thanks also to its tight integration with IC3, probably the most effective technique currently available for safety verification. Unfortunately, the approach is neither complete nor effective (even for simple properties) in the case of infinite-state systems with continuous time.

In this paper, we extend K-Liveness to deal with LTL for hybrid systems. On the theoretical side, we show how to extend the reduction from LTL to the reachability of an accepting condition in order to make the algorithm work with continuous time. In particular, we prove that the new reduction is complete for rectangular hybrid automata, in the sense that the LTL property holds if and only if there exists K such that the accepting condition is visited at most K times. On the practical side, we present an efficient integration of K-Liveness in an SMT-version of IC3, and demonstrate its effectiveness on several benchmarks.
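
The K-Liveness reduction is easy to state in the finite-state case that the paper extends. The sketch below is an explicit-state illustration with an invented lasso-shaped system, not the SMT-based IC3 integration: an accepting condition is visited at most K times iff a counter-augmented system never exceeds K, which is a pure safety check.

    # Explore (state, visit-count) pairs; reaching count > K refutes the
    # bound K, while exhausting the product space proves it.

    def visits_bounded_by(succ, init, accepting, K):
        frontier = {(s, 1 if s in accepting else 0) for s in init}
        seen = set(frontier)
        while frontier:
            new = set()
            for s, c in frontier:
                for t in succ(s):
                    c2 = c + (1 if t in accepting else 0)
                    if c2 > K:
                        return False      # bound K refuted
                    if (t, c2) not in seen:
                        seen.add((t, c2))
                        new.add((t, c2))
            frontier = new
        return True                        # safety proof: at most K visits

    # Toy lasso: 0 -> 1 -> 2 -> 2 with accepting = {1}, so K = 1 suffices.
    succ = lambda s: [min(s + 1, 2)]
    for K in range(3):
        print(K, visits_bounded_by(succ, init={0}, accepting={1}, K=K))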

17:10-18:00 Session 131: Invited Talk
Location: FH, Hörsaal 1
17:10
Automated Testing (preliminary title)