VSL 2014: VIENNA SUMMER OF LOGIC 2014
CAV ON SATURDAY, JULY 19TH, 2014

08:45-10:15 Session 106A: FLoC Panel (joint with 9 other meetings)
Location: FH, Hörsaal 1
08:45
FLoC Panel: Computational Complexity and Logic: Theory vs. Experiments
SPEAKER: unknown

ABSTRACT. The success of SAT solving, Answer Set Programming, and Model Checking has changed our view of computational complexity and (un)decidability. Program committees are increasingly discussing the value of theoretical and empirical complexity evaluations. What is the intellectual value of a completeness result? Which standards do we apply to reproducibility of experiments? What is the role of competitions? What are the minimal requirements for an experimental proof of concept?

10:15-10:45 Coffee Break
10:45-13:05 Session 109A: Software Verification
Location: FH, Hörsaal 1
10:45
The Spirit of Ghost Code
SPEAKER: unknown

ABSTRACT. In the context of deductive program verification, ghost code is part of the program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without any observable difference in the program outcome. In particular, ghost data cannot participate in regular computations, and ghost code cannot mutate regular data or diverge. The idea has existed in the folklore since the early notion of auxiliary variables and is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle, and few formalizations exist.

In this article, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3.
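
To make the erasure guarantee concrete, here is a minimal sketch (in Python rather than the paper's ML-style language, and not Why3's actual mechanism): statements are tagged as ghost or regular, erasure drops the ghost ones, and non-interference means the regular outcome is unchanged.

```python
# Illustrative sketch only: statements are tagged ghost or regular, and
# erasure simply skips the ghost ones.  Non-interference means the
# observable (regular) state is identical either way.

def run(stmts, erase_ghost=False):
    regular, ghost = {"x": 0}, {"spec_sum": 0}
    for is_ghost, stmt in stmts:
        if is_ghost and erase_ghost:
            continue          # erased ghost code is simply skipped
        stmt(regular, ghost)  # ghost stmts may read regular data ...
    return regular            # ... but never write it

prog = [
    (False, lambda r, g: r.update(x=r["x"] + 1)),                     # regular
    (True,  lambda r, g: g.update(spec_sum=g["spec_sum"] + r["x"])),  # ghost
    (False, lambda r, g: r.update(x=r["x"] * 2)),                     # regular
]

# Erasure leaves the observable (regular) outcome unchanged.
assert run(prog) == run(prog, erase_ghost=True) == {"x": 2}
```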

11:05
SMT-based Model Checking for Recursive Programs

ABSTRACT. We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both "over-" and "under-approximations" of procedure summaries. Under-approximations are used to propagate information over procedure calls. Over-approximations are used to block infeasible counterexamples and detect convergence. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample, if one exists. However, efficiency depends on an oracle for quantifier elimination (QE). For Boolean Programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For Linear Arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE "lazily". We use existing interpolation techniques to over-approximate QE and introduce "Model-Based Projection" to under-approximate QE. Empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state of the art.
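
As a hedged illustration of maintaining both approximations, the toy below computes a recursive procedure's summary over a tiny finite domain by growing an under-approximation from below and shrinking an over-approximation from above until they meet; the actual algorithm does this symbolically with SMT and quantifier elimination, which this sketch does not model.

```python
# Toy analogy (finite domain, exact fixpoints), not the paper's algorithm.
DOM = range(3)  # f(x) = 0 if x == 0 else f(x - 1), summarized as a relation

def step(S):
    """One round of the procedure's semantics, using S for recursive calls."""
    return {(0, 0)} | {(x, o) for x in DOM if x > 0 for (i, o) in S if i == x - 1}

under = set()                                  # under-approx: grow from below
over = {(x, o) for x in DOM for o in DOM}      # over-approx: shrink from above
while True:
    under, over = step(under), step(over)
    if under == over:                          # convergence detected
        break
print(sorted(under))   # the exact summary: [(0, 0), (1, 0), (2, 0)]
```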

11:25
Property-Directed Shape Analysis

ABSTRACT. This paper addresses the problem of automatically generating quantified invariants for programs that manipulate singly- and doubly-linked list data structures. Our algorithm is property-directed -- i.e., its choices are driven by the properties to be proven. The algorithm is able to establish that a correct program has no memory-safety violations -- i.e., there are no null-pointer dereferences, no memory leaks, etc. -- and that data-structure invariants are preserved. For programs with errors, the algorithm produces concrete counterexamples.

More broadly, the paper describes how to integrate IC3/PDR with full predicate abstraction. The analysis method is complete in the following sense: if an inductive invariant that proves that the program satisfies a given property is expressible as a Boolean combination of a given set of predicates, then the analysis will find such an invariant. To the best of our knowledge, this method represents the first shape-analysis algorithm that is capable of (i) reporting concrete counterexamples, or alternatively (ii) establishing that the predicates in use are not capable of proving the property in question.
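
A minimal sketch of the completeness claim, under invented toy data: if some Boolean combination of the supplied predicates is an inductive invariant, even naive enumeration of the abstract domain will find it (IC3/PDR searches this space far more cleverly).

```python
from itertools import chain, combinations

# Hypothetical toy: x := 0; loop { x := x + 2 }   (domain cut off at 10)
# Property: x never equals 5.  Predicates supplied to the analysis:
preds = [lambda x: x % 2 == 0, lambda x: x < 10]
states = range(10)
init = {0}
def post(x): return {x + 2} if x + 2 < 10 else set()
def safe(x): return x != 5

def abstract(x):                      # predicate valuation of a state
    return tuple(p(x) for p in preds)

# An invariant is any Boolean combination of the predicates, i.e. any set
# of predicate valuations; enumerate all of them (complete, as in the paper).
cells = set(map(abstract, states))
for inv in chain.from_iterable(combinations(cells, k) for k in range(len(cells) + 1)):
    inv = set(inv)
    holds = lambda x: abstract(x) in inv
    if (all(holds(x) for x in init)                                      # initiation
            and all(holds(y) for x in states if holds(x) for y in post(x))  # induction
            and all(safe(x) for x in states if holds(x))):               # safety
        print("inductive invariant found:", inv)   # here: "x is even"
        break
```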

11:45
Shape Analysis via Second-Order Bi-Abduction
SPEAKER: unknown

ABSTRACT. We present a new modular shape analysis that can synthesize heap memory specifications on a per-method basis. We rely on a second-order bi-abduction mechanism that can give interpretations to unknown shape predicates. There are several novel features in our shape analysis. Firstly, it is grounded on second-order bi-abduction. Secondly, we distinguish unknown pre-predicates in preconditions from unknown post-predicates in postconditions, since the former may be strengthened while the latter may be weakened. Thirdly, we provide a new heap guard mechanism to support more precise preconditions for heap specification. Lastly, we formalise a set of derivation and normalization rules to give concise definitions for unknown predicates. Our approach has been proven sound and is implemented on top of an existing automated verification system. We show its versatility in synthesizing a wide range of intricate shape specifications.

12:05
ICE: A Robust Learning Framework for Synthesizing Invariants
SPEAKER: unknown

ABSTRACT. We introduce ICE learning, a new paradigm for synthesizing invariants using black-box learning from examples, counterexamples, and implications, and show that it allows building honest teachers and convergent mechanisms for invariant synthesis. We observe that existing algorithms for black-box abstract interpretation can be interpreted as monotonic ICE learning algorithms. We also develop two classes of new non-monotonic ICE learning domains, one for learning numerical invariants for scalar variables and one for learning quantified invariants for arrays and dynamic lists, and establish convergence results for them. We implement these ICE learning algorithms in a prototype verifier and show that the robustness they bring is practical and effective.
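
The following toy sketch (invented loop and hypothesis class, not the paper's algorithms) shows the ICE interaction: the teacher answers each candidate with an example, a counterexample, or an implication, and the learner stays consistent with all three kinds of feedback.

```python
# Minimal ICE sketch for the loop  x := 0; while x < 5: x := x + 1
# with property x <= 5.  Hypothesis class: invariants of the form x <= c.
DOM = range(12)
def init(x):  return x == 0
def trans(x): return x + 1 if x < 5 else None
def prop(x):  return x <= 5

def teacher(inv):                     # honest teacher: three answer kinds
    for x in DOM:
        if init(x) and not inv(x):          return ("example", x)
        if inv(x) and not prop(x):          return ("counterexample", x)
        y = trans(x)
        if y is not None and inv(x) and not inv(y):
            return ("implication", (x, y))
    return None                             # inv is inductive and safe

pos, neg, imp = set(), set(), set()
while True:
    for c in DOM:                           # smallest consistent threshold
        inv = lambda x, c=c: x <= c
        if (all(inv(p) for p in pos) and not any(inv(n) for n in neg)
                and all(inv(q) for p, q in imp if inv(p))):
            break
    fb = teacher(inv)
    if fb is None:
        print("invariant learned: x <=", c)  # converges to x <= 5
        break
    kind, data = fb
    {"example": pos.add, "counterexample": neg.add,
     "implication": imp.add}[kind](data)
```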

12:25
From Invariant Checking to Invariant Inference Using Randomized Search
SPEAKER: Rahul Sharma

ABSTRACT. We describe a general framework C2I for generating an invariant inference procedure from an invariant checking procedure. Given a checker and a language of possible invariants, C2I generates an inference procedure that iteratively invokes two phases. The search phase uses randomized search to discover candidate invariants and the validate phase uses the checker to either prove or refute that the candidate is an actual invariant. To demonstrate the applicability of C2I, we use it to generate inference procedures that prove safety properties of numerical programs, prove non-termination of numerical programs, prove functional specifications of array manipulating programs, prove safety properties of string manipulating programs, and prove functional specifications of heap manipulating programs that use linked list data structures. We show that performance is competitive with state-of-the-art invariant inference techniques for more specialized domains. To the best of our knowledge, this work is the first to demonstrate such a generally applicable invariant inference technique.
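
A hedged miniature of the C2I loop on an invented program (x, y := 0, 0; while x < 10: x += 1; y += 2): the search phase proposes random template coefficients and the validate phase checks initiation and consecution, here by brute force over a bounded state space rather than with a real checker.

```python
import random
random.seed(1)

# Candidates: a*x + b*y + c == 0 with small random integer coefficients.
def check(a, b, c):   # bounded "checker": initiation plus consecution
    if a * 0 + b * 0 + c != 0:
        return False                              # initiation fails
    for x in range(10):
        for y in range(-5, 25):                   # any state in the candidate
            if a * x + b * y + c == 0:
                if a * (x + 1) + b * (y + 2) + c != 0:
                    return False                  # not inductive
    return True

while True:                                       # search phase
    a, b, c = (random.randint(-3, 3) for _ in range(3))
    if (a, b) != (0, 0) and check(a, b, c):       # validate phase
        print(f"invariant: {a}*x + {b}*y + {c} == 0")  # e.g. 2*x - y == 0
        break
```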

12:45
SMACK: Decoupling Source Language Details from Verifier Implementations

ABSTRACT. A major obstacle to putting software verification research into practice is the high cost of developing the infrastructure enabling the application of verification algorithms to actual production code, in all of its complexity. Handling an entire programming language is a huge endeavor which few researchers are willing to undertake, and even fewer could invest the effort to implement their verification algorithms for many source languages. To decouple the implementations of verification algorithms from the details of source languages, and enable rapid prototyping on production code, we have developed SMACK. At its core, SMACK is a translator from the LLVM compiler's popular intermediate representation (IR) into the Boogie intermediate verification language (IVL). Sourcing LLVM IR exploits an increasing number of compiler frontends, optimizations, and analyses. Targeting Boogie exploits a canonical platform which simplifies the implementation of algorithms for verification, model checking, and abstract interpretation. Our initial experience verifying C language programs is encouraging: SMACK is competitive in SV-COMP benchmarks, is able to translate large programs (100 KLOC), and is used in several verification research prototypes.
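
Purely as an illustration of the translation direction (this is not SMACK's code, and real LLVM-to-Boogie translation must model memory, control flow, and procedures), here is a toy mapping of two straight-line LLVM IR instructions into Boogie-like assignments.

```python
# Toy illustration only -- not SMACK's code.  LLVM's %-prefixed names are
# kept verbatim here for readability, though Boogie identifiers differ.
def translate(ir_line):
    tok = ir_line.replace(",", "").split()
    if tok[2] == "add":                        # %x = add i32 %a, %b
        return f"{tok[0]} := {tok[4]} + {tok[5]};"
    if tok[2] == "icmp" and tok[3] == "slt":   # %c = icmp slt i32 %a, %b
        return f"{tok[0]} := ({tok[5]} < {tok[6]});"
    raise NotImplementedError(ir_line)

for line in ["%sum = add i32 %a, %b", "%lt = icmp slt i32 %sum, %n"]:
    print(translate(line))
# prints:  %sum := %a + %b;
#          %lt := (%sum < %n);
```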

12:55
Syntax-Guided Synthesis Competition Results
SPEAKER: Rajeev Alur

10:45-12:45 Session 109D: FLoC Inter-Conference Topic: Security. Software Security (joint with CSF)
Location: FH, Hörsaal 6
10:45
Declarative Policies for Capability Control
SPEAKER: unknown

ABSTRACT. In capability-safe languages, components can access a resource only if they possess a capability for that resource. As a result, a programmer can prevent an untrusted component from accessing a sensitive resource by ensuring that the component never acquires the corresponding capability. In order to reason about which components may use a sensitive resource, it is necessary to reason about how capabilities propagate through a system. This may be difficult, or, in the case of dynamically composed code, impossible to do before running the system.

To counter this situation, we propose extensions to capability-safe languages that restrict the use of capabilities according to declarative policies. We introduce two independently useful semantic security policies to regulate capabilities and describe language-based mechanisms that enforce them. Access control policies restrict which components may use a capability and are enforced using higher-order contracts. Integrity policies restrict which components may influence (directly or indirectly) the use of a capability and are enforced using an information-flow type system. Finally, we describe how programmers can dynamically and soundly combine components that enforce access control or integrity policies with components that enforce different policies or even no policy at all.
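
A small Python sketch of the access-control half of the idea (names invented; the paper works in a capability-safe language with higher-order contracts): a capability is wrapped in a contract that records which components may invoke it.

```python
# Hedged sketch: a capability (here, a writer function) is wrapped in a
# contract naming the components allowed to use it.
class ContractViolation(Exception): pass

def with_access_policy(capability, allowed, component):
    def guarded(*args):
        if component not in allowed:
            raise ContractViolation(f"{component} may not use this capability")
        return capability(*args)
    return guarded

log = []
append_cap = log.append
for_trusted = with_access_policy(append_cap, {"auditor"}, "auditor")
for_plugin  = with_access_policy(append_cap, {"auditor"}, "plugin")

for_trusted("ok")          # permitted component
try:
    for_plugin("sneaky")   # untrusted component: the contract blocks the call
except ContractViolation as e:
    print(e)
print(log)                 # ['ok']
```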

11:15
Portable Software Fault Isolation
SPEAKER: Joshua Kroll

ABSTRACT. We present a set of novel techniques for architecture-portable software fault isolation (SFI), together with a working, proved-sound prototype in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But as with traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems. Specifically, we prove that any program output by our transformer will be safe with respect to a modified Cminor operational semantics that respects the SFI policy. Further, we arrange that this safety extends to assembly code when our transformer is composed with CompCert. Finally, we argue that our transformer does not modify the observable behavior of safe executions in the Cminor semantics.
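
For intuition, here is the classic SFI address-masking idea in a few lines of Python (a generic textbook sketch, not the paper's Cminor transformer): addresses are forced into a power-of-two-sized sandbox region before every store.

```python
# Classic SFI-style address sandboxing: out-of-bounds writes land
# harmlessly inside the sandbox instead of corrupting the host.
SANDBOX_BASE = 0x10000000          # assumed region: base and power-of-two size
SANDBOX_SIZE = 0x00100000          # 1 MiB, so the offset mask below is valid
OFFSET_MASK  = SANDBOX_SIZE - 1

memory = {}

def sandboxed_store(addr, value):
    safe = SANDBOX_BASE | (addr & OFFSET_MASK)   # force addr into the region
    memory[safe] = value

sandboxed_store(0x10000010, "in bounds")      # unchanged: already inside
sandboxed_store(0xDEADBEEF, "out of bounds")  # redirected into the sandbox
assert all(SANDBOX_BASE <= a < SANDBOX_BASE + SANDBOX_SIZE for a in memory)
```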

11:45
Certificates for Verifiable Forensics
SPEAKER: Corin Pitcher

ABSTRACT. Digital forensics reports typically document the search process that has led to a conclusion; the primary means to verify the report is to repeat the search process. We believe that, as a result, the Trusted Computing Base for digital forensics is unnecessarily large and opaque.

We advocate the use of forensic certificates as intermediate artifacts between search and verification. Because a forensic certificate has a precise semantics, it can be verified without knowledge of the search process and forensic tools used to create it. In addition, this precision opens up avenues for the analysis of forensic specifications. We present a case study using the specification of a “deleted” file.

We propose a verification architecture that addresses the enormous size of digital forensics data sets. As a proof of concept, we consider a computer intrusion case study, drawn from the Honeynet project. Our Coq formalization yields a verifiable certificate of the correctness of the underlying forensic analysis. The Coq development for this case study is available at http://fpl.cs.depaul.edu/projects/forensics/.

12:15
Information flow monitoring as abstract interpretation for relational logic

ABSTRACT. A number of systems have been developed for dynamic information flow control (IFC). In such systems, the security policy is expressed by labeling input and output channels; it is enforced by tracking and checking labels on data. Some systems have been proved to enforce some form of noninterference (NI), formalized as a property of two runs of the program. In practice, NI is too strong and it is desirable to enforce some relaxation of NI that allows downgrading under constraints that have been classified as 'what', 'where', 'who', or 'when' policies. To encompass a broad range of policies, relational logic has been proposed as a means to specify and statically enforce policy. This paper shows how relational logic policies can be dynamically checked. To do so, we provide a new account of monitoring, in which the monitor state is viewed as an abstract interpretation of certain pairs of program runs.
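
As a simplified sketch of dynamic IFC monitoring (single-run label tracking only; the paper's contribution is to view the monitor state as an abstract interpretation of pairs of runs, which this does not capture):

```python
# Values carry labels, operations join labels, and output channels check them.
LEVELS = {"public": 0, "secret": 1}

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label
    def __add__(self, other):          # label of a result joins the inputs
        lab = max(self.label, other.label, key=LEVELS.get)
        return Labeled(self.value + other.value, lab)

def output(channel_label, lv):
    if LEVELS[lv.label] > LEVELS[channel_label]:
        raise RuntimeError("flow violation: secret data on public channel")
    print(lv.value)

salary = Labeled(70000, "secret")
bonus  = Labeled(5000, "public")
output("secret", salary + bonus)       # fine: secret sink
try:
    output("public", salary + bonus)   # the monitor stops this flow
except RuntimeError as e:
    print(e)
```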

13:00-14:30 Lunch Break
14:30-16:00 Session 113A: FLoC Inter-Conference Topic: Security (joint with CSF)
Location: FH, Hörsaal 1
14:30
Synthesis of Masking Countermeasures against Side Channel Attacks
SPEAKER: unknown

ABSTRACT. When mathematicians and computer scientists design cryptographic algorithms, they assume that sensitive information can be manipulated in a closed computing environment. Unfortunately, real computers and microchips leak information about the software code they execute, e.g. through power dissipation and electromagnetic radiation. Such side channel leaks have been exploited in many commercial systems in the embedded space. In this paper, we propose a counterexample-guided inductive synthesis method for generating software countermeasures for implementations of cryptographic algorithms. Our new method guarantees that the resulting software code is "perfectly masked", that is, all intermediate computation results are statistically independent of the secret data. We have implemented our method based on the LLVM compiler and the Yices SMT solver. Our experiments on a set of cryptographic software code show that the new method is both effective in eliminating side channel leaks and scalable for practical use.
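
For intuition about what "perfectly masked" means, here is textbook first-order Boolean masking (a generic sketch, not the synthesized countermeasures of the paper): every secret bit is split into two shares, and fresh randomness keeps each intermediate value statistically independent of the secret.

```python
import secrets

def share(bit):                       # split a secret bit into two shares
    r = secrets.randbits(1)
    return r, bit ^ r                 # each share alone is uniformly random

def masked_xor(a, b):                 # XOR is linear: act share-wise
    return a[0] ^ b[0], a[1] ^ b[1]

def masked_and(a, b):                 # AND needs a fresh random bit r
    r = secrets.randbits(1)
    c0 = (a[0] & b[0]) ^ r
    c1 = (a[1] & b[1]) ^ (a[0] & b[1]) ^ (a[1] & b[0]) ^ r
    return c0, c1

x, y = share(1), share(0)
for op, ref in [(masked_xor, 1 ^ 0), (masked_and, 1 & 0)]:
    s0, s1 = op(x, y)
    assert s0 ^ s1 == ref             # recombined result is correct
```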

14:50
Temporal Mode-Checking for Runtime Monitoring of Privacy Policies
SPEAKER: Limin Jia

ABSTRACT. Fragments of first-order temporal logic are useful for representing many practical privacy and security policies. Past work has proposed two strategies for checking event trace (audit log) compliance with policies: online monitoring and offline audit. Although online monitoring is space- and time-efficient, existing techniques insist that satisfying instances of all subformulas of the policy be amenable to caching, which limits expressiveness when some subformulas have infinite support. In contrast, offline audit is brute-force and can handle such policies but is not as efficient. This paper proposes a new online monitoring algorithm that caches satisfying instances when it can, and falls back to the brute-force search when it cannot. Our key technical insight is a new static check, called the temporal mode check, which determines subformulas for which such caching is feasible and those for which it is not and, hence, guides our algorithm. We prove the correctness of our algorithm, and evaluate its performance over synthetic traces and realistic policies.

15:10
Program Verification through String Rewriting
SPEAKER: unknown

ABSTRACT. We consider the problem of verifying programs operating on strings, in combination with other datatypes like arithmetic or heap. As a tool to check verification conditions, analyse feasibility of execution paths, or build finite abstractions, we present a new decision procedure for a rich logic including strings. Main applications of our framework include, in particular, detection of vulnerabilities of web applications such as SQL injections and cross-site scripting.

15:30
A Conference Management System with Verified Document Confidentiality

ABSTRACT. We present a case study in verified security for realistic systems: the implementation of a conference management system, whose functional kernel is faithfully represented in the Isabelle theorem prover, where we specify and verify confidentiality properties. The various theoretical and practical challenges posed by this development led to a novel security model and verification method generally applicable to systems describable as input-output automata.

15:50
VAC - Verifier of Administrative Role-based Access Control Policies

ABSTRACT. In this paper we present VAC, an automatic tool for verifying security properties of administrative Role-based Access Control (RBAC). RBAC has become an increasingly popular access control model, particularly suitable for large organizations, and it is implemented in several software systems. Automatic security analysis of administrative RBAC systems is recognized as an important problem, as an analysis tool can help designers check whether their policies meet expected security properties. VAC converts administrative RBAC policies to imperative programs that simulate the policies both precisely and abstractly, and supports several automatic verification back-ends to analyze the resulting programs. In this paper, we describe the architecture of VAC and overview the analysis techniques that have been implemented in the tool. We also report on experiments with several benchmarks from the literature.
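
A hedged sketch of the policy-to-program idea with invented rules: the role-assignment state evolves under can_assign administrative actions, and a security question becomes a reachability (assertion) check on the resulting program.

```python
# can_assign: (administrator role, precondition roles, target role)
CAN_ASSIGN = [("Manager", {"Employee"}, "Auditor"),
              ("Manager", {"Auditor"}, "Admin")]

def reachable(start_roles, goal):
    state = {frozenset(start_roles)}
    frontier = [frozenset(start_roles)]
    while frontier:                       # explore all administrative actions
        roles = frontier.pop()
        for _admin, pre, target in CAN_ASSIGN:
            if pre <= roles and target not in roles:
                new = roles | {target}
                if new not in state:
                    state.add(new)
                    frontier.append(new)
    return any(goal in roles for roles in state)

# "Can a plain Employee ever become Admin?" -- an assertion-violation query
print(reachable({"Employee"}, "Admin"))   # True: Employee -> Auditor -> Admin
```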

14:30-16:00 Session 113C: FLoC Inter-Conference Topic: SAT/SMT/QBF (joint with IJCAR)
Location: FH, Hörsaal 5
14:30
SAT-based Decision Procedure for Analytic Pure Sequent Calculi
SPEAKER: Yoni Zohar

ABSTRACT. We identify a wide family of analytic sequent calculi for propositional non-classical logics whose derivability problem can be uniformly reduced to SAT. The proposed reduction is based on interpreting these calculi using non-deterministic semantics. Its time complexity is polynomial, and, in fact, linear for a useful subfamily. We further study an extension of such calculi with "Next" operators, and show that this extension preserves analyticity and is subject to a similar reduction to SAT. A particularly interesting instance of these results is a HORNSAT-based linear-time decision procedure for Gurevich and Neeman's primal infon logic and several natural extensions of it.

15:00
A Unified Proof System for QBF Preprocessing
SPEAKER: Marijn Heule

ABSTRACT. For quantified Boolean formulas (QBFs), preprocessing is essential to solve many real-world formulas. Application of a preprocessor, however, has so far prevented the extraction of proofs that independently validate the correctness of the solver's result; for universal expansion in particular, proof checking was not possible at all. In this paper, we introduce a unified proof system based on three simple and elegant quantified asymmetric tautology (QRAT) rules. In combination with an extended version of universal reduction, they are sufficient to efficiently express all preprocessing techniques used in state-of-the-art preprocessors, including universal expansion. Moreover, these rules give rise to new preprocessing techniques. We equip our preprocessor bloqqer with QRAT proof logging and provide a proof checker for QRAT proofs.

15:30
The Fractal Dimension of SAT Formulas

ABSTRACT. Modern SAT solvers have made remarkable progress on solving industrial instances. Most of the techniques have been developed after an intensive experimental testing process. It is believed that these techniques exploit the underlying structure of industrial instances. However, there is no precise definition of this notion of structure. Recently, there have been some attempts to analyze the structure of these formulas in terms of complex networks, with the long-term aim of explaining the success of these SAT solving techniques, and possibly improving them. We study the fractal dimension of SAT instances with the aim of complementing the model that describes the structure of industrial instances. We show that many industrial families of formulas are self-similar, with a small fractal dimension. We also show how this dimension is affected by the addition of learnt clauses during the execution of SAT solvers.
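
The box-counting idea can be sketched as follows (toy formula and a greedy covering heuristic, not the paper's exact method): build the formula's variable-incidence graph, count how many balls of radius r are needed to cover it, and read the dimension off the slope of log N(r) versus log r.

```python
from collections import deque

# Toy formula (a cycle of binary clauses); real industrial instances are
# where self-similarity shows up.
clauses = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (1, 8)]
adj = {}
for c in clauses:
    for u in c:
        for v in c:
            if u != v:
                adj.setdefault(u, set()).add(v)

def ball(center, r):                 # all variables within distance r (BFS)
    seen, frontier = {center}, deque([(center, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d < r:
            for nxt in adj[node] - seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

def n_balls(r):                      # greedy covering: number of balls needed
    uncovered, count = set(adj), 0
    while uncovered:
        uncovered -= ball(next(iter(uncovered)), r)
        count += 1
    return count

for r in (1, 2, 3):
    print(f"N({r}) = {n_balls(r)}")
# A log-log fit of N(r) against r gives the fractal dimension estimate.
```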

16:00-16:30 Coffee Break
16:30-17:10 Session 116A: Automata
Location: FH, Hörsaal 1
16:30
From LTL to Deterministic Automata: A Safraless Compositional Approach
SPEAKER: Jan Kretinsky

ABSTRACT. We present a new algorithm to construct a deterministic Rabin automaton for an LTL formula φ. The automaton is the product of a master automaton and an array of slave automata, one for each G-subformula of φ. The slave automaton for Gψ is in charge of recognizing whether FGψ holds. As opposed to standard determinization procedures, the states of all our automata have a clear logical structure, which allows various optimizations to be applied. Our construction subsumes former algorithms for fragments of LTL. Experimental results show improvement in the sizes of the resulting automata compared to existing methods.
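
A trivial sketch of one ingredient, the extraction of G-subformulas from an LTL syntax tree (each would get its own slave automaton); the construction itself is far more involved.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    op: str                 # 'ap', 'not', 'and', 'or', 'X', 'F', 'G', 'U'
    kids: tuple = ()
    name: str = ""

def g_subformulas(f):
    subs = set()
    if f.op == "G":
        subs.add(f)
    for k in f.kids:
        subs |= g_subformulas(k)
    return subs

# phi = (G a) \/ (F G b): two slaves, one for G a and one for G b
a, b = Node("ap", name="a"), Node("ap", name="b")
phi = Node("or", (Node("G", (a,)), Node("F", (Node("G", (b,)),))))
print(len(g_subformulas(phi)))      # 2
```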

16:50
Symbolic Visibly Pushdown Automata

ABSTRACT. Nested words model data with both linear and hierarchical structure such as XML documents and program traces. A nested word is a sequence of positions together with a matching relation that connects open tags (calls) with the corresponding close tags (returns). Visibly Pushdown Automata are a restricted class of pushdown automata that process nested words, and have many appealing theoretical properties such as closure under Boolean operations and decidable equivalence. However, like classical automata models, they are limited to finite alphabets. This limitation is restrictive for practical applications to both XML processing and program trace analysis, where values for individual symbols are usually drawn from an unbounded domain. With this motivation, we introduce Symbolic Visibly Pushdown Automata (SVPA) as an executable model for nested words over infinite alphabets. In this model, transitions are labeled with first-order predicates over the input alphabet, analogous to symbolic automata processing strings over infinite alphabets. A key novelty of SVPAs is the use of binary predicates to model relations between open and close tags in a nested word. We show how SVPAs still enjoy the decidability and closure properties of Visibly Pushdown Automata. We use SVPAs to model XML validation policies and program properties that are not naturally expressible with previous formalisms and provide experimental results on the performance of our implementation.
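
The distinguishing feature, binary predicates relating a return symbol to its matching call symbol, can be sketched with a stack in a few lines (an illustration of the semantics, not the paper's algorithms for closure and decidability):

```python
# Call transitions push the input symbol; return transitions check a
# *binary* predicate against the matching call symbol (e.g. an XML close
# tag must match its open tag).
def accepts(nested_word, call_pred, return_pred, match_pred):
    stack = []
    for kind, sym in nested_word:
        if kind == "call":
            if not call_pred(sym):
                return False
            stack.append(sym)              # remember the open symbol
        elif kind == "return":
            if not stack or not return_pred(sym):
                return False
            if not match_pred(stack.pop(), sym):   # binary predicate
                return False
    return not stack                       # all calls matched

word = [("call", "<a>"), ("call", "<b>"), ("return", "</b>"), ("return", "</a>")]
print(accepts(word,
              call_pred=lambda s: s.startswith("<"),
              return_pred=lambda s: s.startswith("</"),
              match_pred=lambda o, c: c == "</" + o[1:]))   # True
```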

16:30-17:00 Session 116C: FLoC Inter-Conference Topic: SAT/SMT/QBF (joint with IJCAR)
Location: FH, Hörsaal 5
16:30
A Gentle Non-Disjoint Combination of Satisfiability Procedures
SPEAKER: unknown

ABSTRACT. A satisfiability problem is very often expressed in a combination of theories, and a very natural approach consists in solving the problem by combining the satisfiability procedures available for the component theories. This is the purpose of the combination method introduced by Nelson and Oppen. However, in its initial presentation, the Nelson-Oppen combination method requires the theories to be signature-disjoint and stably infinite (to guarantee the existence of an infinite model). The notion of gentle theory has been introduced in the last few years as one solution to go beyond the restriction of stable infiniteness, but in the case of disjoint theories. In this paper, we adapt the notion of gentle theory to the non-disjoint combination of theories sharing only unary predicates (plus constants and the equality). As in the disjoint case, combining two theories, one of them being gentle, requires some minor assumptions on the other one. We show that major classes of theories, i.e., Löwenheim and Bernays-Schönfinkel-Ramsey, satisfy the appropriate notion of gentleness introduced for this particular non-disjoint combination framework.

17:10-18:00 Session 119A: Invited Talk
Location: FH, Hörsaal 1
17:10
Designing and verifying molecular circuits and systems made of DNA
SPEAKER: Erik Winfree

ABSTRACT. Inspired by the information processing core of biological organisms and its ability to fabricate intricate machinery from the molecular scale up to the macroscopic scale, research in synthetic biology, molecular programming, and nucleic acid nanotechnology aims to create information-based chemical systems that carry out human-defined molecular programs that input, output, and manipulate molecules and molecular structures. For chemistry to become the next information technology substrate, we will need improved tools for designing, simulating, and analyzing complex molecular circuits and systems. Using DNA nanotechnology as a model system, I will discuss how programming languages can be devised for specifying molecular systems at a high level, how compilers can translate such specifications into concrete molecular implementations, how both high-level and low-level specifications can be simulated and verified according to behavioral logic and the underlying biophysics of molecular interactions, and how Bayesian analysis techniques can be used to understand and predict the behavior of experimental systems that, at this point, still inevitably contain many ill-characterized components and interactions.

17:30-18:30 Session 120: FLoC Inter-Conference Topic: Security. Information Flow 1 (joint with CSF)
Location: FH, Hörsaal 6
17:30
On Dynamic Flow-sensitive Floating-Label Systems
SPEAKER: unknown

ABSTRACT. Flow-sensitive analysis for information-flow control (IFC) allows data structures to have mutable security labels, i.e., labels that can change over the course of the computation. This feature is beneficial for several reasons: it often boosts permissiveness, reduces the burden of explicit annotations, and enables reuse of single-threaded data structures. However, in a purely dynamic setting, mutable labels can expose a covert channel capable of leaking secrets at a high rate. We present an extension for LIO, a language-level floating-label system, that safely handles flow-sensitive references. The key insight to safely manipulating the label of a reference is to consider not only the label on the data stored in the reference, i.e., the reference label, but also the label on the reference label itself. Taking this into consideration, we provide an upgrade primitive that can be used to change the label of a reference when deemed safe. To eliminate the burden of determining when it is safe to upgrade a reference, we additionally provide a mechanism for automatic upgrades. Our approach naturally extends to a concurrent setting not previously considered by flow-sensitive systems. For both our sequential and concurrent calculi we prove non-interference by embedding the flow-sensitive system into the flow-insensitive LIO calculus; the embedding itself is a surprising result.
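
A hedged sketch of the key insight (invented two-point lattice and names; LIO is a Haskell library, sketched here in Python): a flow-sensitive reference carries a label on its data and a label on that label, and an upgrade is permitted only when the current context may influence the label itself.

```python
ORDER = {"public": 0, "secret": 1}
def flows(l1, l2): return ORDER[l1] <= ORDER[l2]

class FSRef:
    def __init__(self, value, label, label_label):
        self.value = value
        self.label = label               # label on the stored data
        self.label_label = label_label   # label on the label itself

    def upgrade(self, pc, new_label):
        # The running context (pc) writes to the label, so pc must flow
        # into the label on the label; otherwise the change itself leaks.
        if not flows(pc, self.label_label):
            raise RuntimeError("unsafe upgrade: label change would leak")
        if flows(self.label, new_label):
            self.label = new_label

r = FSRef(42, "public", "secret")
r.upgrade("secret", "secret")       # ok: a secret context may touch this label
low = FSRef(0, "public", "public")
try:
    low.upgrade("secret", "secret") # blocked: would leak via the label itself
except RuntimeError as e:
    print(e)
```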

18:00
Noninterference under Weak Memory Models
SPEAKER: unknown

ABSTRACT. Research on information flow security for concurrent programs usually assumes sequential consistency although modern multi-core processors often support weaker consistency guarantees. In this article, we clarify the impact that relaxations of sequential consistency have on information flow security. We consider four memory models and prove for each of them that information flow security under this model does not imply information flow security in any of the other models. These results suggest that research on security needs to pay more attention to the consistency guarantees provided by contemporary hardware. The other main technical contribution of this article is a program transformation that soundly enforces information flow security under different memory models. This program transformation is significantly less restrictive than a transformation that first establishes sequential consistency and then applies a traditional information flow analysis for concurrent programs.

19:00-21:30 Session 122: VSL Reception 2
Location: University of Vienna, Arkadenhof
22:00-23:59 Session 123: VSL Student Reception 2
Location: Säulenhalle (Volksgarten)