LPAR-21: Papers with Abstracts

Papers
Abstract. We present a formal model for event-processing pipelines. Event-processing pipelines appear in a large number of domains, from the control of cyber-physical systems (CPS) to large-scale data analysis to Internet-of-Things applications. These applications are characterized by stateful transformations of event streams, for example, for the purposes of sensing, computation, and actuation of inner control loops in CPS applications, and for data cleaning, analysis, training, and querying in data analytics applications.
Our formal model provides two abstractions: streams of data, and stateful, probabilistic filters, which transform input streams into output streams probabilistically. Programs are compositions of filters. The filters are scheduled and run by an explicit, asynchronous scheduler.
We provide a transition system semantics for such programs based on infinite-state Markov decision processes. We characterize when a program is scheduler-independent, that is, provides the same observable behavior under every scheduler, based on local commutativity.
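As an illustration of the two abstractions (a sketch under our own naming, not taken from the paper), the following Python fragment composes a probabilistic lossy filter with a stateful summing filter and drives them with an explicit, nondeterministic scheduler:

```python
import random

class Filter:
    """A stateful filter: consumes items from an input queue, emits to an output queue."""
    def __init__(self, step, state=None):
        self.step = step          # (state, item) -> (new_state, [outputs])
        self.state = state
        self.inbox = []
        self.outbox = []

    def fire(self):
        if not self.inbox:
            return False
        self.state, outs = self.step(self.state, self.inbox.pop(0))
        self.outbox.extend(outs)
        return True

# A probabilistic filter: drops each event with probability 0.1 (a lossy sensor).
lossy = Filter(lambda s, x: (s, [] if random.random() < 0.1 else [x]))
# A stateful filter: running sum of the events seen so far.
summer = Filter(lambda s, x: (s + x, [s + x]), state=0)

lossy.inbox.extend(range(10))

# An explicit, asynchronous scheduler: repeatedly picks any enabled filter and fires it.
pipeline = [lossy, summer]
while any(f.inbox for f in pipeline):
    f = random.choice([f for f in pipeline if f.inbox])
    f.fire()
    if f is lossy:                      # wire lossy's output into summer's input
        summer.inbox.extend(lossy.outbox)
        lossy.outbox.clear()

print(summer.outbox)  # prefix sums of the events that survived the lossy filter
```

Scheduler-independence in the sense of the abstract would mean that the observable output distribution is the same no matter which enabled filter the scheduler picks at each step.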
Abstract. We present a procedure for algorithmically embedding problems formulated in higher-order modal logic into classical higher-order logic. The procedure was implemented as a stand-alone tool and can be used as a preprocessor for turning TPTP THF-compliant theorem provers into provers for various modal logics. The choice of the concrete modal logic is thereby specified within the problem as a meta-logical statement. This specification format, as well as the underlying semantic parameters, is discussed, and the implementation and the operation of the tool are outlined.
By combining our tool with one or more THF-compliant theorem provers, we obtain the most widely applicable modal logic theorem prover available to date; that is, no other available prover covers more variants of propositional and quantified modal logics. Despite this generality, our approach remains competitive, at least for quantified modal logics, as our experiments demonstrate.
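The embedding underlying this line of work is the standard relational (possible-worlds) translation of modal formulas into predicates on worlds; schematically (our notation, which may differ from the tool's concrete encoding):

```latex
% Schematic possible-worlds embedding: a modal formula becomes a predicate
% on a world type, with an accessibility relation R.
\lceil \Box\varphi \rceil    = \lambda w.\ \forall v.\ (R\,w\,v \rightarrow \lceil\varphi\rceil\, v)
\lceil \Diamond\varphi \rceil = \lambda w.\ \exists v.\ (R\,w\,v \wedge \lceil\varphi\rceil\, v)
\text{validity: } \models \varphi \;\iff\; \forall w.\ \lceil\varphi\rceil\, w
```

Choosing a concrete modal logic then amounts to axiomatizing properties of R (e.g., reflexivity for T, transitivity for K4) inside the higher-order problem itself.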
Abstract. Blocked clauses provide the basis for powerful reasoning techniques used in SAT, QBF, and DQBF solving. Their definition, which relies on a simple syntactic criterion, guarantees that they are both redundant and easy to find. In this paper, we lift the notion of blocked clauses to first-order logic. We introduce two types of blocked clauses, one for first-order logic with equality and the other for first-order logic without equality, and prove their redundancy. In addition, we give a polynomial algorithm for checking whether a clause is blocked. Based on our new notions of blocking, we implemented a novel first-order preprocessing tool. Our experiments showed that many first-order problems in the TPTP library contain a large number of blocked clauses whose elimination can improve the performance of modern theorem provers, especially on satisfiable problem instances.
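As a concrete propositional instance of the syntactic criterion (our own sketch; the paper's contribution is the lifting to first-order logic), a clause C is blocked on a literal l if every resolvent of C on l with the rest of the formula is a tautology:

```python
def is_tautology(clause):
    """A clause (set of DIMACS-style integer literals) is a tautology if it
    contains a complementary pair."""
    return any(-lit in clause for lit in clause)

def is_blocked(clause, cnf):
    """Propositional blocked-clause test: C is blocked in F if some literal
    l in C makes every resolvent of C with a clause containing -l a tautology."""
    for lit in clause:
        if all(is_tautology((clause - {lit}) | (d - {-lit}))
               for d in cnf if -lit in d):
            return True
    return False

# Example: C = {a, b} is blocked (on literal b): the only clause containing
# -b resolves with C to the tautology {a, -a}.
a, b = 1, 2
F = [frozenset({-a, -b}), frozenset({-a, b})]
print(is_blocked(frozenset({a, b}), F))  # True
```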
Abstract. It is known that one can extract Craig interpolants from so-called local proofs. An interpolant extracted from such a proof is a boolean combination of formulas occurring in the proof. However, standard complete proof systems, such as superposition, for theories having the interpolation property are not necessarily complete for local proofs: there are formulas having non-local proofs but no local proof.

In this paper we investigate interpolant extraction from non-local refutations (proofs of contradiction) in the superposition calculus and prove a number of general results about interpolant extraction and the complexity of extracted interpolants. In particular, we prove that the number of quantifier alternations in first-order interpolants of formulas without quantifier alternations is unbounded. This result has far-reaching consequences for using local proofs as a foundation for interpolating proof systems: any such proof system should deal with formulas of arbitrary quantifier complexity.

To search for alternatives to interpolating proof systems, we consider several variations on interpolation and local proofs. Namely, we give an algorithm for building interpolants from resolution refutations in logic without equality, and discuss additional constraints under which this approach can also be used for logic with equality. We finally propose a new direction related to interpolation via local proofs in first-order theories.
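A minimal propositional illustration of the interpolation property discussed above (our own toy example, far simpler than the first-order setting of the paper): for an unsatisfiable pair (A, B), an interpolant I is entailed by A, inconsistent with B, and built from shared symbols only.

```python
from itertools import product

def models(formula, names):
    """All satisfying assignments of a boolean function by brute force."""
    return [dict(zip(names, bits))
            for bits in product([False, True], repeat=len(names))
            if formula(**dict(zip(names, bits)))]

A = lambda p, q, r: p and (not p or q)          # A entails q
B = lambda p, q, r: (not q or r) and not r      # B entails not q
I = lambda p, q, r: q                           # shared vocabulary: {q}

names = ["p", "q", "r"]
assert all(I(**m) for m in models(A, names))    # A |= I
assert not models(lambda p, q, r: I(p, q, r) and B(p, q, r), names)  # I & B unsat
print("q is an interpolant for (A, B)")
```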
Abstract. Delete Resolution Asymmetric Tautology (DRAT) proofs have become a de facto standard to certify unsatisfiability results from SAT solvers with inprocessing. However, DRAT shows behaviors notably different from other proof systems: DRAT inferences are non-monotonic, and clauses that are not consequences of the premises can be derived. In this paper, we clarify some discrepancies in the notions of reverse unit propagation (RUP) clauses and asymmetric tautologies (AT), and furthermore develop the concept of resolution consequences. This allows us to present an intuitive explanation of RAT in terms of permissive definitions. We prove that a formula derived using RATs can be stratified into clause sets depending on which definitions they require, which gives a strong invariant along RAT proofs. We furthermore study its interaction with clause deletion, characterizing DRAT derivability as satisfiability preservation.
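For readers unfamiliar with RUP, the following sketch (ours, not the paper's formulation) shows the standard check: a clause C has the RUP property with respect to a formula F if unit propagation on F together with the negation of C derives a conflict.

```python
def unit_propagate(cnf):
    """Exhaustive unit propagation; returns None on conflict, else the set
    of implied literals."""
    assignment, changed = set(), True
    while changed:
        changed = False
        for clause in cnf:
            unassigned = [l for l in clause if -l not in assignment]
            if any(l in assignment for l in unassigned):
                continue                      # clause already satisfied
            if not unassigned:
                return None                   # conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0]); changed = True
    return assignment

def is_rup(clause, cnf):
    """C is RUP w.r.t. F if propagating F plus the negation of C conflicts."""
    return unit_propagate(cnf + [frozenset({-l}) for l in clause]) is None

F = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]
print(is_rup(frozenset({3}), F))   # True: assuming -3 propagates to a conflict
print(is_rup(frozenset({1}), F))   # False: 1 is not a RUP consequence of F
```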
Abstract. Deep learning techniques lie at the heart of several significant AI advances in recent years including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go.
Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification.
Here we suggest deep learning based guidance in the proof search of the theorem prover E. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them to select processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved.
Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 7.36% of the first-order logic translations of the Mizar Mathematical Library theorems that did not previously have ATP-generated proofs. This increases the ratio of statements in the corpus with ATP-generated proofs from 56% to 59%.
Abstract. The deep inference presentation of multiplicative exponential linear logic (MELL) benefits from a rich combinatorial analysis with many more proofs in comparison to its sequent calculus presentation. In the deep inference setting, all the sequent calculus proofs are preserved. Moreover, many other proofs become available, and some of these proofs are much shorter. However, proof search in deep inference is subject to greater nondeterminism, and this nondeterminism constitutes a bottleneck for applications. We therefore address the problem of reducing nondeterminism in MELL by refining and extending our technique that has been previously applied to multiplicative linear logic and classical logic. We show that, besides the nondeterminism in commutative contexts, the nondeterminism in exponential contexts can be reduced in a proof-theoretically clean manner. The method conserves the exponential speed-up in proof construction due to deep inference, exemplified by Statman tautologies. We validate the improvement in accessing the shorter proofs by experiments with our implementations.
Abstract. Techniques combining machine learning with translation to automated reasoning have recently become an important component of formal proof assistants. Such “hammer” techniques complement traditional proof assistant automation as implemented by tactics and decision procedures. In this paper we present a unified proof assistant automation approach which attempts to automate the selection of appropriate tactics and tactic sequences, combined with an optimized small-scale hammering approach. We implement the technique as a tactic-level automation for HOL4: TacticToe. It implements a modified A* algorithm directly in HOL4 that explores different tactic-level proof paths, guiding their selection by learning from a large number of previous tactic-level proofs. Unlike existing hammer methods, TacticToe avoids translation to FOL, working directly at the HOL level. By combining tactic prediction and premise selection, TacticToe is able to re-prove 39% of 7902 HOL4 theorems in 5 seconds, whereas the best single HOL(y)Hammer strategy solves 32% in the same amount of time.
Abstract. Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a framework and a fully automated safety verification technique for Stateflow models. Our approach is two-fold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
Abstract. In this tool paper we describe a variation of Nintendo’s Super Mario World, dubbed Super Formula World, that creates its game maps based on an input quantified Boolean formula. Thus in Super Formula World, Mario the plumber not only saves his girlfriend, Princess Peach, but also acts as a QBF solver on the side. The game is implemented in Java and is platform independent. Our implementation rests on abstract frameworks by Aloupis et al. that allow the analysis of the computational complexity of a variety of famous video games. In particular, it is a straightforward consequence of these results that QSAT reduces to Super Mario World. By specifying this reduction in a precise way we obtain the core engine of Super Formula World. Similarly, Super Formula World implements a reduction from SAT to Super Mario Bros., yielding significantly simpler game worlds.
Abstract. Context-free language reachability (CFL-R) is a fundamental solving vehicle for computing essential compiler optimisations and static program analyses. Unfortunately, solvers for CFL-R encounter both inherently expensive problem formulations and frequent alterations to the underlying formalism. As such, tool designers are forced to create custom-tailored implementations with long development times and limited reusability. A better framework is crucial to facilitate research and development in CFL-R.
In this work we present Cauliflower, a CFL-R solver generator that creates parallel executable C++ code from an input CFL-R rule-based specification. With Cauliflower, developers create working tools rapidly, avoiding lengthy and error-prone manual implementations. Cauliflower’s domain-specific language provides semantic extensions including reversal, branching, disconnection and templating. In practical experiments, Cauliflower achieves an average speedup of 1.8x compared with the best general-purpose tools, and matches the performance of application-specific tools on many benchmarks.
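To make CFL-R concrete (an illustrative sketch of the underlying problem, not of Cauliflower's generated C++), the following naive Python fixpoint computes the closure of an edge-labeled graph under a grammar in normalized form:

```python
from collections import deque

def cfl_reach(edges, unary, binary):
    """Naive CFL-reachability: edges is a set of (src, label, dst) facts;
    unary maps a label A to nonterminals N with production N -> A;
    binary maps a label pair (A, B) to nonterminals N with N -> A B.
    Returns the closure as a set of labeled edges."""
    facts, work = set(edges), deque(edges)
    def add(f):
        if f not in facts:
            facts.add(f); work.append(f)
    while work:
        u, a, v = work.popleft()
        for n in unary.get(a, []):
            add((u, n, v))
        for (x, b, y) in list(facts):
            if y == u:                         # x --b--> u --a--> v
                for n in binary.get((b, a), []):
                    add((x, n, v))
            if x == v:                         # u --a--> v --b--> y
                for n in binary.get((a, b), []):
                    add((u, n, y))
    return facts

# Value-flow example: Flow -> assign | Flow assign  (i.e., assign+)
edges = {("a", "assign", "b"), ("b", "assign", "c")}
unary = {"assign": ["Flow"]}
binary = {("Flow", "assign"): ["Flow"]}
print(("a", "Flow", "c") in cfl_reach(edges, unary, binary))  # True
```

Real solvers replace this quadratic rediscovery with worklist-based semi-naive evaluation; the specification-to-solver step is exactly what a generator like Cauliflower automates.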
Abstract. We present new results on a constraint satisfaction problem arising from the inference of resource types in automatic amortized analysis for object-oriented programs by Rodriguez and Hofmann. These constraints are essentially linear inequalities between infinite lists of nonnegative rational numbers which are added and compared pointwise. We study the question of satisfiability of a system of such constraints in two variants with significantly different complexity. We show that in its general form (which is the original formulation presented by Hofmann and Rodriguez at LPAR 2012) this satisfiability problem is hard for the famous Skolem-Mahler-Lech problem, whose decidability status is still open but which is at least NP-hard. We then identify a subcase of the problem that still covers all instances arising from type inference in the aforementioned amortized analysis, and show decidability of satisfiability in polynomial time by a reduction to linear programming. We further give a classification of the growth rates of satisfiable systems in this format and are now able to draw conclusions about resource bounds for programs that involve lists, and also arbitrary data structures if we make the additional restriction that their resource annotations are generated by an infinite list (rather than an infinite tree as in the most general case). Decidability of the tree case, which was also part of the original formulation by Hofmann and Rodriguez, still remains an open problem.
Abstract. In this paper, we introduce RACCOON, a reasoner based on the connection calculus ALC θ-CM for the description logic ALC. We describe the calculus and present details of RACCOON’s implementation. Currently, RACCOON carries out only consistency checks, and can be run online; its code is also publicly available. In addition, results of a comparison between RACCOON and other reasoners on the ORE 2014 and 2015 competition problems with ALC expressivity are shown and discussed.
Abstract. Inclusion dependencies are one of the most important database constraints. In isolation their finite and unrestricted implication problems coincide, are finitely axiomatizable, PSPACE-complete, and fixed-parameter tractable in their arity. In contrast, finite and unrestricted implication problems for the combined class of functional and inclusion dependencies deviate from one another and are each undecidable. The same holds true for the class of embedded multivalued dependencies. An important tractable fragment of embedded multivalued dependencies are independence atoms. These stipulate independence between two attribute sets in the sense that for every two tuples there is a third tuple that agrees with the first tuple on the first attribute set and with the second tuple on the second attribute set. For independence atoms, their finite and unrestricted implication problems coincide, are finitely axiomatizable, and decidable in cubic time. In this article, we study the implication problems of the combined class of independence atoms and inclusion dependencies. We show that their finite and unrestricted implication problems coincide, are finitely axiomatizable, PSPACE-complete, and fixed-parameter tractable in their arity. Hence, significant expressivity is gained without sacrificing any of the desirable properties that inclusion dependencies have in isolation. Finally, we establish an efficient condition that is sufficient for independence atoms and inclusion dependencies not to interact. The condition ensures that we can apply known algorithms for deciding implication of the individual classes of independence atoms and inclusion dependencies, respectively, to decide implication for an input that combines both individual classes.
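The semantics of independence atoms quoted above translates directly into a check over a relation; a small sketch with hypothetical attribute names:

```python
def satisfies_independence(rows, X, Y):
    """Checks the independence atom X ⊥ Y on a table (list of dicts):
    for every two tuples t1, t2 there must be a tuple t3 agreeing with
    t1 on X and with t2 on Y."""
    proj = lambda t, attrs: tuple(t[a] for a in attrs)
    combos = {(proj(t, X), proj(t, Y)) for t in rows}
    return all((proj(t1, X), proj(t2, Y)) in combos
               for t1 in rows for t2 in rows)

rows = [{"course": "logic", "room": "A"},
        {"course": "logic", "room": "B"},
        {"course": "db",    "room": "A"},
        {"course": "db",    "room": "B"}]
print(satisfies_independence(rows, ["course"], ["room"]))  # True

rows.pop()  # remove (db, B): now t1 = (db, *), t2 = (*, B) has no witness
print(satisfies_independence(rows, ["course"], ["room"]))  # False
```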
Abstract. Solving complex problems can involve non-trivial combinations of distinct knowledge bases and problem solvers. The Algebra of Modular Systems is a knowledge representation framework that provides a method for formally specifying such systems in purely semantic terms. Many practical systems based on expressive formalisms solve the model expansion task. In this paper, we construct a solver for the model expansion task for a complex modular system from an expression in the algebra and black-box propagators or solvers for the primitive modules. To this end, we define a general notion of propagators equipped with an explanation mechanism, an extension of the algebra to propagators, and a lazy conflict-driven learning algorithm. The result is a framework for seamlessly combining solving technology from different domains to produce a solver for a combined system.
Abstract. There exist powerful techniques to infer upper bounds on the innermost runtime complexity of term rewrite systems (TRSs), i.e., on the lengths of rewrite sequences that follow an innermost evaluation strategy. However, the techniques to analyze the (full) runtime complexity of TRSs are substantially weaker. In this paper, we present a sufficient criterion to ensure that the runtime complexity of a TRS coincides with its innermost runtime complexity. This criterion can easily be checked automatically and it allows us to use all techniques and tools for innermost runtime complexity in order to analyze (full) runtime complexity. By extensive experiments with an implementation of our results in the tool AProVE, we show that this improves the state of the art of automated complexity analysis significantly.
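A small example (ours, not from the paper) of why the two notions can differ: under full rewriting, a duplicating rule may copy unevaluated work, so the full runtime complexity can exceed the innermost one. The sketch below counts rewrite steps for the TRS d(x) -> c(x, x); e(0) -> 0; e(s(x)) -> e(x) under both strategies:

```python
def root_step(t):
    """One rewrite step at the root, or None. Terms are nested tuples."""
    if t[0] == "d":
        return ("c", t[1], t[1])
    if t[0] == "e" and t[1] == ("0",):
        return ("0",)
    if t[0] == "e" and t[1][0] == "s":
        return ("e", t[1][1])
    return None

def step(t, innermost):
    """Leftmost step: innermost reduces subterms first; full prefers the root."""
    if not innermost:
        r = root_step(t)
        if r is not None:
            return r
    for i, sub in enumerate(t[1:], start=1):
        r = step(sub, innermost)
        if r is not None:
            return t[:i] + (r,) + t[i + 1:]
    return root_step(t) if innermost else None

def count_steps(t, innermost):
    n = 0
    while (r := step(t, innermost)) is not None:
        t, n = r, n + 1
    return n

def term(n):  # d(e(s^n(0)))
    t = ("0",)
    for _ in range(n):
        t = ("s", t)
    return ("d", ("e", t))

for n in (1, 5, 10):
    print(n, count_steps(term(n), True), count_steps(term(n), False))
    # innermost: n + 2 steps; full (outermost-first): 2n + 3 steps,
    # because d copies the unevaluated subterm e(s^n(0))
```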
Abstract. We design an interpretation-based theory of higher-order functions that is well-suited for the complexity analysis of a standard higher- order functional language a` la ml. We manage to express the interpretation of a given program in terms of a least fixpoint and we show that when restricted to functions bounded by higher-order polynomials, they characterize exactly classes of tractable functions known as Basic Feasible Functions at any order.
Abstract. In this paper, we propose a new approach for defining tractable classes of the propositional satisfiability problem (SAT, for short). The basic idea consists in transforming SAT instances into instances of the problem of finding a maximum independent set. In this context, we only consider propositional formulæ in conjunctive normal form where each clause is either positive or binary negative. Tractable classes are obtained from existing polynomial-time algorithms for the maximum independent set problem on different graph classes, such as claw-free graphs and perfect graphs. We show, in particular, that the pigeonhole principle belongs to one of the defined tractable classes. Furthermore, we propose a characterization of the minimal models in the largest considered fragment based on the maximum independent set problem.
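The flavor of the transformation can be seen in the classic textbook reduction from CNF satisfiability to maximum independent set (shown here purely as an illustration; the paper works with the restricted positive/binary-negative fragment):

```python
import itertools

def sat_to_mis(clauses):
    """Classic reduction: one vertex per literal occurrence; edges inside
    each clause and between complementary occurrences. A CNF with m clauses
    is satisfiable iff the graph has an independent set of size m."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for u, v in itertools.combinations(vertices, 2):
        if u[0] == v[0] or u[1] == -v[1]:   # same clause, or complementary
            edges.add((u, v))
    return vertices, edges

def max_independent_set(vertices, edges):
    """Brute force, adequate for the tiny example only."""
    for r in range(len(vertices), 0, -1):
        for cand in itertools.combinations(vertices, r):
            s = set(cand)
            if all(not (u in s and v in s) for u, v in edges):
                return cand
    return ()

# (x1 or x2) and (-x1 or -x2): satisfiable, so the MIS has size 2.
V, E = sat_to_mis([[1, 2], [-1, -2]])
print(len(max_independent_set(V, E)))  # 2 == number of clauses
```

The tractable classes of the paper arise when the graphs produced from the restricted fragment fall into families (claw-free, perfect, ...) where maximum independent set is solvable in polynomial time.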
Abstract. We tackle the problem of simultaneous transformations of networks represented as graphs. Roughly speaking, one may distinguish two kinds of simultaneous or parallel rewrite relations over complex structures such as graphs: (i) those which transform disjoint subgraphs in parallel and hence can be simulated by successive mere sequential and local transformations and (ii) those which transform overlapping subgraphs simultaneously. In the latter situations, parallel transformations cannot be simulated in general by means of successive local rewrite steps. We investigate this last problem in the framework of overlapping graph transformation systems. As parallel transformation of a graph does not produce a graph in general, we propose first some sufficient conditions that ensure the closure of graphs by parallel rewrite relations. Then we mainly introduce and discuss two parallel rewrite relations over graphs. One relation is functional and thus deterministic, the other one is not functional for which we propose sufficient conditions which ensure its confluence.
Abstract. Partial Model-Checking (PMC) is an efficient tool to reduce the combinatorial explosion of a state-space, arising in the verification of loosely-coupled software systems. At the same time, it is useful to consider quantitative temporal modalities. This allows for checking whether satisfying such a desired modality is too costly, by comparing the final score, consisting of how much the system spends to satisfy the policy, to a given threshold. We stir these two ingredients together in order to provide a Quantitative PMC function (QPMC), based on the algebraic structure of semirings. We design a method to extract part of the weight during QPMC, with the purpose of avoiding the evaluation of a modality as soon as the threshold is crossed. Moreover, we extend classical heuristics to the quantitative setting, and we investigate the complexity of QPMC.
Keywords: Partial Model Checking, Semirings, Optimisation, Quantitative Modal Logic, Quantitative Process Algebra, Quantitative Evaluation of Systems.
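The role of the semiring is to make score accumulation generic: choice between behaviors uses the semiring addition and sequential composition uses its multiplication. A minimal sketch (all names ours) with the tropical (min, +) semiring over a weighted transition system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Semiring:
    plus: callable   # combines alternatives (choice)
    times: callable  # combines sequential costs
    zero: object     # identity of plus
    one: object      # identity of times

# Tropical semiring: "choice" picks the cheaper option, "sequence" adds costs.
tropical = Semiring(min, lambda a, b: a + b, float("inf"), 0)

def best_to_goal(sr, transitions, state, goal, depth):
    """Quantitative evaluation of a bounded reachability modality: the best
    accumulated weight of reaching `goal` within `depth` steps."""
    if state == goal:
        return sr.one
    if depth == 0:
        return sr.zero
    score = sr.zero
    for (s, w, t) in transitions:
        if s == state:
            rest = best_to_goal(sr, transitions, t, goal, depth - 1)
            score = sr.plus(score, sr.times(w, rest))
    return score

# s0 -2-> s1 -3-> s2 and a direct s0 -10-> s2: cheapest cost to s2 is 5.
transitions = [("s0", 2, "s1"), ("s1", 3, "s2"), ("s0", 10, "s2")]
cost = best_to_goal(tropical, transitions, "s0", "s2", 4)
print(cost, cost <= 6)   # 5 True: within threshold 6, the policy is affordable
```

Swapping in another semiring (e.g., probabilities with (+, ×)) changes the quantitative interpretation without changing the evaluation scheme, which is the point of the algebraic formulation.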
Abstract. Simultaneous occurrences of multiple recurrence relations in a system of non-linear constrained Horn clauses are crucial for proving its satisfiability. A solution of such a system is often inexpressible in the constraint language. We propose to synchronize recurrent computations, thus increasing the chances for a solution to be found. We introduce a notion of CHC product that allows us to formulate a lightweight iterative algorithm for merging recurrent computations into groups, and we prove its soundness. The evaluation over a set of systems handling lists and linear integer arithmetic confirms that the transformed systems are drastically simpler to solve than the original ones.
Abstract. We present a tool that transforms nondeterministic ω-automata to semi-deterministic ω-automata. The tool Seminator accepts transition-based generalized Büchi automata (TGBA) as input and produces automata with two kinds of semi-determinism. The implemented procedure performs degeneralization and semi-determinization simultaneously and employs several other optimizations. We experimentally evaluate Seminator in the context of LTL to semi-deterministic automata translation.
Abstract. Heap and data structures represent one of the biggest challenges when applying model checking to the analysis of software programs: in order to verify (unbounded) safety of a program, it is typically necessary to formulate quantified inductive invariants that state properties about an unbounded number of heap locations. Methods like Craig interpolation, which are commonly used to infer invariants in model checking, are often ineffective when a heap is involved. To address this challenge, we introduce a set of new proof and program transformation rules for verifying object-oriented programs with the help of space invariants, which (implicitly) give rise to quantified invariants. Leveraging advances in Horn solving, we show how space invariants can be derived fully automatically, and how the framework can be used to effectively verify safety of Java programs.
Abstract. Proof by coupling is a classical proof technique for establishing probabilistic properties of two probabilistic processes, like stochastic dominance and rapid mixing of Markov chains. More recently, couplings have been investigated as a useful abstraction for formal reasoning about relational properties of probabilistic programs, in particular for modeling reduction-based cryptographic proofs and for verifying differential privacy. In this paper, we demonstrate that probabilistic couplings can be used for verifying non-relational probabilistic properties. Specifically, we show that the program logic pRHL—whose proofs are formal versions of proofs by coupling—can be used for formalizing uniformity and probabilistic independence. We formally verify our main examples using the EasyCrypt proof assistant.
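The shape of a coupling argument for uniformity can be seen in a toy setting (our sketch; the paper works in pRHL and EasyCrypt): to show z = x ⊕ y is uniform for uniform x and any fixed y, couple the runs producing outcomes a and b via a bijection on the sample space.

```python
from itertools import product

BITS = 3
SPACE = range(1 << BITS)

def program(x, y):
    return x ^ y

y = 0b101                       # arbitrary fixed adversarial input
for a, b in product(SPACE, repeat=2):
    # The coupling maps each sample x with program(x, y) == a to a distinct
    # sample x' with program(x', y) == b, hence Pr[z = a] == Pr[z = b].
    for x in SPACE:
        x2 = x ^ a ^ b          # the coupling bijection
        assert (program(x, y) == a) == (program(x2, y) == b)
print("uniformity of z established by coupling, for every pair (a, b)")
```

Since the bijection matches a-producing runs with b-producing runs one-to-one, all outcomes have equal probability, which is exactly the non-relational property derived from a relational (coupling) proof.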
Abstract. In this paper we show that a very basic fragment of FO-LTL, the monadic fully boxed fragment (all connectives and quantifiers are guarded by □), is not recursively enumerable with respect to validity and 1-satisfiability if three predicates are present. This result is obtained by reduction of the fully boxed fragment of FO-LTL to the Gödel logic G↓, the infinitely-valued Gödel logic with truth values in [0,1] such that all but 0 are isolated. The result on 1-satisfiability is in no way symmetric to the result on validity, as it would be in classical logic: this is demonstrated by the analysis of G↑, the related infinitely-valued Gödel logic with truth values in [0,1] such that all but 1 are isolated. Validity of the monadic fragment with at least two predicates is not recursively enumerable, while 1-satisfiability of the monadic fragment is decidable.
Abstract. We introduce a new proof-theoretic framework which enhances the expressive power of bunched sequents by extending them with a hypersequent structure. A general cut-elimination theorem that applies to bunched hypersequent calculi satisfying general rule conditions is then proved. We adapt the methods of transforming axioms into rules to provide cut-free bunched hypersequent calculi for a large class of logics extending the distributive commutative Full Lambek calculus DFLe and Bunched Implication logic BI. The methodology is then used to formulate new logics equipped with a cut-free calculus in the vicinity of Boolean BI.
Abstract. It is well known that context dependent logical rules can be problematic both to implement and reason about. This is one of the factors driving the quest for better behaved, i.e., local, logical systems. In this work we investigate such a local system for linear logic (LL) based on linear nested sequents (LNS). Relying on that system, we propose a general framework for modularly describing systems combining, coherently, substructural behaviors inherited from LL with simply dependent multimodalities. This class of systems includes linear, elementary, affine, bounded and subexponential linear logics and extensions of multiplicative additive linear logic (MALL) with normal modalities, as well as general combinations of them. The resulting LNS systems can be adequately encoded into (plain) linear logic, supporting the idea that LL is, in fact, a “universal framework” for the specification of logical systems. From the theoretical point of view, we give a uniform presentation of LL featuring different axioms for its modal operators. From the practical point of view, our results lead to a generic way of constructing theorem provers for different logics, all of them based on the same grounds. This opens the possibility of using the same logical framework for reasoning about all such logical systems.
Abstract. Linear Temporal Logic (LTL) is a de facto standard formalism for expressing properties of systems and temporal constraints in formal verification, artificial intelligence, and other areas of computer science. The problem of LTL satisfiability is thus of central importance for checking the consistency of these temporal specifications. Although adding past operators to LTL does not increase its expressive power, interest in explicitly handling the past in temporal logics has recently increased because of the clarity and succinctness that those operators provide. In this work, a recently proposed one-pass tree-shaped tableau system for LTL is extended to support past operators. The modularity of the required changes provides evidence for the claimed ease of extensibility of this tableau system.
Abstract. Incorporating extensional equality into a dependent intensional type system such as the Calculus of Constructions provides stronger type-checking capabilities and makes proof development closer to intuition. Since strong forms of extensionality lead to undecidable type-checking, a good trade-off is to extend intensional equality with a decidable first-order theory T, as done in CoqMT, which uses matching modulo T for the weak and strong elimination rules; we call these rules T-elimination. So far, type-checking in CoqMT is known to be decidable in presence of a cumulative hierarchy of universes and weak T-elimination. Further, it has been shown by Wang, with a formal proof in Coq, that consistency is preserved in presence of weak and strong elimination rules, which actually implies consistency in presence of weak and strong T-elimination rules, since T is already present in the conversion rule of the calculus.

We justify here CoqMT’s type-checking algorithm by showing strong normalization as well as the Church-Rosser property of β-reductions augmented with CoqMT’s weak and strong T-elimination rules. This successfully concludes the meta-theoretical study of CoqMT.
Abstract. The main security mechanism for enforcing memory isolation in operating systems is provided by page tables. The hardware-implemented Translation Lookaside Buffer (TLB) caches these, and therefore the TLB and its consistency with memory are security critical for OS kernels, including formally verified kernels such as seL4. If performance is paramount, this consistency can be subtle to achieve; yet, all major formally verified kernels currently leave the TLB as an assumption.
In this paper, we present a formal model of the Memory Management Unit (MMU) for the ARM architecture which includes the TLB, its maintenance operations, and its derived properties. We integrate this specification into the Cambridge ARM model. We derive sufficient conditions for TLB consistency, and we abstract away the functional details of the MMU for simpler reasoning about executions in the presence of cached address translation, including complete and partial walks.
Abstract. In 2016, Heule, Kullmann and Marek solved the Boolean Pythagorean Triples problem: is there a binary coloring of the natural numbers such that every Pythagorean triple contains an element of each color? By encoding a finite portion of this problem as a propositional formula and showing its unsatisfiability, they established that such a coloring does not exist. Subsequently, this answer was verified by a correct-by-construction checker extracted from a Coq formalization, which was able to reproduce the original proof. However, none of these works formally addresses the relationship between the propositional formula that was constructed and the mathematical problem being considered. In this work, we formalize the Boolean Pythagorean Triples problem in Coq. We recursively define a family of propositional formulas, parameterized on a natural number n, and show that unsatisfiability of this formula for any particular n implies that there does not exist a solution to the problem. We then formalize the mathematical argument behind the simplification step in the original proof of unsatisfiability and the logical argument underlying cube-and-conquer, obtaining a verified proof of Heule et al.’s solution.
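The propositional family at the heart of the original result is easy to sketch (our illustration, not the Coq development): variable i is true iff number i receives the first color, and each Pythagorean triple contributes two clauses forbidding monochromatic colorings.

```python
def triples(n):
    """All Pythagorean triples a <= b <= c <= n."""
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1)
            if a * a + b * b == c * c]

def encode(n):
    """CNF over variables 1..n: no triple is monochromatic."""
    cnf = []
    for a, b, c in triples(n):
        cnf.append([a, b, c])        # not all three in the second color
        cnf.append([-a, -b, -c])     # not all three in the first color
    return cnf

print(triples(13))   # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
print(encode(5))     # [[3, 4, 5], [-3, -4, -5]]
# Heule et al. showed the instance for n = 7825 is unsatisfiable: no
# two-coloring of {1, ..., 7825} avoids a monochromatic Pythagorean triple.
```

The Coq development described in the abstract closes the remaining gap: it proves that unsatisfiability of such a finite encoding really does refute the existence of a coloring of all natural numbers.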