VSL 2014: VIENNA SUMMER OF LOGIC 2014
PROGRAM FOR THURSDAY, JULY 17TH, 2014

08:45-10:15 Session 62A: FLoC Panel (joint session of 10 meetings)
Location: FH, Hörsaal 1
08:45
FLoC Panel: Publication Models in Computing Research: Is a Change Needed? Are We Ready for a Change?

ABSTRACT. Over the last few years, our community has started a collective conversation on several topics related to our publication culture: our emphasis on conference publishing; our large number of specialty conferences; concerns that we have created a culture of hypercritical reviewing, which stifles rather than encourages innovative research; concerns that tenure and promotion practices encourage incremental short-term research; the tension between the ideal of open access and the reality of reader-pay publishing; and the role of social media in scholarly publishing. While computing research has been phenomenally successful, there is a feeling that our publication models are quite often obstacles. Yet, there is no agreement on whether our publication models need to be radically changed or merely fine-tuned, and there is no agreement on how such change may occur. This panel is aimed at furthering the conversation on this topic, with the hope of moving us closer to an agreement.

08:45-10:15 Session 62B: Non-Termination (WST)
Location: FH, Seminarraum 101B
08:45
Non-termination of Dalvik bytecode via compilation to CLP
SPEAKER: Fred Mesnard

ABSTRACT. We present a set of rules for compiling a Dalvik bytecode program into a logic program with array constraints. Non-termination of the resulting program entails that of the original one, hence the techniques we have presented before for proving non-termination of constraint logic programs can be used for proving non-termination of Dalvik programs.

09:15
Geometric Series as Nontermination Arguments for Linear Lasso Programs

ABSTRACT. We present a new kind of nontermination argument for linear lasso programs, called a geometric nontermination argument. A geometric nontermination argument is a finite representation of an infinite execution of the form $(\vec{x} + \sum_{i=0}^t \lambda^i \vec{y})_{t \geq 0}$. The existence of such a nontermination argument can be stated as a set of nonlinear algebraic constraints. We show that every linear loop program that has a bounded infinite execution also has a geometric nontermination argument. Furthermore, we discuss nonterminating programs that do not have a geometric nontermination argument.
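
A worked one-variable illustration (ours, not an example from the paper): the loop $\mathrm{while}\ (x \geq 0)\ \{x := x/2\}$ admits the bounded infinite execution $1, \frac{1}{2}, \frac{1}{4}, \ldots$, which is finitely represented by the geometric nontermination argument $x = 2$, $y = -1$, $\lambda = \frac{1}{2}$: the state at step $t$ is $2 + \sum_{i=0}^{t} (\frac{1}{2})^i \cdot (-1) = 2^{-t} \geq 0$, so the loop guard always holds.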

09:45
Non-termination using Regular Languages
SPEAKER: unknown

ABSTRACT. We describe a method for proving non-termination of term rewriting systems that do not admit looping reductions, that is, reductions of the form $t \to^* C[t\sigma]$. As certificates of non-termination, we employ regular automata.

08:45-10:15 Session 62C: Invited Talk I (CICLOPS-WLPE)
Location: FH, Hörsaal 7
08:45
Proofs for Optimality of Sorting Networks by Logic Programming

ABSTRACT. We present a computer-assisted non-existence proof of nine-input sorting networks consisting of 24 comparators, hence showing that the 25-comparator sorting network found by Floyd in 1964 is optimal. As a corollary, we obtain that the 29-comparator network found by Waksman in 1969 is optimal when sorting ten inputs. This closes the two smallest open instances of the optimal size sorting network problem, which have been open since the results of Floyd and Knuth from 1966 proving optimality for sorting networks of up to eight inputs.

The entire implementation is written in SWI-Prolog and was run on a cluster of 12 nodes with 12 cores each, able to run a total of 288 concurrent threads, making extensive use of SWI-Prolog’s built-in predicate concurrent/3. The search space of 2.2×10^37 comparator networks was exhausted after just under 10 days of computation. This shows the ability of logic programming to provide a scalable parallel implementation while at the same time instilling a high level of trust in the correctness of the proof.
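
The parallel pattern described here (fanning independent slices of the search space out over a fixed pool of threads, with every slice required to report failure) can be sketched as follows; this is a rough Python analogue for illustration only, not the authors' Prolog code:

    from concurrent.futures import ThreadPoolExecutor

    def refute(subspace):
        # Stand-in for the real search: exhaustively explore one slice
        # of the space of 24-comparator networks and report whether a
        # 9-input sorter was found there.
        return False  # toy: nothing found in this slice

    subspaces = range(288)  # say, one work item per available thread
    with ThreadPoolExecutor(max_workers=288) as pool:
        found = any(pool.map(refute, subspaces))

    assert not found  # every slice refuted: no 24-comparator sorter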

This talk will be given by Luís Cruz-Filipe and Peter Schneider-Kamp.

09:45
Discussion
SPEAKER: Everyone

ABSTRACT. Discussion for the invited talk on "Proofs for Optimality of Sorting Networks by Logic Programming".

08:45-10:15 Session 62D: Contributed Talks (LATD)
Location: MB, Festsaal
08:45
Undecidability of consequence relation in Full Non-associative Lambek Calculus

ABSTRACT. We prove that the consequence relation in the Full Non-associative Lambek Calculus is undecidable.

09:15
Canonical formulas for k-potent residuated lattices
SPEAKER: Luca Spada

ABSTRACT. We present a method, which generalises canonical formulas for intuitionistic logic, to axiomatise all varieties of k-potent residuated lattices.

09:45
On satisfiability of terms in FLew-algebras

ABSTRACT. A paper on satisfiability of terms in FLew-algebras.

08:45-10:15 Session 62E: Contributed Talks (LATD)
Location: MB, Hörsaal 15
08:45
Five-valued LTL for Runtime Verification
SPEAKER: Ming Chai

ABSTRACT. We investigate a monitoring approach based on linear temporal logic (LTL) specifications. A five-valued logic is introduced for dealing with two kinds of uncertainty in distributed systems. On the one hand, the order of causally unrelated executions is not determined when a global clock is not available. On the other hand, in a finite amount of time, the behaviour can be observed only up to a certain moment, and the future behaviour is unknown. We show the feasibility of our approach with a case study in the railway domain.

09:15
A method for generalizing finite automata arising from Stone-like dualities

ABSTRACT. We start our investigation by first providing a dictionary for translating deterministic finite automata (DFA henceforth) in the language of classical propositional logic. The main idea underlying our investigation is to regard each DFA as a finite set-theoretical object, applying the finite slice of Stone duality in order to move from DFA to algebra, and finally using the algebraizability of classical logic to introduce the formal objects which arise by this "translation" and that we call classical fortresses. We show that classical fortresses accept exactly the same languages as finite automata, that is, regular languages.

Classical fortresses, as objects specified in classical propositional logic on a finite number of variables, allow an easy generalization to any non-classical logical setting: given a propositional logical calculus L, one can easily adapt the definition of classical fortress to the frame of L, introducing a notion of L-fortress and hence studying the language accepted by such an object. We investigate the following question: what is the reflection of L-fortresses in the theory of automata? We make our method explicit for Gödel logic.

09:45
An application of distance-based approximate reasoning for diagnostic questionnaires in healthcare
SPEAKER: unknown

ABSTRACT. In this talk we propose a formal degree-based framework for computing scores in medical assessment questionnaires.

08:45-10:15 Session 62F: Technical session (EC2)
Location: FH, Seminarraum 101C
08:45
Atomicity Refinement for Verified Compilation

ABSTRACT. We present a methodology for the verified compilation of high-level managed languages like Java or C# whose intermediate representations provide support for shared-memory synchronization and automatic (concurrent) memory management. Ensuring the correctness of compilers for these languages is challenging because high-level actions are translated into sequences of non-atomic actions with compiler-injected snippets of potentially racy code; the behavior of this code depends not only on the actions of other threads, but also on out-of-order reorderings performed by the processor. A naïve correctness proof of the compiler would require reasoning over all possible thread interleavings that can arise during execution, an impractical and non-scalable exercise.

To address this challenge, we define a refinement-based proof methodology that precisely relates concurrent code expressed at different abstraction levels, cognizant throughout of the relaxed memory semantics of the underlying processor. Our technique allows the compiler writer to reason compositionally about the atomicity of low-level concurrent code used to implement managed services. We validate the effectiveness of our approach by demonstrating the verified compilation of the non-trivial components comprising a realistic concurrent garbage collector, including racy write barriers and memory allocators.

09:45
Concolic Testing of Concurrent Programs
SPEAKER: unknown

ABSTRACT. We survey our research on concolic testing of concurrent programs. Based on concrete and symbolic executions of a concurrent program, our testing techniques derive inputs and schedules such that the execution space of the program under investigation is systematically explored. We will focus on con2colic testing which introduces interference scenarios. Interference scenarios capture the flow of data among different threads and enable a unified representation of path and interference constraints.

08:45-10:15 Session 62G: Invited Talk & Welcome (PLP)
Location: FH, Seminarraum 134
08:45
Welcome
08:55
First-order representations for integer programming
SPEAKER: James Cussens

ABSTRACT. Modelling languages such as ZIMPL are tremendously useful ways of
compactly writing down integer programs with large numbers of
variables and constraints. A ZIMPL statement such as "forall a in A
forall b in B: x[a] <= x[b]" expresses a (linear) constraint for
each pair (a,b). But prior to sending the problem to a solver such an
expression will be "propositionalised" into a (possibly large) set of
ground instances of the constraint. In this talk I aim to explore
whether this "grounding out" can be avoided, and whether there is any
advantage in so doing. I will be discussing cutting plane algorithms
(as a representation of sets of constraints) and column generation
algorithms (as a representation of sets of variables). My talk will
not assume any prior knowledge of integer programming.
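
As a toy illustration (names ours) of the "grounding out" step that the talk seeks to avoid, the single first-order constraint above expands into one ground constraint per pair in A × B:

    from itertools import product

    A = ["a1", "a2", "a3"]
    B = ["b1", "b2"]

    # "forall a in A forall b in B: x[a] <= x[b]", propositionalised:
    ground = [f"x[{a}] <= x[{b}]" for a, b in product(A, B)]

    print(len(ground))  # |A| * |B| = 6 ground instances
    print(ground[0])    # x[a1] <= x[b1]

For realistic index sets the number of ground instances grows multiplicatively, which is exactly the blow-up that representations such as cutting plane and column generation algorithms could help sidestep.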

09:50
PageRank, ProPPR and Stochastic Logic Programs
SPEAKER: unknown

ABSTRACT. ProPPR is a recently introduced probabilistic logic programming language inspired by stochastic logic programs that uses personalized PageRank for efficient inference. We clarify the link between these two frameworks by mapping the personalized PageRank distribution of a ProPPR program to an incomplete stochastic logic program and showing that the resulting programs induce the same probability distribution over queries.

08:45-10:15 Session 62H (VEMDP)
Location: FH, Seminarraum 136
08:45
Opening remarks
09:00
Computational design of DNA strand displacement systems
09:50
A domain-level DNA strand displacement reaction enumerator allowing arbitrary non-pseudoknotted secondary structures
SPEAKER: Casey Grun

ABSTRACT. DNA strand displacement systems have proven themselves to be a fertile substrate for the design of programmable molecular machinery and circuitry. Domain-level reaction enumerators provide the foundations for molecular programming languages by formalizing DNA strand displacement mechanisms and modeling interactions at the “domain” level – one level of abstraction above models that explicitly describe DNA strand sequences. Unfortunately, the most-developed models currently only treat pseudo-linear DNA structures, while many systems being experimentally and theoretically pursued exploit a much broader range of secondary structure configurations. Here, we describe a new domain-level reaction enumerator that can handle arbitrary non-pseudoknotted secondary structures and reaction mechanisms including association and dissociation, 3-way and 4-way branch migration, and direct as well as remote toehold activation. To avoid the polymerization that is inherent when considering general structures, we employ a time-scale separation technique that holds in the limit of low concentrations. This also allows us to “condense” the detailed reactions by eliminating fast transients, with provable guarantees of correctness for the set of reactions and their kinetics. We hope that the new reaction enumerator will be used in new molecular programming languages, compilers, and tools for analysis and verification that treat a wider variety of mechanisms of interest to experimental and theoretical work.

08:45-10:15 Session 62I: Contributed Talks: Uncertainty 1 (NMR)
Location: EI, EI 9
08:45
Nonmonotonic Reasoning as a Temporal Activity

ABSTRACT. A 'dynamic reasoning system' (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct time step. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application-specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, i.e., that they serve to (i) derive all salient information and (ii) preserve the consistency of the belief set. This paper illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and an example. The example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.

09:15
Probabilistic Inductive Logic Programming based on Answer Set Programming
SPEAKER: unknown

ABSTRACT. We propose a new formal language for the expressive representation of probabilistic knowledge based on Answer Set Programming (ASP). It allows for the annotation of first-order formulas as well as ASP rules and facts with probabilities and for learning of such weights from data (parameter estimation). Weighted formulas are given a semantics in terms of soft and hard constraints which determine a probability distribution over answer sets. In contrast to related approaches, we approach inference by optionally utilizing so-called streamlining XOR constraints, in order to reduce the number of computed answer sets. Our approach is prototypically implemented. Examples illustrate the introduced concepts and point at issues and topics for future research.

09:45
A Plausibility Semantics for Abstract Argumentation Frameworks
SPEAKER: Emil Weydert

ABSTRACT. We propose and investigate a simple plausibility-based extension semantics for abstract argumentation frameworks based on their generic instantiation by default knowledge bases and the ranking construction paradigm for default reasoning.

08:45-10:15 Session 62J: Summer School on Computational Logic: Constraint Logic Programming I (ICLP)
Location: Seminarraum 1/3 OPG
08:45
Summer School Opening
SPEAKER: Gopal Gupta
08:50
Constraint Logic Programming I
SPEAKER: Roman Bartak
08:50-10:15 Session 63: Technical Session (LFMTP)
Location: FH, Zeichensaal 1
08:50
Welcome Remarks

ABSTRACT. Welcome to the 9th International Workshop on Logical Frameworks and Meta-languages: Theory and Practice, LFMTP 2014. The LFMTP workshop series resulted from the amalgamation of the Logical Frameworks and Meta-languages (LFM) and the MEchanized Reasoning about Languages with variable bINding (MERLIN) workshop series. LFMTP 2014 is affiliated with two FLoC conferences: the joint meeting of the 23rd EACSL Annual Conference on Computer Science Logic and the 9th ACM/IEEE Symposium on Logic in Computer Science (CSL-LICS), and the 7th International Joint Conference on Automated Reasoning (IJCAR). The full proceedings of LFMTP 2014 are available in the ACM International Conference Proceedings Series at http://www.acm.org/publications/icp_series or http://dl.acm.org/icps.cfm?.

09:00
Proof-Theoretic Foundations of Indexing in Logic Programming

ABSTRACT. Indexing is generally viewed as an implementation artifact, indispensable to speed up the execution of logic programs and theorem provers, but with little intrinsically logical about it. We show that indexing can be given a justification in proof theory on the basis of focusing and linearity. We demonstrate this approach on predicate indexing for Horn clauses and certain formulations of hereditary Harrop formulas. We also show how to refine this approach to discriminate on function symbols as well.

09:25
Hybrid Extensions in a Logical Framework

ABSTRACT. We discuss the extension of the standard LF logical framework with operators for manipulating worlds, as found in hybrid logics or in the HLF framework. To overcome the restrictions of HLF, we present a more general approach to worlds in LF, where the structure of worlds can be described in an abstract way. We give a canonical presentation of the system and discuss the encoding of logical systems, beyond the limited scope of linear logic that formed the main goal of HLF.

09:50
Variable Arity for LF
SPEAKER: John Boyland

ABSTRACT. The dependently-typed lambda calculus (LF) supports use of meta-level binding in reasoning about bindings and hypotheticals in programming languages. That is, lambda expressions in LF can be used to model binders and hypothetical judgments depending on fixed-size contexts. However, because LF does not have a concept of variable-arity functions, a hypothetical judgment depending on a variable-size context cannot be modeled as an LF function. This paper extends LF to support variable-arity functions. As a result, one can model hypothetical judgments with variable contexts directly in the extended LF. The extended LF is simpler and more powerful than previous work that uses complex meta-machinery to type the LF context. This is work in progress: we are still in the process of constructing a proof of correctness.

For more information: http://www.cs.uwm.edu/~boyland/papers/lf+variable.html

09:00-10:15 Session 64A: Opening + Invited Talk 1 (HCVS)
Location: FH, Seminarraum 134A
09:00
Invited talk: Proof and Refutations for Horn-clause Encodings of Reachability Problems

ABSTRACT. The talk focuses on solving program reachability problems by means of a Horn-clause style program encoding and suitable decision procedures. We will first examine a way to construct the necessary invariants using a combination of abstract interpretation and DPLL-style satisfiability solving. We will then consider a technique for generating counterexamples, and ways in which these counterexamples can be used for proof in a synergistic fashion. I will finally show an encoding for concurrent systems based on partial-order semantics.

09:00-10:15 Session 64B (REORDER)
Location: FH, Hörsaal 4
09:00
Digging deeper: mole looking for potential weak memory related bugs in Debian

ABSTRACT. Modern multiprocessors exhibit behaviours described as weak memory models. In recent work we proposed an axiomatic generic framework to describe these, and instantiated it for SC, TSO/Intel x86, IBM Power, ARM, and C++ restricted to release-acquire atomics. This foundational work was accompanied by empirical studies on two levels: first, extensive testing on ARM hardware revealed a behaviour later acknowledged as a bug by ARM; second, using a new static analysis tool called mole, more than 1500 software packages were scrutinised.

In this talk a brief overview of the work published in TOPLAS and PLDI will be provided, followed by a detailed account of mole. The talk will discuss both the design of mole as well as the results obtained.

09:00-10:15 Session 64C: Opening + Invited Talk by Viorica Sofronie-Stokkermans (PAS)
Location: FH, Seminarraum 138A
09:00
Hierarchical reasoning and quantifier elimination for the verification of parametric reactive and hybrid systems
09:00-10:15 Session 64E: Welcome and Invited Talk (SMT)
Location: FH, Seminarraum 104
09:00
Welcome
09:15
SMT: Where do we go from here?
SPEAKER: Clark Barrett

ABSTRACT. There is no question that the last decade has been a remarkable success story for SMT. Yet, many challenges remain that are obstacles to unlocking the full potential of this technology. In this talk, I will take a look at what has brought SMT to this point, including some notable success stories. Then I will discuss some of the remaining challenges, both technical and non-technical, and suggest directions for addressing these challenges.

09:15-10:15 Session 65A: NSV Invited Talk: Jean-Michel Muller (NSV)
Location: FH, Hörsaal 2
09:15
Getting tight error bounds in floating-point arithmetic: illustration with complex functions, and the real x^n function

ABSTRACT. The specifications provided by the IEEE 754 standard for floating-point arithmetic allow one to design very simple algorithms for functions such as $x^n$, or complex multiplication and division, that are much more accurate than what one might think at first glance. We will show how tight error bounds are obtained for these algorithms, and discuss how generic examples can be built to show the tightness of these error bounds.
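
For orientation, a textbook-style bound rather than one of the tight bounds of the talk: under the standard model of IEEE 754 arithmetic, each multiplication returns $\mathrm{fl}(ab) = ab(1+\delta)$ with $|\delta| \leq u$, the unit roundoff. Computing $x^n$ by $n-1$ successive multiplications therefore yields $\hat{y} = x^n \prod_{i=1}^{n-1}(1+\delta_i)$, whose relative error is at most $(n-1)u + O(u^2)$; the talk is concerned with obtaining, and proving tight, bounds sharper than such generic ones.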

09:15-10:15 Session 65B: Invited talk (LC)
Location: MB, Prechtlsaal
09:15
Definable cardinals just beyond R / Q

ABSTRACT. Over the last few decades, a definable refinement of the usual notion of cardinality has been employed to great effect in shedding new light on many classification problems throughout mathematics. In order to best understand such applications, one must investigate the abstract nature of the definable cardinal hierarchy.

It is well known that the initial segment of the hierarchy below R / Q looks quite similar to the usual cardinal hierarchy. On the other hand, if one moves sufficiently far beyond R / Q, the two notions diverge wildly.

After reviewing these results, we will discuss recent joint work with Clinton Conley, seeking to explain the difficulty in understanding definable cardinality beyond R / Q by showing that the aforementioned wild behavior occurs immediately thereafter.

09:15-10:15 Session 65C: UITP Invited Talk 1 (Geoff Sutcliffe: "TPTP Process Instruction Language") (UITP)
Location: FH, Seminarraum 107
09:15
The TPTP Process Instruction Language, with Applications
09:15-10:15 Session 65D: Invited Talk by Martín Escardó: Excluded Middle Considered as a Mathematical Problem (PSC)
Location: FH, CAD 1
09:15
Excluded Middle Considered as a Mathematical Problem (Invited Talk)
10:15-10:45Coffee Break
10:45-13:00 Session 66AA: Model Theory and Complexity (CSL-LICS)
Location: FH, Hörsaal 1
10:45
Hyper-Ackermannian Bounds for Pushdown Vector Addition Systems

ABSTRACT. This paper studies the boundedness and termination problems for vector addition systems equipped with one stack. We introduce an algorithm, inspired by the Karp & Miller algorithm, that solves both problems for the larger class of well-structured pushdown systems. We show that the worst-case running time of this algorithm is hyper-Ackermannian for pushdown vector addition systems. For the upper bound, we introduce the notion of bad nested words over a well-quasi-ordered set, and we provide a general scheme of induction for bounding their lengths. We derive from this scheme a hyper-Ackermannian upper bound for the length of bad nested words over vectors of natural numbers. For the lower bound, we exhibit a family of pushdown vector addition systems with finite but large reachability sets (hyper-Ackermannian).

11:15
Preservation and Decomposition Theorems for Bounded Degree Structures

ABSTRACT. We provide elementary algorithms for two preservation theorems for first-order sentences with modulo m counting quantifiers (FO+MODm) on the class Cd of all finite structures of degree at most d: For each FO+MODm-sentence that is preserved under homomorphisms (extensions) on Cd, a Cd-equivalent existential-positive (existential) FO-sentence can be constructed in 4-fold (6-fold) exponential time. For FO-sentences, the algorithm has 4-fold (5-fold) exponential time complexity. This is complemented by lower bounds showing that for FO-sentences a 3-fold exponential blow-up of the computed existential-positive (existential) sentence is unavoidable.

Furthermore, we show that for an input FO-formula, a Cd-equivalent Feferman-Vaught decomposition can be computed in 3-fold exponential time. We also provide a matching lower bound.

11:45
On the Succinctness of Query Rewriting over OWL 2 QL Ontologies with Bounded Chase

ABSTRACT. We investigate the size of first-order rewritings of conjunctive queries over OWL 2 QL ontologies of depth 1 and 2 by means of hypergraph programs computing Boolean functions. Both positive and negative results are obtained. Conjunctive queries over ontologies of depth 1 have polynomial-size nonrecursive datalog rewritings; tree-shaped queries have quadratic positive existential rewritings; however, in the worst case, positive existential rewritings can only be of superpolynomial size. Positive existential and nonrecursive datalog rewritings of queries over ontologies of depth 2 suffer an exponential blowup in the worst case, while first-order rewritings are superpolynomial unless NP \subseteq P/poly. We also analyse rewritings of tree-shaped queries over arbitrary ontologies and observe that the query entailment problem for such queries is fixed-parameter tractable.

12:15
MSO Queries on Trees: Enumerating Answers under Updates

ABSTRACT. We investigate efficient view maintenance for MSO-definable queries over trees or, more precisely, efficiently enumerating answers of MSO-definable queries over words and trees which are subject to local updates. For words we exhibit an algorithm that uses an O(n) preprocessing phase and enumerates answers with O(log n) delay between them. When the word is updated, the algorithm can avoid repeating expensive preprocessing and restart the enumeration phase within O(log n) time. For trees, our algorithm uses O(n) preprocessing time, enumerates answers with O(log^2 n) delay, and can restart enumeration within O(log^2 n) time after receiving an update to the tree. This significantly improves the cost of recomputing the answers of a query from scratch.

10:45-13:00 Session 66AB: Semantics and Verification (CSL-LICS)
Location: FH, Hörsaal 5
10:45
Transition systems over games

ABSTRACT. We describe a framework for game semantics combining operational and denotational accounts. A game is a bipartite graph of "passive" and "active" positions, or a categorical variant with morphisms between positions.

The operational part of the framework is given by a labelled transition system in which each state sits in a particular position of the game. From a state in a passive position, transitions are labelled with a valid O-move from that position, and take us to a state in the updated position. Transitions from states in an active position are likewise labelled with a valid P-move, but silent transitions are allowed, which must take us to a state in the same position.

The denotational part is given by a "transfer" from one game to another, a kind of program that converts moves between the two games, giving an operation on strategies. The agreement between the two parts is given by a relation called a "stepped bisimulation".

The framework is illustrated by an example of substitution within a lambda-calculus.

11:15
Memoryful Geometry of Interaction: From Coalgebraic Components to Algebraic Effects
SPEAKER: unknown

ABSTRACT. Girard's Geometry of Interaction (GoI) is an interaction-based semantics of linear logic proofs and, via suitable translations, of functional programs in general. Its mathematical cleanness identifies essential structures in computation; moreover its use as a compilation technique from programs to state-based machines has been worked out by Mackie, Ghica and others. In this paper, we take Abramsky's idea of resumption-based GoI and develop it systematically into a generic framework that accounts for computational effects (nondeterminism, probability, exception, global states, interactive I/O, etc.). The framework is categorical: algebraic operations provide an interface to computational effects (following Plotkin and Power); the framework is built on the categorical axiomatization of GoI by Abramsky, Haghverdi and Scott; and, by use of the coalgebraic formalization of component calculus, the framework describes explicit construction of state machines as interpretations of functional programs. The obtained models are shown to be sound with respect to equations between algebraic operations, as well as to Moggi's standard equations for the computational lambda calculus. The construction is illustrated by concrete examples.

11:45
Compositional Higher-Order Model Checking via Omega-Regular Games over Boehm Trees
SPEAKER: unknown

ABSTRACT. We introduce type-checking games, which are omega-regular games over Boehm trees, determined by a type of the Kobayashi-Ong intersection type system. These games are a higher-type extension of parity games over trees, determined by an alternating parity tree automaton. However, in contrast to these games over trees, the "game boards" of our type-checking games are composable, using the composition of Boehm trees. Moreover the winner of a composite game is completely determined by the respective winners of the component games.

To our knowledge, type-checking games give the first compositional analysis of higher-order model checking, or the model checking of trees generated by recursion schemes. We study a higher-type analogue of higher order model checking, namely, the problem to decide the winner of a type-checking game over the Boehm tree generated by a lambda-Y-term. We introduce a new type system and use it to prove that the problem is decidable.

On the semantic side, we develop a novel arena game model for type-checking games, which is a cartesian closed category equipped with parametric monad and comonad that themselves form a parametrised adjunction.

12:15
On the Hoare Theory of Monadic Recursion Schemes

ABSTRACT. The equational theory of monadic recursion schemes is known to be decidable by the result of S\'enizergues on the decidability of the problem of DPDA equivalence. In order to capture some properties of the domain of computation, we augment equations with certain hypotheses. This preserves the decidability of the theory, which we call \emph{simple implicational theory}. The asymptotically fastest algorithm known for deciding the equational theory, and also for deciding the simple implicational theory, has running time that is non-elementary. We therefore consider a restriction of the properties about schemes to check: instead of arbitrary equations $f \equiv g$ between schemes, we focus on propositional Hoare assertions $\{p\}f\{q\}$, where $f$ is a scheme and $p, q$ are tests. Such Hoare assertions have a straightforward encoding as equations. We investigate the \emph{Hoare theory} of monadic recursion schemes, that is, the set of valid implications whose conclusions are Hoare assertions and whose premises are of a certain simple form. We present a sound and complete Hoare-style calculus for this theory. We also show that the Hoare theory can be decided in exponential time, and that it is complete for this class.

10:45-13:00 Session 66AC (ITP)
Location: FH, Hörsaal 8
10:45
Formalized, effective domain theory in Coq

ABSTRACT. I present highlights from a formalized development of domain theory in the theorem prover Coq. This is the first development of domain theory that is effective, formalized, and that supports all the usual constructions on domains. In particular, I develop constructive models of both the unpointed profinite and the pointed profinite domains. Standard constructions (e.g., products, sums, the function space, and powerdomains) are all developed. In addition, I build the machinery necessary to compute solutions to recursive domain equations.

11:15
Completeness and Decidability Results for CTL in Coq

ABSTRACT. We prove completeness and decidability results for the temporal logic CTL in Coq/Ssreflect. Our basic result is a constructive proof that for every formula one can obtain either a finite model satisfying the formula or a proof in a Hilbert system certifying the unsatisfiability of the formula. The proof is based on a history-augmented tableau system obtained as the dual of Brünnler and Lange's cut-free sequent calculus for CTL. We prove the completeness of the tableau system and give a translation of tableau refutations into Hilbert refutations. Decidability of CTL and completeness of the Hilbert system follow as corollaries.

11:45
Universe Polymorphism in Coq
SPEAKER: unknown

ABSTRACT. Universes are used in Type Theory to ensure consistency by checking that definitions are well-stratified according to a certain hierarchy. In the case of the Coq proof assistant, based on the predicative Calculus of Inductive Constructions (pCIC), this hierarchy is built from an impredicative sort Prop and an infinite number of predicative Type(i) universes. A cumulativity relation represents the inclusion order of universes in the core theory. Originally, universes were thought of as floating levels, and definitions implicitly constrained these levels in a consistent manner. This works well for most theories; however, the globality of the levels and the constraints precluded generic constructions on universes that could work at different levels. Universe polymorphism extends this setup by adding local bindings of universes and constraints, supporting generic definitions over universes, reusable at different levels. This provides the same kind of code reuse facilities as ML-style parametric polymorphism. However, the structure and hierarchy of universes is more complex than bare polymorphic type variables. In this paper, we introduce a conservative extension of pCIC supporting universe polymorphism and treating its whole hierarchy. This new design supports typical ambiguity and implicit polymorphic generalization at the same time, keeping it mostly transparent to the user. Benchmarking the implementation as an extension of the Coq proof assistant on real-world examples gives encouraging results.

12:15
Compositional Computational Reflection

ABSTRACT. Current work on computational reflection is single-minded; each reflective procedure is written with a specific application or scope in mind. Composition of these reflective procedures is done by a proof-generating tactic language such as Ltac. This composition, however, comes at the cost of both larger proof terms and redundant preprocessing. In this work, we propose a methodology for writing composable reflective procedures that solve many small tasks in a single invocation. The key technical insights are techniques for reasoning semantically about extensible syntax in intensional type theory. We also consider how users can customize the behavior of reflective procedures by packaging additional information and passing it to them. This mimics Coq's usual support for hint databases with only slightly more user effort. We evaluate our techniques in the context of imperative program verification.

12:45
Formal C semantics: CompCert and the C standard
SPEAKER: unknown

ABSTRACT. We discuss the difference between a formal semantics of the C standard, and a formal semantics of an implementation of C that satisfies the C standard. In this context we extend the CompCert semantics with end-of-array pointers and the possibility to byte-wise copy objects. This is a first and necessary step towards proving that the CompCert semantics satisfies the formal version of the C standard that is being developed in the Formalin project in Nijmegen.

10:45-12:45 Session 66AD (RTA-TLCA)
Location: Informatikhörsaal
10:45
Tree Automata with Height Constraints between Brothers
SPEAKER: unknown

ABSTRACT. We define tree automata with height constraints between brothers (TACBB_H). Constraints of equalities and inequalities between the heights of siblings, which restrict the applicability of the rules, are allowed in TACBB_H. These constraints make it possible to express natural tree languages like complete or balanced (like AVL) trees. We prove decidability of emptiness and finiteness for TACBB_H, and also for a more general class that additionally allows combining equality and disequality constraints between brothers.
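
For instance (notation ours): over a binary symbol $f$ and a constant $a$, a TACBB_H with rules $a \to q$ and $f(q, q) \to q$, the latter carrying the constraint $|h_1 - h_2| \leq 1$ on the heights $h_1, h_2$ of the two sibling subterms, accepts exactly the height-balanced (AVL-shaped) binary trees.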

11:15
Reduction System for Extensional Lambda-mu Calculus
SPEAKER: unknown

ABSTRACT. The Lambda-mu-calculus is an extension of Parigot’s lambda-mu-calculus, the untyped variant of which was proved by Saurin to satisfy fundamental properties such as standardization and the separation theorem. For the untyped Lambda-mu-calculus, Nakazawa and Katsumata gave extensional models, called stream models, which are a simple extension of the lambda models and in which terms are represented as functions on streams. However, we cannot construct a term model from the Lambda-mu-calculus. This paper introduces an extension of the Lambda-mu-calculus, called Lambda-mu-cons, on which we can construct a term model as a stream model, together with a reduction system for it satisfying confluence, subject reduction, and strong normalization.

11:45
Proof terms for infinitary rewriting
SPEAKER: unknown

ABSTRACT. In this paper we generalize the notion of proof term to the realm of transfinite reduction. Proof terms represent reductions in the first-order term format, thereby facilitating their formal analysis. We show that any transfinite reduction can be faithfully represented as an infinitary proof term, which is unique up to (infinitary) associativity.

Our main use of proof terms is in a definition of permutation equivalence for transfinite reductions, on the basis of permutation equations. This definition involves a variant of equational logic, adapted for dealing with infinite objects.

A proof of the compression property via proof terms is presented, which establishes permutation equivalence between the original and the compressed reductions.

12:15
Near semi-rings and lambda calculus

ABSTRACT. A connection between lambda calculus and the algebra of near semi-rings is discussed. Among the results is the following completeness theorem. A first-order equation in the language of binary associative distributive algebras is true in all such algebras if and only if the interpretations of the first order terms as lambda terms beta-eta convert to one another. A similar result holds for equations containing free variables.

10:45-11:15 Session 66AF: Simplification and Solving II (SAT)
Location: FH, Hörsaal 6
10:45
Minimal-Model-Guided Approaches to Solving Polynomial Constraints and Extensions

ABSTRACT. In this paper we present new methods for deciding the satisfiability of formulas involving integer polynomial constraints. In previous work we proposed to solve SMT(NIA) problems by reducing them to SMT(LIA): non-linear monomials are linearized by abstracting them with fresh variables and by performing case splitting on integer variables with finite domain. When variables do not have finite domains, artificial ones can be introduced by imposing a lower and an upper bound, and made iteratively larger until a solution is found (or the procedure times out). For the approach to be practical, unsatisfiable cores are used to guide which domains have to be relaxed (i.e., enlarged) from one iteration to the following one. However, it is not clear then how large they have to be made, which is critical.

Here we propose to guide the domain relaxation step by analyzing minimal models produced by the SMT(LIA) solver. Namely, we consider two different cost functions: the number of violated artificial domain bounds, and the distance with respect to the artificial domains. We compare these approaches with other techniques on benchmarks coming from constraint-based program analysis and show the potential of the method. Finally, we describe how one of these minimal-model-guided techniques can be smoothly adapted to deal with the extension Max-SMT of SMT(NIA) and then applied to program termination proving.
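
The iterate-and-relax loop described above can be sketched as follows; this is a self-contained toy (ours), with brute-force search standing in for the SMT(LIA) solver and blind doubling standing in for the core- and model-guided relaxation:

    from itertools import product

    def solve_within(bound, constraint, n_vars):
        # Stand-in for the SMT(LIA) solver: search the artificial
        # domains [-bound, bound]^n_vars for a model.
        for point in product(range(-bound, bound + 1), repeat=n_vars):
            if constraint(*point):
                return point
        return None  # "unsat" within the current artificial domains

    def solve(constraint, n_vars, bound=1, max_bound=64):
        while bound <= max_bound:
            model = solve_within(bound, constraint, n_vars)
            if model is not None:
                return model
            bound *= 2  # relax (enlarge) the artificial domains, retry
        return None  # the real procedure would time out here

    # A small nonlinear constraint: x*y == 12 and x > y.
    print(solve(lambda x, y: x * y == 12 and x > y, 2))  # (-3, -4)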

10:45-12:45 Session 66AG: NSV Contributed Talks: Formal Methods in Numerical Programs (NSV)
Location: FH, Hörsaal 2
10:45
Theorem-Proving Analysis of Digital Control Logic Interacting with Continuous Dynamics

ABSTRACT. This work outlines an equation-based formulation of a digital control program and transducer interacting with a continuous physical process, and an approach using the Coq theorem prover for verifying the performance of the combined hybrid system. Considering thermal dynamics with linear dissipation for simplicity, we focus on a generalizable, physically consistent description of the interaction of the real-valued temperature and the digital program acting as a thermostat. Of interest in this work is the discovery and formal proof of bounds on the temperature, the degree of variation, and other performance characteristics. Our approach explicitly addresses the need to mathematically represent the decision problem inherent in an analog-to-digital converter, which for rare values can take an arbitrarily long time to produce a digital answer (the so-called Buridan's Principle); this constraint ineluctably manifests itself in the verification of thermostat performance. Furthermore, the temporal causality constraints in the thermal physics must be made explicit to obtain a consistent model for analysis. We discuss the significance of these findings toward the verification of digital control for more complex physical variables and fields.

11:15
Transformation of a PID Controller for Numerical Accuracy

ABSTRACT. Numerical programs performing floating-point computations are very sensitive to the way formulas are written. Several techniques have been proposed concerning the transformation of expressions in order to improve their accuracy, and we now aim at going a step further by automatically transforming larger pieces of code containing several assignments and control structures. This article presents a case study in this direction. We consider a PID controller and we transform its code in order to improve its accuracy. The experimental data obtained when we compare the different versions of the code (which are mathematically equivalent) show that those transformations have a significant impact on the accuracy of the computations.
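
The phenomenon exploited here, namely that mathematically equivalent code can differ in floating-point accuracy, is easy to reproduce. The following sketch (ours, unrelated to the article's actual transformation) accumulates a PID-style integral term naively and with compensated (Kahan) summation:

    def integral_naive(errors, dt):
        s = 0.0
        for e in errors:
            s += e * dt
        return s

    def integral_compensated(errors, dt):
        # Kahan summation: the same mathematical sum, but the running
        # compensation c recovers low-order rounding error.
        s, c = 0.0, 0.0
        for e in errors:
            y = e * dt - c
            t = s + y
            c = (t - s) - y
            s = t
        return s

    errors = [0.1] * 10**6  # a constant error signal, with dt = 0.01
    a = integral_naive(errors, 0.01)
    b = integral_compensated(errors, 0.01)
    # The two loops are mathematically equivalent but round differently;
    # the compensated version accumulates the terms more accurately.
    print(a, b, abs(a - b))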

11:45
Policy iteration in finite templates domain
SPEAKER: Assale Adje

ABSTRACT. We prove in this paper that Policy Iteration can be defined in general over a finite domain of templates using Lagrange duality. Such a Policy Iteration algorithm converges to a fixed point when a very simple technical condition holds. This fixed point furnishes a safe over-approximation of the set of reachable values taken by the variables of a program. We also prove that Policy Iteration can be easily initialised for small programs when the templates are correctly chosen.

10:45-13:00 Session 66AH: UITP Contributed Talks (Proof Strategies, HCI Focus Groups, Proof Visualization, Term Rewriting) (UITP)
Location: FH, Seminarraum 107
10:45
Tinker, tailor, solver, proof

ABSTRACT. We introduce Tinker, a tool for designing and evaluating proof strategies based on proof-strategy graphs, a formalism previously introduced by the authors. We represent proof strategies as open-graphs, which are directed graphs with additional input/output edges. Tactics appear as nodes in a graph, and can be `piped' together by adding edges between them. Goals are added to the input edges of such a graph, and flow through the graph as the strategy is evaluated. Properties of the edges ensure that only the right `type' of goals are accepted. In this paper, we detail the Tinker tool and show how it can be integrated with two different theorem provers: Isabelle and ProofPower.

11:15
How to Put Usability into Focus: Using Focus Groups to Evaluate the Usability of Interactive Theorem Provers
SPEAKER: Sarah Grebing

ABSTRACT. In recent years the effectiveness of interactive theorem provers has increased to such an extent that the bottleneck in the interactive process has shifted to efficiency: while in principle large and complex theorems are provable (effectiveness), it takes a lot of effort for the user interacting with the system (lack of efficiency). We conducted focus groups to evaluate the usability of Isabelle/HOL and the KeY system with two goals: (a) to detect usability issues in the interaction between interactive theorem provers and their users, and (b) to analyze how evaluation methods commonly used in the area of human-computer interaction, such as focus groups and co-operative evaluation, are applicable to the specific field of interactive theorem proving (ITP).

In this paper, we report on our experience using the evaluation method focus groups and how we adapted this method to ITP. We describe our results and conclusions mainly on the ``meta-level,'' i.e., we focus on the impact that specific characteristics of ITPs have on the setup and the results of focus groups. On the concrete level, we briefly summarise insights into the usability of the ITPs used in our case study.

11:45
Advanced Proof Viewing in PROOFTOOL
SPEAKER: Martin Riener

ABSTRACT. Sequent calculus is widely used for formalizing proofs. However, understanding the proofs of even simple mathematical arguments soon becomes impossible. Graphical user interfaces help in this matter, but most stick with Gentzen's original notation. This paper tries to catch up with recent developments in tree visualization and proposes the Sunburst Tree layout as a complement to the traditional tree structure. It constructs inferences as concentric circle arcs around the root inference, allowing the user to investigate the proof's structural content. We further describe the integration into PROOFTOOL and the interaction with the Gentzen layout.

12:15
The Certification Problem Format

ABSTRACT. We provide an overview of CPF, the certification problem format, and explain some design decisions. Whereas CPF was originally invented to combine three different formats for termination proofs into a single one, in the meantime proofs for several other properties of term rewrite systems have also become expressible: confluence, complexity, and completion, for example. As a consequence, the format is already supported by several tools and certifiers. Its acceptance is also demonstrated in international competitions: the certified tracks of both the termination and the confluence competition utilized CPF as the exchange format between automated tools and trusted certifiers.

10:45-13:00 Session 66AI: Invited Talk and Imperative Programs (WST)
Location: FH, Seminarraum 101B
10:45
Termination of Biological Programs
SPEAKER: Jasmin Fisher

ABSTRACT. Living cells are highly complex reactive systems operating under varying environmental conditions. By executing diverse cellular programs, cells are driven to acquire distinct cell fates and behaviours. Deciphering these programs is key to understanding how cells orchestrate their functions to create robust systems in health and disease. Due to the staggering complexity, this remains a major challenge. Stability in biological systems is a measure of the homeostatic nature and robustness against environmental perturbations. In computer science, stability means that the system will eventually reach a fixed point regardless of its initial state. Or, in other words, that all computations terminate with variables acquiring the same value regardless of the path that led to termination. Based on robust techniques to prove stabilization/termination in very large systems, we have developed an innovative platform called BMA that allows biologists to model and analyse biological signalling networks. BMA analyses systems for stabilization, searches for paths leading to stabilization, and allows for bounded model-checking and simulation, all with intelligible visualization of results. In this talk, I will summarize our efforts in this direction and talk about the application of termination analysis in drug discovery and cancer.

Joint work with Byron Cook, Nir Piterman, Samin Ishtiaq, Alex Taylor and Ben Hall.

11:45
On Improving Termination Preservability of Transformations from Procedural Programs into Rewrite Systems by Using Loop Invariants
SPEAKER: Naoki Nishida

ABSTRACT. Recently, to analyze procedural programs by using techniques in the field of term rewriting, several transformations of a program into a rewrite system have been developed. Such transformations are basically complete in the sense of computation, and e.g., termination of the rewrite system ensures termination of the program. However, in general, termination of the program is not preserved by the transformations and, thus, the preservation of termination is a common interesting problem. In this paper, we discuss the improvement of a transformation from a simple procedural program over integers into a constrained term rewriting system by appending loop invariants to loop conditions of "while" statements so as to preserve termination as much as possible. 
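
A toy instance of the idea (ours): suppose the loop "while (x != 0) x := x - 1" is only ever entered in states satisfying the invariant $x \geq 0$. Its direct translation $f(x) \to f(x-1)~[x \neq 0]$ is a nonterminating constrained rewrite system (take $x = -1$), even though the program terminates; appending the invariant to the loop condition yields $f(x) \to f(x-1)~[x \neq 0 \wedge x \geq 0]$, which is terminating.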

12:15
Automatic Termination Analysis for GPU Kernels
SPEAKER: Jeroen Ketema

ABSTRACT. We describe a method for proving termination of massively parallel GPU kernels. An implementation in KITTeL is able to show termination of 94% of the 598 kernels in our benchmark suite.

12:45
Discussion
SPEAKER: Everyone
10:45-11:35 Session 66AJ: Invited Talk (LFMTP)
Location: FH, Zeichensaal 1
10:45
Session Types Meet Separation Logic

ABSTRACT. Session types and separation logic are two leading methodologies for software verification. Session types allow users to write protocols that concurrent programs must adhere to; a session type specifies the order in which messages have to be exchanged, and the types of the data those messages carry. By checking that programs follow compatible session types, we can reason about the ways processes interact, ultimately guaranteeing the absence of deadlocks and race conditions in sessions. Separation logic is an extension of Hoare logic that is typically used to prove full functional correctness of sequential stateful programs; using separation logic, we can write pre- and post-conditions for program statements that use mutable stores such as a heap and modularly verify that they satisfy these specifications. Separation logic is more expressive than session types when it comes to data: with it we can state properties such as "x is a number greater than five", whereas session types can only express that "x is a number". On the other hand, session types offer a powerful means of checking that the communications among concurrent programs do not interfere with each other or deadlock.

In this talk we describe a merger of session types and a higher-order separation logic by extending a Java-like programming language with send, receive, and fork primitives and adding session types to the Hoare logic. We combine the two methodologies to verify that a system of concurrent programs performs some communications to reach a common goal, which we define using separation logic assertions on their respective mutable stores. This allows us to verify programs like distributed sorting algorithms, where an agent can receive a list and later send a sorted version of that list. All other commands, such as loops, memory accesses and function calls are handled using separation logic in the usual way. We also maintain modular verification, as each agent is individually verified as if it were a sequential program.
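
As a small example in standard session-type notation (ours, not taken from the talk): the type $?\mathit{list}.\,!\mathit{list}.\,\mathbf{end}$ describes a channel endpoint that receives a list and then sends one back; the separation-logic side can additionally assert that the list sent is a sorted permutation of the list received, which is precisely the distributed-sorting example mentioned above.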

This is a joint work with Morten Fangel Jensen and Fabrizio Montesi.

10:45-13:00 Session 66AK: Program Transformation (CICLOPS-WLPE)
Location: FH, Hörsaal 7
10:45
Refining definitions with unknown opens using XSB for IDP3
SPEAKER: unknown

ABSTRACT. FO(·)IDP3 is a declarative modeling language that extends first-order logic with inductive definitions, partial functions, types and aggregates. Its model generator IDP3 evaluates all definitions that depend on fully known information before grounding and solving the problem specification. In this paper, we introduce a technique which allows us to refine the interpretation of defined symbols when they depend on information that is only partially given instead of completely given. We extend our existing transformation of FO(·)IDP3 definitions to Tabled Prolog rules with support for definitions that depend on information that is possibly partially unknown. In this paper we present an algorithm that uses XSB Prolog to evaluate these rules in such a way that we achieve the most precise possible refinement of the defined symbols. Experimental results show that our technique derives extra information for the defined symbols.

11:15
A Lambda Prolog Based Animation of Twelf Specifications
SPEAKER: unknown

ABSTRACT. Specifications in the Twelf system are based on a logic programming interpretation of the Edinburgh Logical Framework or LF. We consider an approach to animating such specifications using a lambda Prolog implementation. This approach is based on a lossy encoding of LF expressions by simply typed lambda calculus (STLC) terms and a subsequent encoding of the lost dependency information in predicates that are defined by suitable clauses. To use this idea in an actual implementation, it is necessary also to describe an inverse transformation from STLC terms into LF expressions. While such an inverse does not always exist, we show that it does in the restricted setting where the STLC terms are the outcome of derivations constructed from predicate clauses that result from translating Twelf specifications. Twelf also supports the possibility of computing with LF specifications in which types may contain variables with existential force. We discuss the possibility of supporting such a capability in our translation-based approach to animation.

11:45
A System for Embedding Global Constraints into SAT
SPEAKER: unknown

ABSTRACT. Despite being a dominant tool for solving some practical NP-complete problems, Propositional Satisfiability (SAT) also has a number of weaknesses, including its inability to compactly represent numerical constraints and its low-level, unstructured language of clauses. In contrast, Constraint Programming (CP) has been widely used for scheduling and solving combinatorial search problems. In this report, we present a tight integration of SAT with CP, called SAT(gc), which embeds global constraints into SAT. A prototype is implemented by integrating the state-of-the-art SAT solver zchaff and the generic constraint solver gecode. Experiments are carried out for benchmarks from puzzle domains and planning domains to reveal insights into compact representation, solving effectiveness, and the novel usability of the new framework.

12:15
Towards Pre-Indexed Terms
SPEAKER: unknown

ABSTRACT. Indexing of terms and clauses is a well-known technique used in Prolog implementations, as well as automated theorem provers, to speed up search. In this paper we show how the same mechanism can be used to implement efficient reversible mappings between term representations (which we will call pre-indexing). Based on user-provided term descriptions, this mapping allows us to use cheap data encodings internally (such as prefix trees). We found that for some classes of programs we can drastically improve the efficiency by carefully selecting the application of the mapping at specific program points.
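
In miniature (our illustration, much simpler than the prefix-tree encodings of the paper), a reversible mapping between a term representation and a cheap internal encoding can look like this:

    class TermIndex:
        # A reversible mapping between terms and small integers.
        def __init__(self):
            self._to_id = {}
            self._to_term = []

        def encode(self, term):
            # Terms are hashable tuples, e.g. ("f", ("g", "a"), "b").
            if term not in self._to_id:
                self._to_id[term] = len(self._to_term)
                self._to_term.append(term)
            return self._to_id[term]

        def decode(self, idx):
            return self._to_term[idx]

    idx = TermIndex()
    i = idx.encode(("f", ("g", "a"), "b"))
    assert idx.decode(i) == ("f", ("g", "a"), "b")  # lossless round-trip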

12:45
Discussion
SPEAKER: Everyone

ABSTRACT. We have additional discussion time for all talks before lunch on July 17th.

10:45-13:00 Session 66AL: Technical Session (HCVS)
Location: FH, Seminarraum 134A
10:45
Please consult the program at http://vsl2014.at/hcvs
SPEAKER: T.B.A
10:45-12:45 Session 66AM: Technical Session (EC2 and REORDER)
Location: FH, Hörsaal 4
10:45
Reordering and Verification in the Linux Kernel
SPEAKER: Paul McKenney

ABSTRACT. The Linux kernel has long run correctly on systems with weak memory models; in fact, none of the systems supporting Linux feature a sequentially consistent memory model. Therefore, the Linux kernel provides a number of primitives that provide various ordering guarantees, including split counters, memory allocators, locking primitives, memory-barrier primitives, atomic operations, and read-copy update (RCU). Given that RCU is relatively unfamiliar, this talk will give a brief RCU overview.

However, all of this weak ordering does pose a validation challenge, one that has grown especially acute in recent years. The second half of this talk will therefore focus on validation challenges specific to the Linux kernel, on how the Linux kernel community approaches validation, and on what more might be possible.

11:45
Reasoning about the C/C++ Weak Memory Model

ABSTRACT. In this talk, I will try to answer two key questions regarding the 2011 C/C++ memory model: (1) What high-level principles can programmers use to reason about their programs? (2) What source-to-source transformations can optimising compilers soundly perform?

10:45-12:30 Session 66AN: Automated Debugging (PAS)
Location: FH, Seminarraum 138A
10:45
An Attempt to Formalize Popper's Logic of Scientific Discovery
SPEAKER: Wei Li

ABSTRACT. In 1935, Karl Popper published his famous book entitled ``The Logic of Scientific Discovery.'' In this book he proposed a general procedure of scientific discovery which contains the following key points. (1) A scientific theory consists of ``conjectures'' about domain knowledge. (2) Every conjecture may be disproved, that is, conclusions or predictions derived on the basis of it may be and must be subject to verification using the results of observations of human beings and scientific experiments on natural phenomena. (3) When the conclusions or predictions contradict the results of observations or experiments, the theory is refuted by facts. (4) In this case, one should abandon those conjectures in the current theory which lead to contradictions, propose new conjectures, and form a new theory. (5) Every theory obtained in this way is provisional: it is expected to be refuted by facts at any time. Truth in science is reached in the process of unlimited explorations.

The above procedure is given in the layer of Scientific Philosophy. In this paper, we provide a mathematical-logical framework for this general procedure. It consists of the following four parts. (1) Representability theory, that is, the theory of representing basic concepts such as theory, conjecture, provability, and refutation by facts, which are involved in the general procedure of scientific discovery, in first-order languages. (2) R-calculus, that is, a revision calculus for deleting conjectures refuted by facts or experiments from the current theory. It is a formal inference system about logical connective symbols and quantifier symbols and is sound, complete, and reachable. (3) Theory of version evolution, that is, the theory about version sequences, their limits, proschemes, and the convergency, soundness, and independency of proschemes. (4) I-calculus, that is, a calculus for forming conjectures. It is a formal inference system about contexts and involves the theory of rationality about conjecture calculus.

The above four parts constitute the mathematical framework for the logic of scientific discovery, which may be viewed as a formal description of ``the logic of scientific discovery.'' As examples, we will use this mathematical framework to give formal descriptions of the processes of Einstein's discovery of ``the special theory of relativity'' and Darwin's proposal of ``the theory of evolution.'' The difference between our mathematical framework and the general procedure of Popper's logic of scientific discovery in the sense of scientific philosophy is that the former can be implemented on a computer: the revision of scientific theories and the evolution of version sequences can be accomplished via human-computer interaction, and when the current theory encounters refutation by facts, the computer can provide schemes for rational revision.

11:30
A Model Theory for R-calculus without Cut
SPEAKER: unknown

ABSTRACT. In this paper, we propose a model theory for R-calculus without cut. First, validity of R-transitions and R-transition sequences is defined using models of first-order languages. Then reducibility of R-transition sequences is defined, and it is proved that every R-transition sequence that transits to an R-termination can be reduced to an irreducible R-transition sequence. Finally, the soundness and completeness of R-calculus are redefined, and R-calculus without cut is proved to remain sound and complete.

12:00
RCK: A Software Toolkit for R-calculus
SPEAKER: unknown

ABSTRACT. R-calculus is an inference system to formally deduce all possible maximal contractions of a theory in a first-order language. Based on the theories of R-calculus, we implemented a software toolkit, called RCK, to perform R-calculus on a computer. The toolkit enables users to obtain and verify maximal contractions of a given theory whenever the theory is refuted by given facts. We briefly introduce the framework of RCK and demonstrate how R-calculus can serve scientific discovery.

10:45-13:00 Session 66AP: Analysis: Understanding and Explanation (VSTTE)
Location: FH, Zeichensaal 3
10:45
Finding Security Vulnerabilities in a Network Protocol using Parameterized Systems
SPEAKER: Orna Grumberg

ABSTRACT. This paper presents a novel approach to automatically finding security vulnerabilities in the routing protocol OSPF -- the most widely used protocol for Internet routing. We start by modeling OSPF on (concrete) networks with a fixed number of routers in a specific topology. By using the model checking tool CBMC, we found several simple, previously unpublished attacks on OSPF.

In order to search for attacks in a family of networks with varied sizes and topologies, we define the concept of an abstract network which represents such a family. The abstract network A has the property that if there is an attack on A, then there is a corresponding attack on each of the (concrete) networks represented by A.

The attacks we have found on abstract networks reveal security vulnerabilities in the OSPF protocol, which can harm routing in huge networks with complex topologies. Finding such attacks directly on the huge networks is practically impossible. Abstraction is therefore essential. Further, abstraction enables showing that the attacks are general. That is, they are applicable in a large (even infinite) number of networks. This indicates that the attacks exploit fundamental vulnerabilities, which are applicable to many configurations of the network.

This is joint work with Adi Sosnovich and Gabi Nakibly.

11:45
A Model for Capturing and Replaying Proof Strategies

ABSTRACT. Modern theorem provers can discharge a significant proportion of the Proof Obligations (POs) that arise in the use of Formal Methods (FMs). Unfortunately, the residual POs require tedious manual guidance. On the positive side, these "difficult" POs tend to fall into families, each of which requires only a few key ideas to unlock. This paper outlines a system that can identify and characterise ways of discharging POs of a family by tracking an interactive proof of one member of the family. This opens the possibility of capturing ideas from an expert and/or maximising reuse of ideas after changes to definitions. The proposed system has to store a wealth of meta-information about conjectures, which can be matched against previously learned strategies, or can be used to construct new strategies based on expert guidance. This paper describes this meta-information and how it is used to lessen the burden of FM proofs.

12:10
A Verification Condition Visualizer
SPEAKER: unknown

ABSTRACT. When first encountering data structures such as arrays, records and pointers, programmers are often presented with pictorial representations. The use of pictures to describe data structures and their manipulation can help establish basic programming intuitions. The same is true of program proving, where pictures are frequently used within the literature to describe program properties such as loop invariants. Here we report on an experimental prototype of a visualization tool that translates verification conditions arising from array-based code into pictures. While initially aimed at supporting teaching, we have received positive feedback from users of program proving tools within industry.

12:35
What Gives? A Hybrid Algorithm for Error Explanation
SPEAKER: unknown

ABSTRACT. When a program fails, the cause of the failure is often buried in a long, hard-to-understand error trace. We present a new technique for automatic error localization, which formally unifies prior approaches based on computing interpolants and minimal unsatisfiable cores of failing executions. Our technique works by automatically reducing an error trace to its essential components—a minimal set of statements that are responsible for the error, together with key predicates that explain how these statements lead to the failure. We prove that our approach is sound, and we show that it is useful for debugging real programs.

10:45-13:00 Session 66AQ: Invited Talk, Contributed Talks, and Poster Announcements (DL)
Location: EI, EI 7
10:45
Invited Talk: New Perspectives on Query Reformulation

ABSTRACT. This talk will discuss a very general notion of query reformulation: Given a query Q, a set of constraints Sigma, a target query language or programming language, and a cost metric, determine if there is an object in the target language that is equivalent to Q relative to the constraints, and if so find such an object optimizing the cost metric.

The problem has a long history in the database community, originally motivated by querying over materialized views. This talk will go over several very recent developments, from a database perspective but mentioning the relationship to description logics.

We will first go over the connection between reformulation and interpolation/preservation properties, originating in the work of Nash, Segoufin, and Vianu, and show how this can be extended to give a very elegant theoretical framework for studying the reformulation problem. After that we will turn to performing interpolation and reformulation directly over a physical plan language, rather than a query language. This will include a discussion of:

-- how to extend traditional cost-based query optimization to the setting of constraint-based reformulation

-- how to "plug in" reasoners, particularly those for expressive decidable logics (guarded negation, guarded TGDs), into the reformulation process.

This includes work with Balder ten Cate, Julien Leblay, Efi Tsamoura, and Michael Vanden Boom.

11:45
Acyclic Query Answering under Guarded Disjunctive Existential Rules and Consequences to DLs
SPEAKER: Michael Morak

ABSTRACT. The complete picture of the complexity of conjunctive query answering under guarded disjunctive existential rules has been recently settled. However, in the case of acyclic (unions of) conjunctive queries there are some fundamental questions which are still open. It is the precise aim of the present paper to close those questions. Our main result states that acyclic conjunctive query answering under a fixed set of guarded disjunctive existential rules is EXPTIME-hard. This result, together with an EXPTIME upper bound obtained by exploiting classical results on guarded first-order logic, gives us the complete picture of the complexity of our problem. As an immediate consequence, we get that answering acyclic unions of conjunctive queries under DL-Lite_{bool}^{H}, one of the most expressive languages of the DL-Lite family of DLs, is EXPTIME-complete, even in the case of a fixed TBox. In fact, the EXPTIME-hardness holds even for DL-Lite_{bool}, that is, the DL obtained from DL-Lite_{bool}^{H} after forbidding role hierarchies.

12:10
Complexities of Nominal Schemas
SPEAKER: unknown

ABSTRACT. Nominal schemas extend description logics (DLs) with a restricted form of variables, thus integrating rule-like expressive power into standard DLs. They are also one of the most recently introduced DL features, and in spite of many works on algorithms and implementations, almost nothing was known about their computational complexity and expressivity until recently. In this extended abstract, we review our recent work "Nominal Schemas in Description Logics: Complexities Clarified" that will be presented at KR 2014. We give a comprehensive overview of the reasoning complexities of a wide range of DLs—from EL to SROIQ—extended with nominal schemas. Both combined and data complexities increase by one exponential in most cases, with the one previously known case of SROIQ being the main exception. To further improve our understanding of nominal schemas, we also investigate their semantics, traditionally based on finite grounding, and show that it can be extended to infinite sets of individuals without affecting reasoning complexities. We argue that this might be a more suitable semantics when considering entailments of axioms with nominal schemas.

12:35
Comparing the Expressiveness of Description Logics

ABSTRACT. We compare the expressiveness of the description logics that extend \ALCreg (a variant of PDL) with any combination of the features: inverse roles, nominals, quantified number restrictions, the universal role, the concept constructor for expressing the local reflexivity of a role. The comparison is done for the expressiveness with respect to concepts, positive concepts and TBoxes. It is based on bisimulations.

12:38
An Empirical Investigation of Difficulty of Subsets of Description Logic Ontologies
SPEAKER: unknown

ABSTRACT. Very expressive Description Logics in the $\mathcal{SH}$ family have worst-case complexity ranging from EXPTIME to double NEXPTIME. In spite of this, they are very popular with modellers and serve as the foundation of the Web Ontology Language (OWL), a W3C standard. Highly optimised reasoners handle a wide range of naturally occurring ontologies with relative ease, albeit with some pathological cases. A recent optimisation trend has been modular reasoning, that is, breaking the ontology into hopefully easier subsets with a hopefully smaller overall reasoning time (see MORe and Chainsaw for prominent examples).

However, it has been demonstrated that subsets of an OWL ontology may be harder -- even much harder -- than the whole ontology. This introduces the risk that modular approaches might have even more severe pathological cases than the normal monolithic approaches.

In this paper, we analyse a number of ontologies from the BioPortal repository in order to isolate cases where random subsets are harder than the whole. For such cases, we then examine whether the module nearest to the random subset also exhibits the pathological behaviour.

12:41
Predicting OWL Reasoners: Locally or Globally?
SPEAKER: unknown

ABSTRACT. We propose a novel approach for performance prediction of OWL reasoners. It selects suitable, small ontology subsets and then extrapolates a reasoner's performance on them to the whole ontology. We investigate the intercorrelation of ontology features using PCA. Finally, we discuss various error measures for performance prediction and compare our approach against an existing one using these measures.

12:44
Bridging the Gap between Tableau and Consequence-Based Reasoning
SPEAKER: unknown

ABSTRACT. We present a non-deterministic consequence-based procedure for the description logic ALCHI. Just like the similar-style (deterministic) procedures for EL and Horn-SHIQ, our procedure explicitly derives subsumptions between concepts, but due to non-deterministic rules, not all of these subsumptions are consequences of the ontology. Instead, the consequences are only those subsumptions that can be derived regardless of the choices made in the application of the rules. This is similar to tableau-based procedures, for which an ontology is inconsistent if every expansion of the tableau eventually results in a clash. We report on a preliminary experimental evaluation of the procedure using a version of SNOMED CT with disjunctions, which demonstrates some promising potential.

12:47
Towards a Description Logic for Program Analysis: Extending ALCQIO with Reachability
SPEAKER: Tomer Kotek

ABSTRACT. Program Analysis requires expressive logics enjoying good computational properties and capable of describing graph-structured information. In particular, logical representation of standard dynamic data structures like lists or trees requires some form of reachability assertions. However, reachability is usually not expressible in decidable logics for graphs such as guarded first-order logic (GF), and thus in Description Logics (DLs), which are mainly fragments of GF. In this work, we introduce the DL $ALCQIO_{b,Re}$ that extends the Boolean $ALCQIO$ KBs with reachability assertions and is tailored for reasoning over programs with dynamic data structures. $ALCQIO_{b,Re}$ is powerful enough to express (an unbounded number of) lists and trees, can describe complex data structures with a high degree of sharing, and also allows compositions like trees of lists or lists of trees. We show that finite model reasoning in $ALCQIO_{b,Re}$ is polynomial-time reducible to finite satisfiability of $ALCQIO$-formulae. As a consequence, we get that finite satisfiability and finite implication in $ALCQIO_{b,Re}$ are NEXPTIME-complete. Extensions of DLs with reachability have been studied previously (mainly in terms of fix-points and regular expressions), but $ALCQIO_{b,Re}$ is the first such enriched DL that remains decidable in the simultaneous presence of nominals, inverse roles and counting quantifiers.

12:50
How to Best Nest Regular Path Queries
SPEAKER: unknown

ABSTRACT. Regular path queries (RPQs) define query patterns in terms of regular expressions and are therefore well-suited to query for paths over roles in DL. RPQs can be extended to 2-way RPQs (with converse), CRPQs (with conjunctions), or PRPQs (arbitrary positive Boolean combinations), all of which have been explored in DL research. Another natural extension of any query language is nesting, where query predicates can be defined in terms of subqueries. In this paper, we discuss several ways of introducing nesting to PRPQs, and show that they lead to increasingly expressive query languages: CN2RPQs, which were studied in the context of DLs recently; nested P2RPQs; and positive queries with transitive closure on binary predicates. The latter is one of the most expressive languages for which query answering can still be decided over DL knowledge bases. We present initial complexity results that show query answering to be non-elementary in the worst case, with an exponential increase for each level of nesting.

12:53
Typed Higher-Order Variant of SROIQ - Why Not?
SPEAKER: unknown

ABSTRACT. Ontology modelling sometimes calls for higher-order entities, i.e., classes of classes, and so on. Such constructs are, however, not supported by most widely used ontology languages, such as OWL 2, which is based on the SROIQ DL. This often results in approximate modelling techniques, such as modelling classes as individuals. Some of the logical and ontological information is thus lost, or at least obscured. In this paper we present a typed higher-order variant of SROIQ. We show that it is decidable, and in fact polynomially reducible to regular SROIQ. In the latter part of the paper we then illustrate modelling scenarios in which such a language is useful.

12:56
Practical query answering over Hi(DL-LiteR) knowledge bases
SPEAKER: unknown

ABSTRACT. The language Hi(DL-LiteR) is obtained from DL-LiteR by adding metamodeling features, and is equipped with a query language that is able to express higher-order queries. We investigate the problem of answering a particular class of such queries, called instance higher-order queries, posed over Hi(DL-LiteR) knowledge bases (KBs). The only existing algorithm for this problem is based on the idea of reducing the evaluation of a higher-order query Q over a Hi(DL-LiteR) KB to the evaluation of a union of first-order queries over a DL-LiteR KB, built from Q by instantiating all metavariables in all possible ways. Even though it has polynomial-time complexity with respect to the size of the KB, this algorithm turns out to be inefficient in practice. In this paper we present a new algorithm, called Smart Binding Planner (SBP), that compiles Q into a program that issues a sequence of first-order conjunctive queries, where each query has the goal of providing the bindings for the metavariables of the next ones, and the last one completes the process by computing the answers to Q. We also illustrate some experiments showing that, in practice, SBP is significantly more efficient than the previous approach.

10:45-12:45 Session 66AR: Contributed talks to NLCS (NLCS)
Location: FH, CAD 2
10:45
How to do things with types
SPEAKER: Robin Cooper

ABSTRACT. This paper attempts to explore what kind of general theory of action a notion of linguistic acts could be embedded in, and tries to develop the beginnings of such a theory using type theory as the starting point. We will here develop only a non-linguistic example, although we will point out relationships to linguistic acts during the course of the paper.

11:15
Mereological Comparatives in Delineation Semantics

ABSTRACT. This paper presents a new logical analysis of nominal comparatives (i.e. sentences as in (1)) within the Delineation Semantics (DelS) approach to gradability and comparison (Klein, 1980, among many others).

(1) More linguists came to the party than stayed home to study.

Along with the Degree Semantics (DegS) framework (see Kennedy, 1997, for a summary), DelS is one of the dominant logical frameworks for analyzing the meaning of gradable constituents of the adjectival syntactic category; however, there has been very little work done investigating the application of this framework to the analysis of gradability outside the adjectival domain. This state of affairs distinguishes the DelS framework from its DegS counterpart, where such questions have been investigated in great detail since the beginning of the 21st century. Nevertheless, it has been observed (for example, by Doetjes et al. (2011)) that there is nothing inherently adjectival about the way that the interpretations of scalar predicates are calculated in DelS, and therefore that there is enormous potential for this approach to shed light on the nature of gradability and comparison in the nominal and verbal domains. This paper gives the first attempt at realizing this potential within a Mereological extension of a (simplified) version of Klein (1980)’s system.

11:45
Proof-Theoretic Semantics for intensional transitive verbs

ABSTRACT. The paper presents a proof-theoretic semantics (PTS) for intensional transitive verbs (ITVs), claimed to be advantageous compared to the model-theoretic semantics (MTS) for them. The PTS evades the controversies over the truth-conditions for sentences headed by ITVs, and is relieved from the complicated ontological commitments originating from the various inhabitants populating models (over which there is no consensus). The following three characteristics of ITVs are handled: (1) admittance of non-specific objects; (2) resistance to substitutability of coextensives; (3) suspension of existential commitment. The paper also considers in detail the monotonicity involved with ITVs. Finally, there is a discussion of a proof-theoretic handling of passivisation of ITVs. The paper extends work on PTS for an extensional fragment, reviewed in an appendix.

12:15
Grice, Hoare and Nash: some comments on conversational implicature
SPEAKER: Rohit Parikh

ABSTRACT. Much of the current work in pragmatics goes back to Paul Grice who introduced the notion of implicature, a part of the meaning of an utterance which goes beyond the semantic value of the actual sentence uttered.  An implicature may depend on the particular circumstances prevailing when a statement is made, or it may be more general and community wide.

We consider the special case of a particularized conversational implicature which arises when one agent, A, has a goal G (typically assumed to be common knowledge), and makes a statement relevant to her situation. The other agent B then says something, and an implicature arises from B's utterance which can only be understood in terms of B helping A to achieve the goal. It turns out that Hoare semantics, developed originally for proving program correctness, is relevant here to examine the appropriateness of the action which A might take to achieve her goal. 
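
A schematic rendering of that point (our illustration, not from the talk), using Grice's well-known garage exchange -- A: "I am out of petrol," B: "There is a garage round the corner" -- as a Hoare triple:

$$\{\textit{the garage is open and sells petrol}\}\;\;\textit{A goes to the garage}\;\;\{\textit{A has petrol}\}$$

The implicature of B's utterance is precisely the precondition needed to make the triple valid, i.e., to make A's action appropriate for her goal.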

A second point is Grice's principle of cooperation.  Grice assumed that A and B above are cooperating and his treatment of implicature needs an assumption of cooperation to work out.   But as we are all aware, not all conversations are fully cooperative. For instance, an American tourist and a street hawker in India may negotiate the price of a carved elephant which the latter is offering to the former. The cooperation arises from the fact that both want the sale to take place, but there is also an opposition as the tourist wants to pay less and the hawker wants to charge more. What will each say?

The well-known come-on uttered by a young man to his date, "Would you like to come to my apartment and see my etchings?", provides another such case where the man and his date may have different priorities which they need to negotiate.

We suggest that a principle formulated by John Nash in his "The Bargaining Problem" provides the proper generalization for the mix of cooperation and competition which prevails in such dialogues.

10:45-12:45 Session 66AS: Invited Talk (Miller) and Tutorial (Marra) (LATD)
Location: MB, Festsaal
10:45
Combining Intuitionistic and Classical Logic: a proof system and semantics
SPEAKER: Dale Miller

ABSTRACT. While Gentzen's sequent calculus provides a framework for developing the proof theory of both classical and intuitionistic logic, it did not provide us with one logic that combines them. There are, of course, a number of ways to relate classical and intuitionistic logic: for example, intuitionistic logic can be translated into classical logic with the addition of the S4 modality, and classical logic can be embedded into intuitionistic logic using negation translations. Here we consider the problem of building proof systems and semantics for a logic in which classical and intuitionistic connectives mix freely. Our solution, Polarized Intuitionistic Logic, employs a polarization (red/green) of formulas and two entailment judgments. We give a Kripke semantics and a sequent calculus for this logic for which soundness and completeness hold. The sequent calculus proof system mixes elements of Gentzen's LJ proof system and Girard's LC proof system. This talk is based on joint work with Chuck Liang.

11:45
Tutorial 2/2: The more, the less, and the much more: An introduction to Lukasiewicz logic as a logic of vague propositions, and to its applications

ABSTRACT. In the second talk I show through examples how the availability of such an intended semantics, far from being an ornamental addition to the literature, is in fact a sine qua non to deploy Lukasiewicz logic in applications of genuine importance. In closing, if time allows, I revisit Hájek's Programme in many-valued logic in light of our discussion of Lukasiewicz logic.

10:45-13:00 Session 66AT: Program Analysis and Sets (SMT)
Location: FH, Seminarraum 104
10:45
Speeding Up SMT-Based Quantitative Program Analysis

ABSTRACT. Quantitative program analysis involves computing numerical quantities about individual or collections of program executions. An example of such a computation is quantitative information flow analysis, where one estimates the amount of information leaked about secret data through a program's output channels. Such information can be quantified in several ways, including channel capacity and (Shannon) entropy. In this paper, we formalize a class of quantitative analysis problems defined over a weighted control flow graph of a program. These problems can be solved using a combination of path enumeration, SMT solving, and model counting. However, existing methods can only handle very small programs, primarily because the number of execution paths can be exponential in the program size. We show how path explosion can be mitigated in some practical cases by taking advantage of special branching structure and by novel algorithm design. We demonstrate our techniques by computing the channel capacities of the timing side-channels of two programs with extremely large numbers of paths.
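
As a toy illustration of the underlying counting step (our own sketch with invented variable names, not the paper's algorithm, which is designed to mitigate exactly this kind of enumeration): the channel capacity of a leak is log2 of the number of distinct feasible outputs, which an SMT solver can count with blocking clauses.

```python
# Hypothetical example: the "program" leaks the two low bits of a secret.
from math import log2
from z3 import BitVec, Solver, sat

secret = BitVec('secret', 8)
output = BitVec('output', 8)

s = Solver()
s.add(output == secret & 3)   # observable output of the toy program

count = 0
while s.check() == sat:
    count += 1
    s.add(output != s.model()[output])   # block this output value

print(count, "distinct outputs; capacity =", log2(count), "bits")  # 4; 2.0
```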

11:15
Multi-solver Support in Symbolic Execution
SPEAKER: unknown

ABSTRACT. In this talk, we will present the results reported in our CAV 2013 paper on integrating support for multiple SMT solvers in the dynamic symbolic execution engine KLEE. In particular, we will outline the key characteristics of the SMT queries generated during symbolic execution, introduce an extension of KLEE that uses a number of state-of-the-art SMT solvers (Z3, Boolector and STP) through the metaSMT solver framework, and compare the solvers' performance when run on large sets of QF_ABV queries obtained during the symbolic execution of real-world software. In addition, we will discuss several options for designing a parallel portfolio solver for symbolic execution tools.

11:45
Protocol log analysis with constraint programming
SPEAKER: unknown

ABSTRACT. Testing a telecommunication protocol often requires protocol log analysis. A protocol log is a sequence of messages with timestamps. Protocol log analysis involves checking that the content of messages and timestamps are correct with respect to the protocol specification. We model the protocol specification using constraint programming (MiniZinc), and we present an approach where a constraint solver is used to perform protocol log analysis. Our case study is the Public Warning System service, which is a part of the Long Term Evolution (LTE) 4G standard.
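
A minimal sketch of the idea (ours, written with Z3 rather than MiniZinc, and with invented message names): the log supplies concrete timestamps, the specification contributes ordering and deadline constraints, and the log conforms iff the conjunction is satisfiable.

```python
from z3 import Ints, Solver, sat

log = [('Request', 100), ('Ack', 112), ('Data', 130)]
t = Ints(' '.join(f't{i}' for i in range(len(log))))

s = Solver()
for ti, (_, stamp) in zip(t, log):
    s.add(ti == stamp)                  # timestamps observed in the log
for earlier, later in zip(t, t[1:]):
    s.add(earlier < later)              # spec: strictly increasing time
s.add(t[1] - t[0] <= 20)                # spec: Ack within 20 ms of Request
print("log conforms" if s.check() == sat else "log violates the spec")
```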

12:15
Reasoning About Set Comprehensions

ABSTRACT. Set comprehension is a mathematical notation for defining sets on the basis of a property of their members. Although set comprehension is widely used in mathematics and some programming languages, direct support for reasoning about it is still not readily available in state-of-the-art SMT solvers. This paper presents a technique for translating formulas which express constraints involving set comprehensions into first-order formulas that can be verified by off-the-shelf SMT solvers. More specifically, we have developed a lightweight Python library that extends the popular Z3 SMT solver with the ability to reason about the satisfiability of set comprehension patterns. This technique is general and can be deployed in a broad range of SMT solvers.
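
A minimal sketch of the general translation idea (our own toy version, not the paper's library): the comprehension S = { x | 0 < x < 10 } becomes a universally quantified membership axiom over an uninterpreted predicate, which plain Z3 can then reason about.

```python
from z3 import And, BoolSort, ForAll, Function, Int, IntSort, Solver, sat

x, y = Int('x'), Int('y')
in_S = Function('in_S', IntSort(), BoolSort())

s = Solver()
# Membership axiom: forall x. in_S(x) <-> (0 < x and x < 10)
s.add(ForAll([x], in_S(x) == And(0 < x, x < 10)))
# Query: does S contain an element greater than 8?
s.add(in_S(y), y > 8)
if s.check() == sat:
    print("witness:", s.model()[y])   # 9
```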

10:45-12:15 Session 66AU: Contributed Talks: Completeness and Computation (PSC)
Location: FH, CAD 1
10:45
Constructive completeness and exploding models
SPEAKER: unknown

ABSTRACT. We present a unifying categorical approach, based on Joyal’s completeness theorem, that proves constructive completeness theorems for classical and intuitionistic first-order logic, as well as for the coherent fragment. We show that the notion of exploding structure introduced by Veldman in 1976 is adequate for certain fragments of first-order logic, and that Veldman’s modified Kripke semantics arises, as a consequence, as the semantics of a suitable sheaf topos. The proof, like Veldman’s, makes essential use of the Fan theorem; indeed, we show that these completeness theorems are in fact equivalent to it. This raises the question of how far one can get in proving completeness constructively but without using the Fan theorem. We prove that the disjunction-free fragment is constructively complete without appealing to it, and also without placing restrictions on the decidability of the theories and the size of the language, while the latter is a fundamental constraint in the general case when working in intuitionistic set theory. Along the way, we show that the completeness of Kripke semantics for the disjunction-free fragment is equivalent, over intuitionistic Zermelo-Fraenkel set theory, to the Law of Excluded Middle.

11:30
Computing with Gödel’s completeness theorem
SPEAKER: unknown

ABSTRACT. We give a short computational proof derived from Henkin's proof of Gödel’s completeness theorem.

10:45-13:00 Session 66AV: Invited Talk by Orna Grumberg and Regular Talks (iPRA)
Location: FH, Seminarraum 101A
10:45
SAT-Based Model Checking with Interpolation
SPEAKER: Orna Grumberg

ABSTRACT. This talk is a tutorial on interpolation in verification.

Model checking is an automatic approach to formally verify that a given system satisfies a given specification. SAT-based model checking is most efficient at finding bugs in large systems. When combined with interpolation, it can also provide full verification. We describe the use of interpolation in SAT-based model checking and survey several approaches to enhancing its power. We will assume no previous knowledge of model checking or interpolation; all the needed background will be given.

12:00
Interpolants in Two-Player Games
SPEAKER: unknown

ABSTRACT. We present a new application of interpolants in the context of two-player games. Two-player games are a useful formalism for the synthesis of reactive systems, with applications in device driver development, hardware design, industrial automation, etc. In particular, we consider reachability games over a finite state space, where the first player (the controller) must force the game into a given goal region given any valid behaviour of the second player (the environment). In this work, we focus on the strategy extraction problem in the context of a new counterexample-guided SAT-based algorithm for solving reachability games, recently proposed by Narodytska et al. We demonstrate how interpolants can be used to find a winning strategy.

12:30
Fault Localization using Interpolation
SPEAKER: unknown

ABSTRACT. An integral part of all debugging activities is the task of fault localization. Most existing fault localization techniques rely on the availability of high-quality test suites, because they work by comparing failing and passing runs to identify the cause of an error. This limits their applicability. One alternative is techniques that statically analyze an error trace of the program without relying on additional passing runs to compare against. Particularly promising are novel proof-based approaches that leverage the advances in automated theorem proving. These approaches encode error paths into error trace formulas. An error trace formula is an unsatisfiable logical formula. A refutation proof of this formula captures the reason why the execution of the error trace fails. By applying an automated theorem prover to obtain such proofs, the relevant portions of the trace can be automatically identified without relying on the availability of test suites. For example, in our previous work, we have used Craig interpolation to extract formulas from the obtained refutation proof. These so-called error invariants provide an explanation of the error for a given position in the trace and can be used for more effective static slicing.

In particular, we encode the error trace into a logical formula whose satisfying assignments exactly correspond to the states that are observed during the concrete execution of the trace. The encoding uses standard symbolic execution techniques. Namely, we first convert the trace into static single-assignment form, then translate each individual statement into a logical constraint, and finally conjoin all the constraints into a single formula. We call the resulting conjunction the error trace formula of the trace. By conjoining the error trace formula with the property that is violated at the end of the trace, we obtain a contradictory (i.e., unsatisfiable) formula. We then use an interpolating SMT solver to generate an error invariant for each position in the trace from a refutation proof of the unsatisfiable formula. The computed error invariants have two important properties: (1) they are satisfied by the concrete state that is observed at the respective position in the trace; and (2) any execution of the remainder of the trace, starting from that position with a state that satisfies the error invariant, will still violate the property at the end of the trace.

The generated error invariants are then propagated along the trace to identify maximal subtraces such that the same error invariant holds at both the beginning and the end of the subtrace. From the two properties of error invariants, it follows that the statements between these two positions are irrelevant for understanding the bug and, hence, can be sliced from the trace. We proceed like this for the entire error trace to produce the abstract slice.
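
An illustrative sketch of the encoding (ours, on an invented three-statement trace): the SSA constraints conjoined with the violated assertion yield an unsatisfiable error trace formula; an interpolating prover would then extract error invariants (here, for instance, y0 >= 6 after the second statement) from the refutation proof.

```python
from z3 import Int, Solver, unsat

x0, y0, x1 = Int('x0'), Int('y0'), Int('x1')

s = Solver()
# Trace:  x := 5;  y := x + 1;  x := 2 * y;  assert(x < 10)
s.add(x0 == 5)         # x := 5
s.add(y0 == x0 + 1)    # y := x + 1
s.add(x1 == 2 * y0)    # x := 2 * y   (fresh SSA version of x)
s.add(x1 < 10)         # the assertion violated at the end of the trace
assert s.check() == unsat
```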

While proof-based fault localization techniques are a promising alternative to dynamic techniques, they have practical limitations that have not been addressed in previous work. Precisely capturing the behavior of an erroneous program requires modeling such features as dynamic memory allocation, function calls, inputs, and loops, over traces that can extend to large numbers of instructions. Although mathematical techniques exist to handle these complexities, their implementation in current off-the-shelf theorem provers is often far from robust. For example, heap accesses are traditionally handled using array theories. Unfortunately, most provers lack interpolation procedures for the theory of arrays, making it difficult to automatically infer invariants about heap allocated objects.

To address these shortcomings of purely static approaches, we propose a novel fault localization technique that combines dynamic and static analysis of the error trace. In particular, we note that given an executable error trace, we can determine the concrete value of every variable, and every memory address, using a standard debugger. This information can then be used to dramatically simplify the encoding of the error trace for static analysis. Instead of relying on complex mathematical theories to reason about pointer aliasing, we can directly determine which memory location a pointer references using the debugger. If a trace contains too many statements to analyze using a theorem prover, we can split the trace into subtraces, and use the program states observed during the concrete execution to seed the analysis of the individual subtraces.

We have conducted a case study where we applied our new algorithm to error traces generated from faulty versions of UNIX utils such as gzip and sed. Our case study indicates that our concolic fault localization scales to real-world error traces and generates useful error diagnoses.

10:45-13:00 Session 66AX: Algorithms & Semantics (PLP)
Location: FH, Seminarraum 134
10:45
Prefix and infix probability computation in PRISM
SPEAKER: unknown

ABSTRACT. This paper presents recent progress concerning prefix and infix probability for PCFGs in the logic-based modeling language PRISM. A prefix is an initial substring of a sentence, and likewise an infix is a substring that occurs within a sentence. Prefix probability computation has already been introduced in PRISM, but applications are still scarce. We describe a new application to web data that identifies visitors’ intentions, or goals in visiting a website, from observed sequences of their actions using prefix probability. We also discuss infix probability computation, which generalizes prefix probability computation. Unlike previous approaches, we compute it through parsing followed by solving a set of non-linear equations.

11:15
Compiling Probabilistic Logic Programs into Sentential Decision Diagrams

ABSTRACT. Knowledge compilation algorithms transform a probabilistic logic program into a circuit representation that permits efficient probability computation. Knowledge compilation underlies algorithms for exact probabilistic inference and parameter learning in several languages, including ProbLog, PRISM, and LPADs. Developing such algorithms involves choosing which circuit language to target and which compilation algorithm to use. Historically, Binary Decision Diagrams (BDDs) have been a popular target language, whereas recently, deterministic-Decomposable Negation Normal Form (d-DNNF) circuits were shown to outperform BDDs on these tasks. We investigate the use of a new language, called Sentential Decision Diagrams (SDDs), for inference in probabilistic logic programs. SDDs combine desirable properties of BDDs and d-DNNFs. Like BDDs, they support bottom-up compilation and circuit minimization, yet they are a more general and flexible representation. Our preliminary experiments show that compilation to SDDs yields smaller circuits and more scalable inference, outperforming the state of the art in ProbLog inference.
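
A toy illustration (ours, on a hand-built circuit) of why the target circuit language matters: on circuits whose conjunctions are decomposable and whose disjunctions are deterministic -- properties that SDDs share with d-DNNFs -- computing a probability is a single bottom-up pass.

```python
def wmc(node, weight):
    """Weighted model count over a tiny circuit of nested tuples."""
    kind, *children = node
    if kind == 'lit':
        return weight[children[0]]
    vals = [wmc(child, weight) for child in children]
    if kind == 'and':   # decomposable: children share no variables
        prod = 1.0
        for v in vals:
            prod *= v
        return prod
    return sum(vals)    # 'or', deterministic: children mutually exclusive

# circuit for (x AND y) OR (NOT x AND y), i.e. just y
circuit = ('or', ('and', ('lit', 'x'), ('lit', 'y')),
                 ('and', ('lit', '-x'), ('lit', 'y')))
weight = {'x': 0.3, '-x': 0.7, 'y': 0.5}
print(wmc(circuit, weight))   # 0.5
```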

11:45
cProbLog: Restricting the Possible Worlds of Probabilistic Logic Programs

ABSTRACT. A program in the Probabilistic Logic Programming language ProbLog defines a distribution over possible worlds. Adding evidence (a set of ground probabilistic atoms with observed truth values) rules out some of the possible worlds. Generalizing the evidence atoms to First Order Logic constraints increases the expressive power of ProbLog. In this paper we introduce the first implementation of cProbLog – the extension of ProbLog with constraints. Our implementation transforms cProbLog programs with FOL constraints into ProbLog programs with evidence that specify the same possible worlds. We back up our design and implementation decisions with a series of examples.

12:15
Approximated Probabilistic Answer Set Programming

ABSTRACT. This paper is a work in progress on modifying Probabilistic Answer Set Programming (PASP), a technique for modeling complex theories and checking their satisfiability with respect to a set of probabilistic data, so as to obtain approximate solutions.

10:45-13:00 Session 66AY (VEMDP)
Location: FH, Seminarraum 136
10:45
Molecular computers for molecular robots as hybrid systems
SPEAKER: Masami Hagiya
11:40
DNA-based circuits in well-mixed and spatially organized systems
SPEAKER: Lulu Qian
12:10
Verifying Polymer Reaction Networks using Bisimulation

ABSTRACT. The Chemical Reaction Network model has been proposed as a programming language for molecular programming. Methods to implement arbitrary CRNs using DNA strand displacement circuits have been proposed, as have methods to prove the correctness of those or other implementations. However, the stochastic Chemical Reaction Network model is provably not deterministically Turing-universal, that is, it is impossible to create a stochastic CRN where a given output molecule is produced if and only if an arbitrary Turing machine accepts. A DNA stack machine that can simulate arbitrary Turing machines with no slowdown deterministically has been proposed, but it uses unbounded polymers that cannot be modeled as a Chemical Reaction Network. We propose an extended version of a Chemical Reaction Network that models unbounded linear polymers made from a finite number of monomers. This Polymer Reaction Network model covers the DNA stack machine, as well as copy-tolerant Turing machines and some examples from biochemistry. We adapt the bisimulation method of verifying DNA implementations of Chemical Reaction Networks to our model, and use it to prove the correctness of the DNA stack machine implementation. We define a subclass of single-locus polymer reaction networks and show that any member of that class can be bisimulated by a network using only four primitives, suggesting a method of DNA implementation. Finally, we prove that deciding whether an implementation is a bisimulation is Π2-complete, and thus undecidable in the general case. We hope that the ability to model and verify implementations of polymer reaction networks will aid in the rational design of molecular systems.

12:35
Modeling and Analysis of Genetic Boolean Gates Using the Infobiotics Workbench
SPEAKER: Savas Konur

ABSTRACT. In this paper, we illustrate the use of the Infobiotics Workbench platform, designed to model and analyse stochastic P systems, by modeling basic synthetic Boolean gates and analysing them using some computational techniques: simulation, verification and biocompilation.

10:45-13:00 Session 66AZ: Opening and Contributed talks (VPT)
Location: FH, Dissertantenraum E104
10:45
Verification of Multi-Party Ping-Pong Protocols via Program Transformation

ABSTRACT. The paper describes a verification technique, based on program transformation with unfolding, that makes it possible to find short attacks on multi-party ping-pong protocols in the Dolev-Yao intruder model. Protocols are modelled by prefix grammars, and questions of model optimization and complexity are considered.

11:30
Program Verification using Constraint Handling Rules and Array Constraint Generalizations
SPEAKER: unknown

ABSTRACT. The transformation of constraint logic programs (CLP programs) has been shown to be an effective methodology for verifying properties of imperative programs. Following this methodology, we encode the negation of a partial correctness property of an imperative program prog as a predicate incorrect defined by a CLP program P, and we show that prog is correct by transforming P into the empty program through the application of semantics-preserving transformation rules. Some of these rules perform replacements of constraints that encode properties of the data structures manipulated by the program prog. In this paper we show that Constraint Handling Rules (CHR) are a suitable formalism for representing and applying constraint replacements during the transformation of CLP programs. In particular, we consider programs that manipulate integer arrays, and we present a CHR encoding of a constraint replacement strategy based on the theory of arrays. We also propose a novel generalization strategy for constraints on integer arrays that combines the CHR constraint replacement strategy with various generalization operators for linear constraints, such as widening and convex hull. Generalization is controlled by additional constraints that relate the variable identifiers in the imperative program and the CLP representation of their values. The method presented in this paper has been implemented, and we have demonstrated its effectiveness on a set of benchmark programs taken from the literature.
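
For reference, constraint replacements in the theory of arrays are typically justified by the classical read-over-write axioms (our formulation, for illustration):

$$I = J \rightarrow \mathit{read}(\mathit{write}(A,I,V),J) = V \qquad\quad I \neq J \rightarrow \mathit{read}(\mathit{write}(A,I,V),J) = \mathit{read}(A,J)$$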

12:15
A Note on Program Specialization. What Syntactical Properties of Residual Programs Can Reveal?
SPEAKER: unknown

ABSTRACT. (Regular paper) The paper presents two examples of non-traditional use of program specialization by Turchin's supercompilation method. In both cases we are interested in syntactical properties of the residual programs produced by supercompilation. In the first example we apply supercompilation to a program encoding a word equation, and as a result we obtain a program representing a graph that describes the solution set of the word equation. The idea of the second example belongs to Alexandr V. Korlyukov. He considered an interpreter simulating the dynamics of the well-known missionaries-and-cannibals puzzle. Supercompilation of the interpreter allows us to solve the puzzle. The interpreter may also be seen as an encoding of a non-deterministic protocol.

10:45-12:15 Session 66BA: Contributed Talks: Declarative Programming 1 (NMR)
Location: EI, EI 9
10:45
An Approach to Forgetting in Disjunctive Logic Programs that Preserves Strong Equivalence
SPEAKER: unknown

ABSTRACT. In this paper we investigate forgetting in disjunctive logic programs, where forgetting an atom from a program amounts to a reduction in the signature of that program. The goal is to provide an approach that is syntax-independent, in that if two programs are strongly equivalent, then the results of forgetting an atom in each program should also be strongly equivalent. Our central definition of forgetting is impractical but satisfies this goal: Forgetting an atom is characterised by the set of SE consequences of the program that do not mention the atom to be forgotten. We then provide an equivalent, practical definition, wherein forgetting an atom $p$ is given by those rules in the program that don't mention $p$, together with rules obtained by a single inference step from rules that do mention $p$. Forgetting is shown to have appropriate properties; as well, the finite characterisation results in a modest (at worst quadratic) blowup. Finally, we have also obtained a prototype implementation of this approach to forgetting.
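
A rough executable sketch of our reading of that practical definition (hedged: the paper's exact inference step may differ, and default negation is omitted here). Rules are (head, body) pairs of atom sets, and the single inference step resolves the forgotten atom between one rule's head and another rule's body.

```python
def forget(program, p):
    """Keep p-free rules; add single-step resolvents on p (sketch only)."""
    result = [r for r in program if p not in r[0] and p not in r[1]]
    for h_head, h_body in (r for r in program if p in r[0]):
        for b_head, b_body in (r for r in program if p in r[1]):
            new_head = (h_head - {p}) | b_head
            new_body = h_body | (b_body - {p})
            result.append((frozenset(new_head), frozenset(new_body)))
    return result

prog = [(frozenset({'q', 'p'}), frozenset({'r'})),   # q ; p :- r
        (frozenset({'s'}), frozenset({'p'}))]        # s :- p
print(forget(prog, 'p'))   # one rule, amounting to:  q ; s :- r
```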

11:15
Three Semantics for Modular Systems

ABSTRACT. In this paper, we further develop the framework of Modular Systems, which lays model-theoretic foundations for combining different declarative languages, agents and solvers. We introduce a multi-language logic of modular systems. We define two novel semantics: a structural operational semantics and an inference-based semantics. We prove that the new semantics are equivalent to the original model-theoretic semantics and describe future research directions.

11:45
Generalising Modular Logic Programs
SPEAKER: unknown

ABSTRACT. Answer set programming is a prominent declarative rule-based programming paradigm. Even though modularity has been studied extensively in conventional logic programming, there are only a few approaches to incorporating modularity into answer set programming. A major approach is Oikarinnen and Janhunen’s Gaifman-Shapiro-style architecture of program modules, which provides the composition of program modules. Their module theorem properly strengthens Lifschitz and Turner’s splitting set theorem for normal logic programs. However, this approach is limited by module conditions that are imposed in order to ensure the compatibility of their module system with the answer set semantics, namely forcing output signatures of modules to be disjoint and disallowing positive cyclic dependencies between different modules. These conditions turn out to be too restrictive in practice. In this paper we discuss alternative ways of lifting both restrictions independently, widening the applicability of this framework and the scope of the module theorem.

10:50-12:10 Session 67: Propositional Satisfiability (RCRA)
Location: MB, Hörsaal 12
10:50
Optimal Neighborhood Preserving Visualization by Maximum Satisfiability
SPEAKER: Jeremias Berg

ABSTRACT. We present a novel approach to low-dimensional neighbor embedding for visualization, based on formulating an information-retrieval-based neighborhood preservation cost function as Maximum Satisfiability on a discretized output display. The method has a rigorous interpretation as optimal visualization based on the cost function. Unlike previous low-dimensional neighbor embedding methods, our formulation is guaranteed to yield globally optimal visualizations, and does so reasonably fast. Unlike previous manifold learning methods yielding global optima of their cost functions, our cost function and method are designed for low-dimensional visualization, where evaluation and minimization of visualization errors are crucial. Our method performs well in experiments, yielding clean embeddings of datasets where a state-of-the-art comparison method yields poor arrangements. In a real-world case study on semi-supervised WLAN signal mapping in buildings we outperform state-of-the-art methods.

This work will appear in the proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014).

11:20
Maximal Falsifiability: Definitions, Algorithms, and Applications

ABSTRACT. Similarly to Maximum Satisfiability (MaxSAT), Minimum Satisfiability (MinSAT) is an optimization extension of the Boolean Satisfiability (SAT) decision problem. In recent years, both problems have been studied in terms of exact and approximation algorithms. In addition, the MaxSAT problem has been characterized in terms of Maximal Satisfiable Subsets (MSSes) and Minimal Correction Subsets (MCSes), as well as Minimal Unsatisfiable Subsets (MUSes) and minimal hitting set dualization. However, and in contrast with MaxSAT, no such characterizations exist for MinSAT. This paper addresses this issue by casting the MinSAT problem in a more general framework. The paper studies Maximal Falsifiability, the problem of computing a subset-maximal set of clauses that can be simultaneously falsified, and shows that MinSAT corresponds to the complement of a largest subset-maximal set of simultaneously falsifiable clauses, i.e. the solution of the Maximum Falsifiability (MaxFalse) problem. Additional contributions of the paper include novel algorithms for Maximum and Maximal Falsifiability, as well as minimal hitting set dualization results for the MaxFalse problem. Moreover, the proposed algorithms are validated on practical instances.
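
An illustrative sketch (ours, not one of the paper's algorithms): Maximum Falsifiability can be handed to an off-the-shelf MaxSAT engine by soft-asserting the negation of every clause; the clauses left satisfied in the optimum then form a minimum satisfiable set, mirroring the MinSAT connection above.

```python
from z3 import Bools, Not, Optimize, Or, is_false, sat

a, b, c = Bools('a b c')
clauses = [Or(a, b), Or(Not(a), c), Not(b), Or(b, c)]

opt = Optimize()
for cl in clauses:
    opt.add_soft(Not(cl))   # one unit of reward per falsified clause
assert opt.check() == sat
m = opt.model()
falsified = [cl for cl in clauses
             if is_false(m.evaluate(cl, model_completion=True))]
print(f"{len(falsified)} of {len(clauses)} clauses falsified")
```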

11:50
Latin Squares with Graph Properties
SPEAKER: Hans Zantema

ABSTRACT. In a Latin square of size n, every number from 1 to n occurs exactly once in every row and every column. Two neighbors (straight or diagonal) in a Latin square are connected by an edge if their values differ by at most 1. We investigate Latin squares for which this underlying graph has typical graph properties, like being connected, being a tree, or being Hamiltonian. Searching for such Latin squares runs into combinatorial explosion. Nevertheless, by exploiting current SAT/SMT solving, we explicitly find instances for several variants, or prove that they do not exist.
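
A small sketch in the spirit of the abstract (encoding choices ours): the Latin square constraints and the "values differ by at most 1" edge predicate translate directly to SMT, after which graph properties can be layered on top.

```python
from z3 import And, Distinct, Int, Solver, sat

n = 4
cell = [[Int(f'c_{i}_{j}') for j in range(n)] for i in range(n)]

s = Solver()
for i in range(n):
    for j in range(n):
        s.add(1 <= cell[i][j], cell[i][j] <= n)
    s.add(Distinct(*cell[i]))                          # row constraint
    s.add(Distinct(*[cell[r][i] for r in range(n)]))   # column constraint

def edge(u, v):
    # straight/diagonal neighbors are connected iff values differ by <= 1
    return And(u - v <= 1, v - u <= 1)

s.add(edge(cell[0][0], cell[0][1]))   # sample property: one fixed edge
if s.check() == sat:
    m = s.model()
    print([[m[cell[i][j]].as_long() for j in range(n)] for i in range(n)])
```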

11:00-13:00 Session 68A: Tutorial and invited talk (LC)
Location: MB, Prechtlsaal
11:00
Tutorial on classical realizability and forcing 1
12:00
Cardinal invariants and template iterations
SPEAKER: Vera Fischer

ABSTRACT. The cardinal invariants of the continuum arise from combinatorial, topological and measure theoretic properties of the reals, and are often defined to be the minimum size of a family of reals satisfying a certain property.

An example of such an invariant is the minimum size of a subgroup of $S_\infty$, all of whose non-identity elements have only finitely many fixed points and which is maximal (with respect to this property) under inclusion. This cardinal invariant is denoted $\mathfrak{a}_g$. Another well-known invariant, denoted $\hbox{non}(\mathcal{M})$, is the minimum size of a set of reals which is not meager. It is a ZFC theorem that $\hbox{non}(\mathcal{M})\leq\mathfrak{a}_g$. A third invariant, denoted $\mathfrak{d}$, is the minimum size of a family $\mathcal{F}$ of functions in $^\omega\omega$ which has the property that every function in $^\omega\omega$ is eventually dominated by an element of $\mathcal{F}$. In contrast to the situation between $\mathfrak{a}_g$ and $\hbox{non}(\mathcal{M})$, ZFC cannot prove either of the inequalities $\mathfrak{a}_g\leq \mathfrak{d}$ or $\mathfrak{d}\leq\mathfrak{a}_g$. The classical forcing techniques seem, however, to be inadequate in addressing the consistency of $\mathfrak{d}<\mathfrak{a}_g$ which was obtained only after a ground-breaking work by Shelah and the appearance of his ``template iteration" forcing techniques.

We further develop these techniques to show that $\mathfrak{a}_g$, as well as some of its relatives, can be of countable cofinality. In addition we will discuss other recent developments of the technique and conclude with open questions and directions for further research.

11:00-12:00 Session 68B: Invited talk (PCCR)
Location: FH, Hörsaal 3
11:00
Fixed-Parameter Algorithms for Reasoning Problems in NP and Beyond

ABSTRACT. Many computational problems that arise in the context of AI and reasoning are NP-hard or even hard for the second level of the Polynomial Hierarchy. Practical instances of these problems often contain some "hidden structure" that can be utilized algorithmically. There are various ways of capturing the hidden structure in terms of parameters which in turn can be used by fixed-parameter algorithms. In order to achieve fixed-parameter tractability for problems that are hard for the second level of the Polynomial Hierarchy, the parameter usually needs to be quite restrictive. For such problems, it therefore seems to be an even more compelling approach to utilize the hidden structure captured by the parameter not to solve the problem, but to reduce it to a different problem of lower complexity, say, to a problem in NP. The parameter can thus be less restrictive and so reasonably small for larger classes of inputs. Clearly such a reduction cannot be computed in polynomial time, unless the Polynomial Hierarchy collapses. However, the reduction can be fixed-parameter tractable and thus break complexity barriers between the levels of the Polynomial Hierarchy. SAT, the propositional satisfiability problem, is ideally suited as a target problem since by means of fixed-parameter tractable reductions to SAT one can make the spectacular solving power of today's SAT solvers applicable to problems at higher levels of the Polynomial Hierarchy. In fact, the extremely successful practical technique of Bounded Model Checking can, in retrospect, be seen as a fixed-parameter tractable reduction to SAT. Recently developed fixed-parameter tractable reductions to SAT break complexity barriers for several problems, including the main reasoning problems of disjunctive answer-set programming, problems arising in propositional abductive reasoning, and problems arising in agenda safety. There are already basic concepts and results for a hardness theory that can be used to provide evidence that certain parameterized problems do not admit fixed-parameter tractable reductions to SAT.
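
A concrete rendering of the Bounded Model Checking example (our sketch, on an invented toy transition system): unrolling a transition system k steps, with k as the parameter, produces a single satisfiability query whose size is linear in k.

```python
from z3 import Int, Or, Solver

k = 5
x = [Int(f'x{i}') for i in range(k + 1)]

s = Solver()
s.add(x[0] == 0)                        # initial state
for i in range(k):
    s.add(x[i + 1] == x[i] + 2)         # transition relation: x' = x + 2
s.add(Or([xi == 7 for xi in x]))        # is a bad state reachable in k steps?
print(s.check())                        # unsat: 7 is never reachable
```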

11:15-12:45 Session 69: Analysis (SAT)
Location: FH, Hörsaal 6
11:15
An Ising Model inspired Extension of the Product-based MP Framework for SAT

ABSTRACT. Message Passing (MP) has been presented in a unified and consistent notational frame in \cite{gableske}. The paper explained the product-based MP framework (PMPF) and various MP heuristics (BP, SP, and several interpolations). The paper concluded that an increased flexibility of MP heuristics (in the form of tuneable parameters) leads to a tuneable MP algorithm that can be helpful for solving a wide variety of different CNFs. Based on this work, the paper at hand makes three contributions. First, we extend the PMPF regarding flexibility based on theoretical insights from the Ising Model \cite{ising}. As an immediate result, this extended PMPF (ePMPF) provides new possibilities for parameter tuning. Second, the ePMPF also allows us to uncover various theoretical connections between well-known variable and value ordering heuristics for SAT (Zero-first, One-first, occurrence-based heuristics). We show that these heuristics can be understood as special cases of Message Passing. Third, we show that the ePMPF provides numerous possibilities to tightly integrate Message Passing with various aspects of the CDCL search paradigm.
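
For readers unfamiliar with product-based message passing, the following is a minimal sketch of plain belief propagation for SAT in the warning parametrization, showing the product structure that the PMPF generalizes. It is not the ePMPF of the paper; the function names and update details here are our illustrative assumptions.

    import random
    from collections import defaultdict

    def bp_sat(clauses, iters=100, tol=1e-6):
        """Plain product-based BP for SAT (warning parametrization).
        clauses: tuples of non-zero ints; v > 0 encodes x_v, v < 0
        encodes NOT x_v.  Returns per-variable biases P(x_v = True)."""
        occ = defaultdict(list)          # var -> [(clause index, sign)]
        for a, cl in enumerate(clauses):
            for lit in cl:
                occ[abs(lit)].append((a, lit > 0))
        # eta[(a, i)]: probability that clause a "warns" variable i
        eta = {(a, abs(l)): random.random()
               for a, cl in enumerate(clauses) for l in cl}
        for _ in range(iters):
            delta = 0.0
            for (a, i) in list(eta):
                prod = 1.0
                for lit in clauses[a]:
                    j = abs(lit)
                    if j == i:
                        continue
                    pi_u = pi_s = 1.0    # j unsatisfies / satisfies a
                    for b, sign in occ[j]:
                        if b == a:
                            continue
                        if sign == (lit > 0):   # same sign as in a
                            pi_u *= 1.0 - eta[(b, j)]
                        else:                   # opposite sign
                            pi_s *= 1.0 - eta[(b, j)]
                    z = pi_u + pi_s
                    prod *= pi_u / z if z > 0.0 else 0.5
                delta = max(delta, abs(prod - eta[(a, i)]))
                eta[(a, i)] = prod
            if delta < tol:
                break
        bias = {}
        for v, lst in occ.items():
            w_true = w_false = 1.0
            for b, sign in lst:
                if sign:             # a warning from b forces v = True
                    w_false *= 1.0 - eta[(b, v)]
                else:
                    w_true *= 1.0 - eta[(b, v)]
            z = w_true + w_false
            bias[v] = w_true / z if z > 0.0 else 0.5
        return bias

    # Example: (x1 or x2) and (not x1 or x2) biases x2 towards True.
    print(bp_sat([(1, 2), (-1, 2)]))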

11:45
Approximating highly satisfiable random 2-SAT
SPEAKER: unknown

ABSTRACT. In this paper we introduce two distributions for the Max-2-SAT problem similar to the uniform distribution of satisfiable CNFs and the planted distribution for the decision version of SAT. In both cases there is a parameter p, 0 <= p <= 1/4d, where d is the clause-to-variable ratio (density) of the formula, such that formulas chosen according to both distributions are p-satisfiable, that is, at least (3/4d+p)n clauses can be satisfied. In the planted distribution this happens for a fixed assignment, while for the p-satisfiable distribution formulas are chosen uniformly from the set of all p-satisfiable formulas.

Following Coja-Oghlan, Krivelevich, and Vilenchik (2007) we establish a connection between the probabilities of events under the two distributions. Then we consider the case when p is sufficiently large, $p=r\sqrt{d\log d}$ with $r>3/2$. We present an algorithm that, in the case of the planted distribution, for any e > 0 with high probability finds an assignment satisfying at least (3/4d+p-e)n clauses. For the p-satisfiable distribution, for every d there is e(d) (a polynomial in d of degree depending on r) such that the algorithm with high probability finds an assignment satisfying at least (3/4d+p-e(d))n clauses. It does not seem that this algorithm can be converted into an expected polynomial time algorithm finding a p-satisfying assignment.

We also use the connection between the planted and uniform p-satisfiable distributions to evaluate the number of clauses satisfiable in a random (not p-satisfiable) 2-CNF. We find the expectation of this number, but do not improve the existing concentration results.

12:15
Hypergraph Acyclicity and Propositional Model Counting

ABSTRACT. We show that the propositional model counting problem #SAT for CNF formulas with hypergraphs that admit a disjoint branches decomposition can be solved in polynomial time. We show that this class of hypergraphs is incomparable to hypergraphs of bounded incidence cliquewidth, which previously formed the largest class of hypergraphs for which #SAT was known to be solvable in polynomial time. Furthermore, we present a polynomial time algorithm that computes a disjoint branches decomposition of a given hypergraph if it exists and rejects otherwise. Finally, we show that some slight extensions of the class of hypergraphs with disjoint branches decompositions lead to intractable #SAT, leaving open how to generalize the counting result of this paper.

11:45-13:00 Session 70: Technical Session (LFMTP)
Location: FH, Zeichensaal 1
11:45
Internal Adequacy of Bookkeeping in Coq

ABSTRACT. We focus on a common problem encountered in encoding and formally reasoning about a wide range of formal systems, namely, the representation of a typing environment. In particular, we apply the bookkeeping technique to a well-known case study (i.e., System F<:’s type language), proving in Coq an internal correspondence with a more standard representation of the typing environment as a list of pairs.

In order to keep the signature readable and concise, we make use of higher-order abstract syntax (HOAS), which allows us to deal smoothly with the representation of the universal binder of System F<:.

12:10
A Generic Approach to Proofs about Substitution

ABSTRACT. It is well known that reasoning about substitution is a huge “distraction” that inevitably gets in the way of formalizing interesting properties of languages with variable bindings. Most formalizations have their own separate definitions of terms and substitution, along with proofs of their properties. However, there is a great deal of uniformity in the way substitution works and in the reasons why its properties hold. We expose this uniformity by defining terms, substitution and α-equality generically in Coq by parametrizing them over a Context Free Grammar annotated with Variable binding information (CFGV).

We also provide proofs of many properties about the above definitions (enough to formalize the PER semantics of Nuprl in Coq). Unlike many other tools which generate a custom definition of substitution for each input, all instantiations of our term model share the same substitution function. The proofs about this function have been accepted by Coq’s typechecker once and for all.

12:35
A Framework for Implementing Logical Frameworks
SPEAKER: Florian Rabe

ABSTRACT. MMT implements a prototypical declarative language in a way that systematically abstracts from theoretical and practical aspects of individual logics. It tries to develop as many solutions as possible generically, so that logic designers can focus on the essentials and inherit a large-scale implementation at low cost.

For example, advanced generic features include a module system, a user interface, and change management. Here we focus on MMT's generic type reconstruction component and show how it can be customized to obtain implementations of various logics. As examples, we consider LF as well as its extensions with shallow polymorphism and rewriting.

12:00-13:00 Session 71: Contributed talks (PCCR)
Location: FH, Hörsaal 3
12:00
A Parameterized Complexity Analysis of Generalized CP-Nets

ABSTRACT. Generalized CP-nets (GCP-nets) allow a succinct representation of preferences over multi-attribute domains. As a consequence of their succinct representation, many GCP-net related tasks are computationally hard. Even finding the more preferable of two outcomes is PSPACE-complete. In this work, we employ the framework of (parameterized) complexity theory to achieve two goals: First, we want to gain a deeper understanding of the complexity of GCP-nets. Second, we search for efficient fixed-parameter tractable algorithms.

12:30
Backdoors to Planning

ABSTRACT. Backdoors measure the distance to tractable fragments and have become an important tool to find fixed-parameter tractable (fpt) algorithms. Despite their success, backdoors have not been used for planning, a central problem in AI that has a high computational complexity. In this work, we introduce two notions of backdoors building upon the causal graph. We analyze the complexity of finding a small backdoor (detection) and using the backdoor to solve the problem (evaluation) in the light of planning with (un)bounded plan length/domain of the variables. For each setting we present either an fpt-result or rule out the existence thereof by showing parameterized intractability. In three cases we achieve the most desirable outcome: detection and evaluation are fpt.

12:10-13:00 Session 72: Answer Set Programming (RCRA)
Location: MB, Hörsaal 12
12:10
Efficient Computation of the Well-Founded Semantics over Big Data

ABSTRACT. Data originating from the Web, sensor readings and social media result in increasingly huge datasets. This so-called Big Data comes with new scientific and technological challenges while creating new opportunities, hence the increasing interest in academia and industry. Traditionally, logic programming has focused on complex knowledge structures/programs, so the question arises whether and how it can work in the face of Big Data. In this paper, we examine how the well-founded semantics can process huge amounts of data through mass parallelization. More specifically, we propose and evaluate a parallel approach using the MapReduce framework. Our experimental results indicate that our approach is scalable and that the well-founded semantics can be applied to billions of facts. To the best of our knowledge, this is the first work that addresses large-scale nonmonotonic reasoning without the restriction of stratification for predicates of arbitrary arity.
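
To make the semantics concrete, the well-founded model of a propositional normal program can be computed by Van Gelder's alternating fixpoint, sketched below as a sequential, in-memory toy version; the paper's actual contribution, the MapReduce-based parallelization, is not reflected here.

    def least_model(definite_rules):
        """Least model of a definite program: iterate T_P to fixpoint.
        definite_rules: list of (head, positive_body_set)."""
        model, changed = set(), True
        while changed:
            changed = False
            for head, body in definite_rules:
                if head not in model and body <= model:
                    model.add(head)
                    changed = True
        return model

    def gamma(rules, assumed):
        """Gelfond-Lifschitz operator: reduct w.r.t. `assumed`, then
        least model. rules: list of (head, pos_body_set, neg_body_set)."""
        reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & assumed)]
        return least_model(reduct)

    def well_founded(rules):
        """Alternating fixpoint: T = lfp(Gamma^2), F = atoms not in Gamma(T)."""
        atoms = {h for (h, _, _) in rules} | \
                {a for (_, p, n) in rules for a in p | n}
        t = set()
        while True:
            t2 = gamma(rules, gamma(rules, t))
            if t2 == t:
                break
            t = t2
        f = atoms - gamma(rules, t)
        return t, f, atoms - t - f

    # p :- not q.   q :- not p.   r :- not r.   s.
    rules = [('p', set(), {'q'}), ('q', set(), {'p'}),
             ('r', set(), {'r'}), ('s', set(), set())]
    print(well_founded(rules))   # ({'s'}, set(), {'p', 'q', 'r'})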

12:40
Declarative Specification of Benchmark Sessions via ASP
SPEAKER: unknown

ABSTRACT. As a matter of fact, benchmark sessions are easy to describe in a declarative fashion, but are usually painful to implement in practice. In this paper we describe DecBench, a suite of tools conceived for easing the task of configuring and running structured software benchmark analyses of command line executables. Benchmark sessions are specified in DecBench by means of Answer Set Programming (ASP), a declarative knowledge representation language having its roots in nonmonotonic reasoning and logic programming. The provided declarative specification is fed into an ASP solver, and the output is used to produce a Makefile that controls the execution of the specified benchmark analysis. The paper also reports a complete use case conceived to validate the presented tool.

12:15-13:00 Session 73: Contributed Talks: Systems 1 (NMR)
Location: EI, EI 9
12:15
The Multi-engine ASP Solver ME-ASP: Progress Report
SPEAKER: unknown

ABSTRACT. ME-ASP is a multi-engine solver for ground ASP programs. It exploits algorithm selection techniques based on classification to select one among a set of out-of-the-box heterogeneous ASP solvers used as black-box engines. In this paper we report on (i) a new optimized implementation of ME-ASP; and (ii) an attempt to apply algorithm selection to non-ground programs. An experimental analysis reported in the paper shows that (i) the new implementation of ME-ASP is substantially faster than the previous version; and (ii) the multi-engine recipe can be applied to the evaluation of non-ground programs with some benefits.
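
For readers unfamiliar with the recipe, per-instance algorithm selection of this kind can be pictured as a classification problem over instance features. The sketch below uses a 1-nearest-neighbour classifier; the actual features, classifier and engine set of ME-ASP are described in the paper, so everything named here is an illustrative assumption.

    import math

    def select_engine(train, features):
        """1-NN algorithm selection: pick the engine that was best on
        the most similar training instance.
        train: list of (feature_vector, best_engine_name) pairs."""
        _, engine = min(train,
                        key=lambda fe: math.dist(fe[0], features))
        return engine

    # Toy feature vectors: (num_atoms, num_rules, tightness_flag)
    train = [((1e3, 5e3, 1), 'clasp'),
             ((1e5, 2e6, 0), 'dlv'),
             ((5e4, 8e5, 0), 'smodels')]
    print(select_engine(train, (9e4, 1.5e6, 0)))   # -> 'dlv'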

12:37
Preliminary Report on WASP 2
SPEAKER: unknown

ABSTRACT. Answer Set Programming (ASP) is a declarative programming paradigm. The intrinsic complexity of the evaluation of ASP programs makes the development of more effective and faster systems a challenging research topic. This paper reports on the recent improvements of the ASP solver WASP. WASP is undergoing a refactoring process that will end in the release of a new and faster version of the software. In particular, the paper focuses on the improvements to the core evaluation algorithms working on normal programs. A preliminary experiment on benchmarks from the 3rd ASP competition belonging to the NP class is reported. The previous version of WASP was often not competitive with alternative solutions on this class; the new version shows a substantial increase in performance.

13:00-14:30 Lunch Break
14:00-16:00 Session 74 (REORDER)
Location: FH, Hörsaal 4
14:00
Reasoning about Fences in C11 Weak Memory Model
SPEAKER: Marko Doko

ABSTRACT. In this talk I will present a logic for reasoning about release/acquire fences as defined in the C11 memory model. The logic itself is an extension of relaxed separation logic.

14:30
Robustness against Power is PSpace-complete
SPEAKER: Roland Meyer

ABSTRACT. Power is a RISC architecture developed by IBM, Freescale, and several other companies and implemented in a series of POWER processors. The architecture features a relaxed memory model providing very weak guarantees with respect to the ordering and atomicity of memory accesses.

Due to these weaknesses, some programs that are correct under sequential consistency (SC) show undesirable effects when run under Power. We say that these programs are not robust against the Power memory model. Formally, a program is robust if every computation under Power has the same data and control dependencies as some SC computation.

Our contribution is a decision procedure for robustness of concurrent programs against the Power memory model. It is based on three ideas. First, we reformulate robustness in terms of the acyclicity of a happens-before relation. Second, we prove that among the computations with cyclic happens-before relation there is one in a certain normal form. Finally, we reduce the existence of such a normal-form computation to a language emptiness problem. Altogether, this yields a PSpace algorithm for checking robustness against Power. We complement it by a matching lower bound to show PSpace-completeness.

Joint work with Egor Derevenetc.

15:00
Simulating, exploring and formalising the OpenCL 2.0 memory model

ABSTRACT. I will report on ongoing work to investigate the relaxed memory model proposed by the recent OpenCL 2.0 standard. The OpenCL memory model draws heavily on the one used by C/C++, but also extends it in interesting and subtle ways, by adding such GPU-specific concepts as thread hierarchies, memory hierarchies, and synchronisation barriers. In my talk, I will describe a draft formalisation of the OpenCL memory model that my coauthors and I have written in the ".cat" format, and also how the Herd memory-model simulator has been extended to allow our formalisation to be tested for faithfulness.

15:30
Testing GPU Memory Models
SPEAKER: Daniel Poetzl

ABSTRACT. Graphics processing units (GPUs) are specialized highly concurrent microprocessors. Traditionally, these chips have been used for accelerating graphics rendering. However, since the advent of general purpose GPU programming frameworks such as Nvidia's CUDA, they have found their way into many applications, especially scientific and embedded computing.

To enable precise reasoning about GPU programs, a formal memory model is desirable. A necessary first step toward building a model is the thorough testing of the hardware. The model cannot be based on the vendor documentation alone, as it typically contains little detail and is at times ambiguous.

In this talk, I will describe our approach to testing GPU memory models, and will discuss some of our testing results.

14:30-16:00 Session 75AA: Models and Games (CSL-LICS)
Location: FH, Hörsaal 1
14:30
Decidability of Weak Logics with Deterministic Transitive Closure

ABSTRACT. The deterministic transitive closure operator makes it possible to express many natural properties of a binary relation, including being a linear order, a tree, a forest or a partial function. This makes it a potentially attractive ingredient of computer science formalisms. In this paper we consider the extension of the two-variable fragment of first-order logic by the deterministic transitive closure of a single binary relation, and prove that the satisfiability and finite satisfiability problems for the obtained logic are decidable and \ExpSpace-complete. This contrasts with the previously known undecidability of two-variable logic with the deterministic transitive closures of several binary relations. We also consider the class of universal first-order formulas in prenex form. Its various extensions by deterministic closure operations were earlier considered by other authors, leading to both decidability and undecidability results. We examine this scenario in more detail.

15:00
The Complexity of Admissibility in Omega-Regular Games

ABSTRACT. Iterated admissibility is a well-known and important concept in classical game theory, e.g. to determine behaviors in multi-player matrix games. As recently shown by Berwanger, this concept can be soundly extended to infinite games played on graphs with omega-regular objectives. In this paper, we study the algorithmic properties of this concept for such games. We settle the exact complexity of natural decision problems on the set of strategies that survive iterated elimination of dominated strategies. As a byproduct of our construction, we obtain automata which recognize all the possible outcomes of such strategies.

15:30
Pattern Logics and Auxiliary Relations
SPEAKER: Leonid Libkin

ABSTRACT. A common theme in the study of logics over finite structures is adding auxiliary predicates to enhance expressiveness and convey additional information. Examples include adding an order or arithmetic for capturing complexity classes, or the power of real-life declarative languages. A recent trend is to add a data-value comparison relation to words, trees, and graphs, for capturing modern data models such as XML and graph databases.

Such additions often result in the loss of good properties of the underlying logic. Our goal is to show that such a loss can be avoided if we use pattern-based logics, standard in XML and graph data querying. The essence of such logics is that auxiliary relations are tested locally with respect to other relations in the structure. These logics are shown to admit strong versions of Hanf and Gaifman locality theorems, which are used to prove a homomorphism preservation theorem, and a decidability result for the satisfiability problem. We discuss applications of these results to pattern logics over data forests, and consequently to querying XML data.

14:30-16:00 Session 75AB: Programming Language Semantics (CSL-LICS)
Location: FH, Hörsaal 5
14:30
KAT + B!
SPEAKER: unknown

ABSTRACT. Certain program transformations require a small amount of mutable state, a feature not explicitly provided by Kleene algebra with tests. We show how to axiomatically extend KAT with this feature in the form of mutable tests. The extension is conservative and is formulated as a general commutative coproduct construction. We give several results on deductive completeness and complexity of the system, as well as some examples of its use.

15:00
On the characterization of models of H*

ABSTRACT. We give a characterization, with respect to a large class of models of untyped lambda-calculus, of those models which are fully abstract for head normalization, i.e., whose equational theory is H*. An extensional K-model D is fully abstract if and only if it is hyperimmune, i.e., non-well-founded chains of elements of D cannot be captured by any recursive function.

15:30
Functional Reactive Types
SPEAKER: Alan Jeffrey

ABSTRACT. Functional Reactive Programming (FRP) is an approach to streaming data with a pure functional semantics as time-indexed values. In previous work, we showed that Linear-time Temporal Logic (LTL) can be used as a type system for discrete-time FRP, and that functional reactive primitives perform two roles: as combinators for building streams of data, and as proof rules for constructive LTL. In this paper, we add a third role, by showing that FRP combinators can be used to define streams of types, and that these functional reactive types can be viewed both as a constructive temporal logic, and as the types for functional reactive programs. As an application of functional reactive types, we show that past-time LTL (pLTL) can be extended with FRP to get a logic pLTL+FRP. This logic is expressed as streams of boolean expressions, and so bounded satisfiability of pLTL can be translated to Satisfiability Modulo Theory (SMT). Thus, pLTL+FRP can be used as a constraint language for problems which mix properties of data with temporal properties.

14:30-16:00 Session 75AC (ITP)
Location: FH, Hörsaal 8
14:30
A New and Formalized Proof of Abstract Completion
SPEAKER: Nao Hirokawa

ABSTRACT. Completion is one of the most studied techniques in term rewriting. We present a new proof of the correctness of abstract completion that is based on peak decreasingness, a special case of decreasing diagrams. Peak decreasingness replaces Newman's Lemma and allows us to avoid proof orders in the correctness proof of completion. As a result, our proof is simpler than the one presented in textbooks, which is confirmed by our Isabelle/HOL formalization. Furthermore, we show that critical pair criteria are easily incorporated in our setting.

15:00
Hypermap Specification and Certified Linked Implementation using Orbits

ABSTRACT. We propose a revised constructive specification and a certified entangled linked implementation of combinatorial hypermaps using a general notion of orbit. Combinatorial hypermaps help to prove theorems in algebraic topology and to develop algorithms in computational geometry. Orbits unify the presentation at the conceptual and concrete levels and reduce the proof effort. The entire development is formalized and verified in the Coq proof assistant. The implementation is easily proved observationally equivalent to the specification and translated into C. Our method is transferable to a large class of algebraic specifications implemented as complex data structures with entangled or nested linear, circular or symmetric linked lists, and pointer arrays.

15:30
Implicational Rewriting Tactics in HOL
SPEAKER: unknown

ABSTRACT. Reducing the distance between informal and formal proofs in interactive theorem proving is a long-standing matter. An approach to this general topic is to increase automation in theorem provers: indeed, automation turns many small formal steps into one big step. In spite of the usual automation methods, there are still many situations where the user has to provide some information manually, whereas this information could be derived from the context. In this paper, we characterize some very common use cases where such situations happen, and identify some general patterns behind them. We then provide solutions to deal with these situations automatically, which we implemented as HOL Light and HOL4 tactics. We find these tactics to be extremely useful in practice, not only for their automation but also for the feedback that they provide to the user.

14:30-16:00 Session 75AG: NSV Contributed Talks: Certified Numerical Computation (NSV)
Location: FH, Hörsaal 2
14:30
Certifying Square Root and Division Elimination
SPEAKER: Pierre Neron

ABSTRACT. This paper presents a program transformation that removes square roots and divisions from functional programs without recursion, producing code that can be computed exactly. The transformation accepts different subsets of languages as input, and it provides a certifying mechanism when the targeted language is PVS. In this case, we provide a relation between every function definition in the output code and its corresponding one in the input code, which specifies the behavior of the produced function with respect to the input one. The transformation has been implemented in OCaml and has been tested on different algorithms from the NASA ACCoRD project.
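
To illustrate the flavour of such a transformation (a generic sketch of the idea, not the paper's certified PVS-producing implementation): each square root or division is replaced by a fresh variable constrained algebraically, so that the residual program is root- and division-free.

    import itertools

    _fresh = itertools.count()

    def eliminate(expr):
        """Rewrite an expression tree so it contains no Sqrt or Div.
        Expressions: ('var', name) | ('const', c) | (op, left, right)
        with op in {'+', '-', '*', '/'} | ('sqrt', arg).
        Returns (pure_expr, defs): defs lists (fresh_var, constraint)
        pairs characterizing the fresh variables."""
        defs = []

        def go(e):
            tag = e[0]
            if tag in ('var', 'const'):
                return e
            if tag == 'sqrt':
                a = go(e[1])
                v = ('var', 'v%d' % next(_fresh))
                # v*v = a  and  v >= 0
                defs.append((v, ('and', ('=', ('*', v, v), a),
                                        ('>=', v, ('const', 0)))))
                return v
            if tag == '/':
                num, den = go(e[1]), go(e[2])
                v = ('var', 'v%d' % next(_fresh))
                # v*den = num  and  den /= 0
                defs.append((v, ('and', ('=', ('*', v, den), num),
                                        ('!=', den, ('const', 0)))))
                return v
            return (tag, go(e[1]), go(e[2]))

        return go(expr), defs

    # sqrt(x + 1) / y becomes v1 with constraints
    # v0*v0 = x+1, v0 >= 0, v1*y = v0, y /= 0.
    e = ('/', ('sqrt', ('+', ('var', 'x'), ('const', 1))), ('var', 'y'))
    print(eliminate(e))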

15:00
Computational complexity of iterated maps on points and sets

ABSTRACT. The computational complexity of the certified iteration of discrete dynamical systems is considered. Relations to Lyapunov exponents are shown.

15:30
Set-Based Decorated Intervals

ABSTRACT. The draft IEEE standard for interval arithmetic, P1788, has recently been developed and will be open for public review in summer 2014.

In the talk we introduce the newly developed concepts, such as set-based intervals, flavors, decorations, and reverse function evaluations.

14:30-15:30 Session 75AH: UITP Invited Talk 2 (Enrico Tassi: "Asynchronous Processing in Coq") (UITP)
Location: FH, Seminarraum 107
14:30
Asynchronous Processing of Formal Documents in Coq
SPEAKER: Enrico Tassi
14:30-16:00 Session 75AI: Ranking Functions and Complexity (WST)
Location: FH, Seminarraum 101B
14:30
To Infinity... and Beyond!

ABSTRACT. The traditional method for proving program termination consists in inferring a ranking function. In many cases (e.g., programs with unbounded non-determinism), a single ranking function over the natural numbers is not sufficient. Hence, we propose a new abstract domain to automatically infer ranking functions over ordinals. We extend an existing domain for piecewise-defined natural-valued ranking functions to polynomials in $\omega$, where the polynomial coefficients are natural-valued functions of the program variables. The abstract domain is parametric in the choice of the state partitioning inducing the piecewise definition and the type of functions used as polynomial coefficients. To our knowledge this is the first abstract domain able to reason about ordinals. Handling ordinals leads to a powerful approach for proving termination of imperative programs, which in particular allows us to take a first step in the direction of proving termination under fairness constraints and proving liveness properties of (sequential and) concurrent programs.
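
A standard illustration (our own, not taken from the paper) of why ordinals help: in the loop below, the inner counter is reset nondeterministically, so from a state such as $(x,y)=(2,0)$ the number of remaining iterations is unbounded and no ranking function into the naturals exists, yet the ordinal-valued function $f(x,y)=\omega\cdot x+y$ strictly decreases at every iteration.

    while x > 0 do
        if y > 0 then y := y - 1
        else x := x - 1; y := nondet()
    done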

15:00
Real-world loops are easy to predict: a case study
SPEAKER: Laure Gonnord

ABSTRACT. In this paper we study the relevance of fast and simple solutions to compute approximations of the number of iterations of loops (loop trip count) of imperative real-world programs. The context of this work is the use of these approximations in compiler optimizations: most of the time, the optimizations yield greater benefits for large trip counts, and are either innocuous or detrimental for small ones. In this particular work, we argue that, although predicting exactly the trip count of a loop is undecidable, most of the time there is no need to use computationally expensive state-of-the-art methods to compute (an approximation of) it. We support our position with an actual case study. We show that a fast predictor can be used to speed up the JavaScript JIT compiler of Firefox - one of the most well-engineered runtime environments in use today. We have accurately predicted over 85% of all the interval loops found in typical JavaScript benchmarks, and in millions of lines of C code. Furthermore, we have been able to speed up several JavaScript programs by over 5%, reaching a 24% improvement in one benchmark.
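
As a concrete illustration of what a fast trip-count predictor for interval loops can compute (our simplification, not the predictor implemented in Firefox): for a counted loop with an affine update, the trip count has a closed form in the bounds and the stride.

    def trip_count(a, b, s):
        """Predicted trip count of: for (i = a; i < b; i += s), s > 0.
        Closed form: ceil((b - a) / s), clamped at zero."""
        if s <= 0:
            raise ValueError("stride must be positive for this pattern")
        return max(0, -(-(b - a) // s))   # ceiling division

    assert trip_count(0, 10, 1) == 10
    assert trip_count(0, 10, 3) == 4     # i = 0, 3, 6, 9
    assert trip_count(5, 5, 2) == 0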

15:30
Type Introduction for Runtime Complexity Analysis
SPEAKER: unknown

ABSTRACT. In this note we show that the runtime complexity function of a sorted rewrite system R coincides with the runtime complexity function of the unsorted rewrite system obtained by forgetting the sort information. Hence our result states that sort introduction, a process that is easily carried out via unification, is sound for runtime complexity analysis. Our result thus provides the foundation for exploiting sort information in the analysis of TRSs.

14:30-15:20 Session 75AJ: Invited Talk (LFMTP)
Location: FH, Zeichensaal 1
14:30
A Framework for the Verified Transformation of Functional Programs

ABSTRACT. The compilation of functional programs relies on transformations that simplify their structure while ostensibly preserving their meanings. We argue that the combination of the lambda Prolog language and the Abella interactive theorem prover provides a natural framework for the verified implementation of such transformations. Underlying this argument is the fact that the transformations are syntax-directed and rule-based, with the important proviso that they pay attention to and also modify the binding structure of programs. The logic of higher-order hereditary Harrop formulas, the HoHH logic for short, is well-suited to formalizing such descriptions especially because of the support it provides for the higher-order representation of syntax. By virtue of the computational interpretation of the HoHH logic embodied in lambda Prolog, these formalizations become implementations of the corresponding transformations. The logic that underlies Abella embeds the HoHH logic and provides a complementary capability for reasoning flexibly and succinctly about the properties of specifications written in the HoHH logic. We will consider a typical functional program transformation and show how these twin capabilities can be exploited in its verified implementation; we will especially focus on demonstrating the benefits of a higher-order representation of syntax in both specification and reasoning. We will also discuss an extension to the logic underlying Abella for treating logical relations, a notion that is often needed in semantics preservation arguments.

The talk will draw on work done at various times with David Baelde, Alwen Tiu and Yuting Wang.

14:30-16:00 Session 75AK: Prolog Extensions (CICLOPS-WLPE)
Location: FH, Hörsaal 7
14:30
Extending the Finite Domain Solver of GNU Prolog

ABSTRACT. This paper describes three important extensions to the Finite Domain solver of GNU Prolog. First, the solver now supports negative integers. Second, the solver detects and prevents integer overflows from occurring. Third, the internal representation of sparse domains has been redesigned to overcome its previous limitations. The preliminary performance evaluation shows a limited slowdown factor with respect to the initial solver. This factor is largely counterbalanced by the new possibilities and the robustness of the solver. Furthermore, these results are preliminary, and we propose some directions to limit this overhead.

15:00
SWI-Prolog version 7 extensions

ABSTRACT. SWI-Prolog version 7 extends the Prolog language further as a general-purpose programming language that can be used as `glue' between components written in different languages. Taking this role, rather than that of a DSL \emph{inside} other IT components, has always been the design objective of SWI-Prolog, as illustrated by XPCE (its object-oriented communication to the OS and graphics), the HTTP server library and the many interfaces to external systems and file formats. In recent years, we started extending the language itself, notably to accommodate expressing syntactic constructions of other languages. This resulted in an extended notion of operators and in quasi quotations. SWI-Prolog version 7 takes this one step further by extending the primitive data types of Prolog. This article describes and motivates these extensions.

15:30
A Portable Prolog Predicate for Printing Rational Terms

ABSTRACT. Rational terms, or rational trees, are terms with one or more infinite subterms but with a finite representation. Rational terms appeared as a side effect of omitting the \emph{occurs check} in the unification of terms, but their support across Prolog systems varies and often fails to provide the expected functionality. A common problem is the lack of support for printing query bindings with rational terms. In this paper, we discuss the minimum requirements a Prolog system should meet to support printing rational terms and propose the integration of an ISO Prolog predicate that works in several existing Prolog systems in order to overcome the technical problem of printing rational terms. This predicate could easily be adapted to work both for top-level query printouts and for user-level printing.

14:30-15:30 Session 75AL: Invited Talk (HCVS and VPT)
Location: FH, Seminarraum 134A
14:30
Invited talk: Towards Constraint-Solving over Higher-Order Unbounded Datatypes using Formal Methods Tools
ABSTRACT. We argue that formal methods such as B can be used to conveniently express a wide range of constraint satisfaction problems. We also show that some problems can be solved quite effectively by existing formal methods tools such as Alloy or ProB. We illustrate our claim on several examples. Our approach is particularly interesting when a high assurance of correctness is required. Indeed, validation and double checking of solutions is available for certain formal methods tools, and formal proof can be applied to establish important properties or provide unambiguous semantics to problem specifications. The experiments also provide interesting insights about the effectiveness of existing formal methods tools, and highlight interesting avenues for future improvement.
14:30-16:00 Session 75AM: Invited Talk by Bruno Buchberger (PAS)
Location: FH, Seminarraum 138A
14:30
The Theorema Approach to Mathematics

ABSTRACT. In this talk I will give an overview on the main ideas of my "Theorema" approach to mathematics, which reflects my view on mathematics that evolved over my life as a mathematician ("from 17 to 71"):

- Mathematics is the "art of explanation", the art of reducing the complex to the simple. Hence, mathematics is of highest relevance for the "wo-man on the street".

- Mathematics is in the center of the "innovation spiral" that starts from basic research on the nature of nature and goes all the way via technology to economy and welfare. Hence, mathematics is of highest relevance to our society.

- Mathematics is the meta-theory of itself. It can explain how we derive new mathematical knowledge and methods from available knowledge and methods. This is possible because mathematics is essentially formal. The application of mathematics to itself, i.e. to the mathematical exploration process, was hardly noticeable until the middle of the 20th century. However, now we can observe what "application of mathematics to itself" means practically. This will be of highest relevance to the way we will do mathematics (including software) in the future, i.e. the (semi-)automation of the mathematical exploration process will be of highest relevance to the "working mathematicians".

I think that predicate logic (in some nicely readable two-dimensional - even user-definable - syntax) is the natural frame for providing a version of mathematics that pleases the wo-man from the street, society, and the working mathematicians. And all the effort and attention must go to automating natural reasoning in predicate logic for pleasing everybody with more and more attractive mathematics.

This is the reason why, in the mid-1990s, I launched the Theorema Project.

In the talk, I will give an overview on the current state of the project, which - due to my many services for "society" that took lots of my time - is not yet at a stage which is satisfactory. However, I hope I will be able to explain a few highlights and to draw a picture from which the avenues to the future can be guessed.

Of course, Theorema is just one in a group of research projects that pursue the automation of mathematics. I am very optimistic, nervous, and excited to enjoy the dawn of a new era of mathematics that will put mathematics right into the center of modern society in a tangible, natural, artistic and surprising way.

15:00
A note on real quantifier elimination by virtual term substitution of unbounded degree

ABSTRACT. We describe work in progress on constructing a complete implementation of Weispfenning's virtual term substitution method for real closed field formulas of unbounded degree. We build upon a recent high-performance library for computing over non-Archimedean real closed fields using Thom's Lemma.

(Extended abstract)

14:30-16:00 Session 75AN: Verification Frameworks and Applications (VSTTE)
Location: FH, Zeichensaal 3
14:30
Efficient Refinement Checking in VCC

ABSTRACT. We propose a methodology for carrying out refinement proofs across declarative abstract models and concrete implementations in C, using the VCC verification tool. The main idea is to first do a systematic translation from the top-level abstract model to a ghost implementation in VCC. Subsequent refinement proofs between successively refined abstract models and between abstract and C implementations are carried out in VCC. We propose an efficient technique to carry out these refinement checks in VCC. We illustrate our methodology with a case study in which we verify a simplified C implementation of an RTOS scheduler, with respect to its abstract Z specification. Overall, our methodology leads to efficient and automatic refinement proofs for complex systems that would typically be beyond the capability of tools like Z-Eves or Rodin.

14:55
Formalizing Semantics with an Automatic Program Verifier
SPEAKER: unknown

ABSTRACT. A common belief is that formalizing the semantics of programming languages requires the use of a \emph{proof assistant} providing (1) a specification language with advanced features such as higher-order logic, inductive definitions and type polymorphism, and (2) a corresponding proof environment where higher-order and inductive reasoning can be performed, typically with user interaction.

In this paper we show that such a formalization is nowadays possible inside a mostly-automatic program verification environment. We substantiate this claim by formalizing several semantics for a simple language, and proving their equivalence, inside the Why3 environment.

15:20
The KeY Platform for Verification and Analysis of Java Programs
SPEAKER: Daniel Bruns

ABSTRACT. The KeY system offers a platform of software analysis tools for sequential Java. Foremost, this includes full functional verification against contracts written in the Java Modeling Language. But the approach is general enough to provide a basis for other methods and purposes:

  1. complementary validation techniques to formal verification such as testing and debugging,
  2. methods that reduce the complexity of verification such as modularization and abstract interpretation,
  3. analyses of non-functional properties such as information flow security, and
  4. sound program transformation and code generation.

We show that deductive technology that has been developed for full functional verification can be used as a basis and framework for other purposes than pure functional verification. We use the current release of the KeY system as an example to explain and prove this claim.

15:45
Using Promela in a Fully Verified Executable LTL Model Checker (SHORT)
SPEAKER: René Neumann

ABSTRACT. At CAV'13 we presented an LTL model checker whose code has been completely verified using the Isabelle theorem prover. The intended use of the checker is to provide a trusted reference implementation against which more advanced checkers can be tested. However, for CAV'13 the checker still used an ad-hoc, primitive input language.

In this paper we report on CAVA, a new version of the checker accepting inputs written in Promela. We report on our formalization of the Promela semantics within Isabelle, which is used both to define the semantics and to automatically generate code for the computation of the state space. We report on experiments on standard Promela benchmarks comparing our tool to SPIN.

14:30-16:00 Session 75AP: Contributed Talks and Poster Announcements (DL)
Location: EI, EI 7
14:30
Is Your Ontology as Hard as You Think? Rewriting Ontologies into Simpler DLs
SPEAKER: unknown

ABSTRACT. We investigate cases where an ontology expressed in a seemingly hard DL can be polynomially reduced to one in a simpler logic, while preserving classification and fact entailment reasoning outcomes. Our transformations target the elimination of inverse roles, universal and existential restrictions, and in the best case allow us to rewrite the given ontology into one of the OWL 2 profiles. Even if an ontology cannot be fully rewritten into a profile, in many cases our transformations allow us to exploit further optimisation techniques. Moreover, the elimination of some out-of-profile axioms can improve the performance of modular reasoners, such as MORe. We have tested our techniques on both classification and data reasoning tasks with encouraging results.

14:55
Abstraction Refinement for Ontology Materialization

ABSTRACT. We present a new procedure for ontology materialization (computing all entailed instances of every atomic concept) in which reasoning over a large ABox is reduced to reasoning over a smaller "abstract" ABox. The abstract ABox is obtained as the result of a fixed-point computation involving two stages: 1) abstraction: partition the individuals into equivalence classes based on told information and use one representative individual per equivalence class, and 2) refinement: iteratively split (refine) the equivalence classes, when new assertions are derived that distinguish individuals within the same class. We prove that the method is complete for Horn ALCHOI ontologies, that is, all entailed instances will be derived once the fixed-point is reached. We implement the procedure in a new database-backed reasoning system and evaluate it empirically on existing ontologies with large ABoxes. We demonstrate that the obtained abstract ABoxes are significantly smaller than the original ones and can be computed with few refinement steps.

15:20
Shape and Content: Incorporating Domain Knowledge into Shape Analysis
SPEAKER: Tomer Kotek

ABSTRACT. The verification community has studied dynamic data structures primarily in a bottom-up way by analyzing pointers and the shapes induced by them. Recent work in fields such as separation logic has made significant progress in extracting shapes from program source code. Many real world programs however manipulate complex data whose structure and content is most naturally described by formalisms from object oriented programming and databases. In this paper, we attempt to bridge the conceptual gap between these two communities. Our approach is based on description logic, a widely used knowledge representation paradigm which gives a logical underpinning for diverse modeling frameworks such as UML and ER. We show how description logic can be used on top of an existing shape analysis to add content descriptions to the shapes. Technically, we assume that we have separation logic shape invariants obtained from a shape analysis tool, and requirements on the program data in terms of description logic. We show that the two-variable fragment of first order logic with counting and trees (whose decidability was proved at LICS 2013) can be used as a joint framework to embed suitable fragments of description logic and separation logic.

15:45
An ABox Revision Algorithm for the Description Logic EL_\bot
SPEAKER: Uli Sattler

ABSTRACT. Revision of knowledge bases (KBs) expressed in description logics (DLs) has gained a lot of attention lately. Existing revision algorithms can be divided into two groups: model-based approaches (MBAs) and formula-based approaches (FBAs). MBAs are fine-grained and independent of the syntactical forms of KBs; however, they only work for some restricted forms of the DL-Lite family. FBAs can deal with more expressive DLs such as SHOIN, but they are syntax-dependent and not fine-grained. In this paper, we present a new method for instance-level revision of KBs. In our algorithm, a non-redundant depth-bounded model is first constructed for the KB to be revised; then a revision process based on justifications is carried out on this model by treating the model as a set of assertions; finally, the resulting model is mapped back to a KB, which is returned by the algorithm. Our algorithm is syntax-independent and fine-grained, and works for the DL EL_\bot.

15:48
Instance-driven TBox Revision in DL-Lite
SPEAKER: unknown

ABSTRACT. The development and maintenance of large and complex ontologies are often time-consuming and error-prone. Thus, automated ontology learning and revision have attracted intensive research interest. In data-centric applications where the ontology is designed based on, or automatically learnt from, the data, when new data instances are added that contradict the ontology, it is often desirable to incrementally revise the ontology according to the added data. In this paper, we address this problem using TBox revision, allowing TBox axioms to be revised by ABox assertions. To provide a model-theoretic characterisation of minimal change, we introduce and apply the type semantics for DL-Lite TBoxes. We define a concrete revision operator using the type semantics, and show that it possesses the desired properties.

15:51
Rational Elimination of DL-Lite TBox Axioms
SPEAKER: unknown

ABSTRACT. An essential task in managing description logic (DL) ontologies is the removal of problematic axioms. Such removal is formalised as the operation of contraction in belief change. In this paper, we investigate contraction over DL-Lite$_{\mathcal{R}}$ TBoxes so as to provide methods for eliminating problematic axioms in DL-Lite$_{\mathcal{R}}$ TBoxes. In belief change, a well-known approach to defining contraction is via epistemic entrenchments, which are essentially preference orderings over formulas. The approach, however, is not directly applicable to DLs, as classic belief change assumes an underlying logic different from DLs. Thus we reformulate the epistemic entrenchment approach so as to make it applicable to DL-Lite$_{\mathcal{R}}$. We then provide a representation theorem and a tractable algorithm for the reformulated approach.

15:54
Rational closure in SHIQ
SPEAKER: unknown

ABSTRACT. We propose a nonmonotonic extension of SHIQ, which underlies the OWL-DL language and does not enjoy the finite model property. We extend to SHIQ the well-established notion of rational closure, a nonmonotonic mechanism introduced by Lehmann and Magidor for propositional logic, built on top of the rational logic R. We start from an extension of SHIQ with a typicality operator T, whose aim is to select the typical (the most normal) instances of a concept C: T(C) stands for typical Cs. As a further contribution, we provide a semantic characterization of the rational closure for SHIQ, namely a minimal model semantics built upon standard Description Logic models enriched with a preference relation among domain elements.

15:57
DIP: A Defeasible-Inference Platform for OWL Ontologies
SPEAKER: unknown

ABSTRACT. The preferential approach to nonmonotonic reasoning was consolidated in depth by Kraus, Lehmann and Magidor (KLM) for propositional logic in the early 1990s. In recent years, there have been efforts to extend their framework to Description Logics (DLs) and a solid theoretical foundation has already been established towards this aim. Despite this foundation, and the fact that many of the desirable aspects of the approach generalise favourably to certain DLs, implementations thereof remain unpublished. We present a defeasible reasoning system for OWL ontologies demonstrating that we need not devise new decision procedures for certain preferential DLs. Our reasoning procedures are composed purely of classical DL decision steps, which allows us to immediately hinge upon existing OWL and DL systems for defeasible reasoning tool support.

14:30-16:00 Session 75AQ: Invited talk: Anne Abeille, then a talk contributed to NLCS (NLCS)
Location: FH, CAD 2
14:30
TBA
SPEAKER: Anne Abeille
15:30
Computing Italian clitic and relative clauses with tupled pregroups

ABSTRACT. Pregroup grammars were introduced by Lambek in 1999 as an algebraic tool for the formal analysis of natural languages. The main focus of the present paper is placed on a special extension of this calculus known as tupled pregroup grammars. Their applications to Italian concerning different sentential structures involving clitic pronouns, relative pronouns and related phenomena are taken into consideration, and the advantages of the tupled pregroup approach are shown from the point of view of both linguistic analysis and syntactic computation.

14:30-16:00 Session 75AR: Contributed Talks (LATD)
Location: MB, Festsaal
14:30
Embeddings into BiFL-algebras and conservativity
SPEAKER: unknown

ABSTRACT. We prove that every FL+ algebra can be embedded into a BiFL algebra and also provide equations that are preserved under this embedding. We obtain as corollary that the logic BiFL, which is the multiple conclusion version of Full Lambek calculus, is conservative over the logic defined by the dual-implication-free fragment of the corresponding calculus. The same conservativity result follows for all the axiomatic extensions by the above equations.

15:00
The lattice of varieties generated by small residuated lattices
SPEAKER: Peter Jipsen

ABSTRACT. We describe the bottom of the lattice of varieties of residuated lattices and FL-algebras by calculating the HS-poset of subdirectly irreducible residuated lattices up to size 5. Diagrams of parts of this poset are displayed, with the algebras grouped by well known subvarieties that contain them, such as commutative, integral, distributive and/or representable residuated lattices, BL-algebras, MV-algebras and Gödel algebras. From this data one can deduce some structural properties of residuated lattices.

15:30
Quasivarieties of MV-algebras and structurally complete Lukasiewicz logics
SPEAKER: Joan Gispert

ABSTRACT. This paper is a contribution to the study of the lattice of all quasivarieties of MV-algebras. For every proper subvariety V of MV-algebras we investigate the least quasivariety K in V that generates V as a variety. For every least V-quasivariety we obtain a finite set of generators. Finally we apply some of the algebraic results to obtain results in admissibility theory of Lukasiewicz logics.

14:30-16:00 Session 75AS: Contributed Talks (LATD)
Location: MB, Hörsaal 15
14:30
Boolean-valued judgment aggregation
SPEAKER: unknown

ABSTRACT. [From the abstract's concluding paragraph; FOR THE FULL ABSTRACT, PLEASE SEE THE ATTACHED FILE.]

We describe a framework for Boolean-valued judgment aggregation. While the major body of the literature on judgment aggregation draws attention to inconsistencies between properties of the agenda and properties of the aggregation rule, the simple (im)possibility results in this paper highlight the role of the set of truth values and its algebraic structure. In particular, it is shown that central properties of aggregation rules can be formulated as homomorphy or order-preservation conditions on the mapping between the powerset algebra over the set of individuals and the algebra of truth values. This is further evidence that the problems in aggregation theory are driven by information loss, which in our framework is given by a coarsening of the algebra of truth values.

15:00
Constructing many-valued logical functions with small influence of their variables
SPEAKER: unknown

ABSTRACT. Boolean functions with small influence of their inputs are used in collective coin flipping algorithms [Ben-Or, Linial: Collective coin flipping, Randomness and Computation 1989, 91-115]. In this contribution we replace the random bit generator with a random generator over a finite set, and we show the existence of finitely-valued Lukasiewicz formulas with small influence of their variables.
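
For context, the influence of the $i$-th variable of a Boolean function $f:\{0,1\}^n\to\{0,1\}$ is standardly defined as $\mathrm{Inf}_i(f)=\Pr_x[f(x)\neq f(x\oplus e_i)]$, the probability over a uniformly random input $x$ that flipping the $i$-th bit changes the output; the generalization considered here replaces the two-element domain by an arbitrary finite set, with the analogous notion for finitely-valued Lukasiewicz formulas.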

15:30
Coupling games for Lukasiewicz logic
SPEAKER: Alex Simpson

ABSTRACT. We propose a new game interpretation for Lukasiewicz logic. The game is played between two players, Maximizer and Minimizer, who respectively try to maximize and minimize the probability that the property expressed by a formula holds. The mechanism used to achieve this is that players construct couplings of probability measures.

14:30-16:00 Session 75AT: Technical session (EC2)
Location: FH, Seminarraum 101C
14:30
Abstractions for Relaxed Memory Models
SPEAKER: Eran Yahav

ABSTRACT. We address the problems of verification and fence inference in infinite-state concurrent programs running on relaxed memory models such as TSO and PSO. The talk surveys techniques using predicate abstraction as well as techniques based on numerical domains with refinement propagation. We show how these techniques can be effectively applied to verify challenging concurrent algorithms, including state-of-the-art concurrent work-stealing queues.

15:30
Group Communication Patterns for High Performance Computing in Scala
SPEAKER: unknown

ABSTRACT. We developed a novel Functional Object-Oriented PARallel framework (FOOPAR) for analyzable high-level high-performance computing in Scala. Central to this framework are Distributed Memory Parallel Data structures, i.e., collections of data distributed in a shared-nothing system together with parallel operations on these data.

In this position paper we briefly present FOOPAR’s design and focus on its properties w.r.t. analyzability and correctness. We prove the correctness and safety of one communication algorithm and show how specification testing (via ScalaCheck) can be used to bridge the gap between proof and implementation.

14:30-16:00 Session 75AU: Arrays and Polymorphism (SMT)
Location: FH, Seminarraum 104
14:30
Weakly Equivalent Arrays
SPEAKER: unknown

ABSTRACT. The (extensional) theory of arrays is widely used to model systems. Hence, efficient decision procedures are needed to model check such systems. Current decision procedures for the theory of arrays saturate the read-over-write and extensionality axioms originally proposed by McCarthy. Various filters are used to limit the number of axiom instantiations while preserving completeness. We present an algorithm that lazily instantiates lemmas based on weak equivalence classes. These lemmas are easier to interpolate as they only contain existing terms. We formally define weak equivalence and show correctness of the resulting decision procedure.
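
For reference, the McCarthy axioms mentioned above are read-over-write, $\mathit{read}(\mathit{write}(a,i,v),i)=v$ and $i\neq j\rightarrow \mathit{read}(\mathit{write}(a,i,v),j)=\mathit{read}(a,j)$, together with extensionality, $(\forall i.\ \mathit{read}(a,i)=\mathit{read}(b,i))\rightarrow a=b$.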

15:00
Decision Procedures for Flat Array Properties

ABSTRACT. We present new decidability results for quantified fragments of theories of arrays. Our decision procedures are fully declarative, parametric in the theories of indexes and elements and orthogonal with respect to known results. We also discuss applications to the analysis of programs handling arrays.

15:30
Extending SMT-LIB v2 with lambda-terms and polymorphism
SPEAKER: unknown

ABSTRACT. This paper describes two syntactic extensions to SMT-LIB proof scripts: lambda-expressions and polymorphism. After extending the syntax to allow these expressions, we show how to update the typing rules of SMT-LIB to check the validity of these new terms and commands. Since most SMT solvers only deal with many-sorted first-order formulas, we detail a monomorphization mechanism that allows the use of polymorphism in the SMT-LIB syntax while retaining a monomorphic solver core.
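
A minimal sketch of the idea behind such a monomorphization pass (illustrative only; the paper's mechanism for SMT-LIB is more complete): collect the ground type instantiations at which each polymorphic symbol is used, and emit one monomorphic copy per instantiation.

    def monomorphize(poly_decls, uses):
        """poly_decls: name -> list of type variables of the symbol.
        uses: list of (name, tuple_of_ground_types) occurrences.
        Returns monomorphic declarations, one per distinct
        instantiation, and a renaming for rewriting the formula."""
        insts = {}
        for name, tys in uses:
            assert len(tys) == len(poly_decls[name])
            insts.setdefault(name, set()).add(tys)
        decls, renaming = [], {}
        for name, tyss in insts.items():
            for tys in sorted(tyss):
                mono = name + '_' + '_'.join(tys)
                decls.append((mono, tys))
                renaming[(name, tys)] = mono
        return decls, renaming

    # Two uses of a polymorphic 'select' at different ground types
    # become two distinct monomorphic symbols.
    uses = [('select', ('Int', 'Bool')), ('select', ('Int', 'Int')),
            ('select', ('Int', 'Bool'))]
    print(monomorphize({'select': ['I', 'E']}, uses))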

14:30-16:00 Session 75AV: Contributed Talks: Calculus and Termination (PSC)
Location: FH, CAD 1
14:30
A typed lambda-calculus with call-by-name and call-by-value iteration

ABSTRACT. We define a simply typed lambda calculus with positive recursive types in which we study the less well-known Scott encoding of data types. Scott data types have "case distinction" as a basic feature. We shed a different light on the Scott data types by viewing the different cases as "continuations". As Scott data types do not have iteration (or recursion) built in, we add as primitives a "call-by-name iterator" and a "call-by-value iterator" with appropriate reduction rules. We show that the calculus is strongly normalizing.
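
To make the "cases as continuations" reading concrete, here is an untyped rendering of the Scott numerals in Python (an illustration of the encoding, not the paper's typed calculus): a numeral takes one continuation per constructor, so case distinction is just application, and iteration must be supplied externally.

    # Scott numerals: a numeral chooses between two continuations.
    zero = lambda z: lambda s: z
    succ = lambda n: lambda z: lambda s: s(n)

    # Case distinction is application: pattern matching comes for free.
    is_zero = lambda n: n(True)(lambda pred: False)

    # Iteration is NOT definable from the encoding alone; we add it
    # externally (here via host-language recursion, call-by-value flavour).
    def iterate(f, x, n):
        return n(x)(lambda pred: f(iterate(f, x, pred)))

    three = succ(succ(succ(zero)))
    print(is_zero(zero), is_zero(three))        # True False
    print(iterate(lambda k: k + 1, 0, three))   # 3 (convert to int)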

15:15
Effective Bounds on the Podelski-Rybalchenko Termination Theorem
SPEAKER: unknown

ABSTRACT. We report here on current work towards an effective proof (with explicit bounds) of Podelski and Rybalchenko's Termination Theorem [1]. Our long-term project is to obtain a priori bounds for the termination of computer programs, and to compare these with bounds obtained via other intuitionistic proofs of the Termination Theorem.

References

1. Andreas Podelski and Andrey Rybalchenko. Transition invariants. In LICS, pages 32-41, 2004.

14:30-16:00 Session 75AW: Invited Talk by Pavel Pudlak, Discussions (iPRA)
Location: FH, Seminarraum 101A
14:30
Feasible Interpolation in Proof Systems based on Integer Linear Programming
SPEAKER: Pavel Pudlak

ABSTRACT. Feasible interpolation has been proved for several proof systems based on integer linear programming. To prove such a theorem one has to design a polynomial time algorithm. The methods which are used to design these algorithms range from a very simple algorithm, used in the cutting plane proof system, to semidefinite programming, used in a version of Lovasz-Schrijver proof system. This reflects the increasing strength of the proof systems.

15:45
Discussions
SPEAKER: unknown
14:30-15:00 Session 75AX: Contributed talk (PCCR)
Location: FH, Hörsaal 3
14:30
Intuitionistic modal logics: efficient fragments and parameterized complexity
SPEAKER: R. Ramanujam

ABSTRACT. A basic problem of logical inference systems is that of derivability: given a set of formulas X and a formula alpha, can alpha be derived from X using the inference rules? When the logical language is designed to express computational properties, the complexity of derivability assumes importance, and for most reasoning systems of even the propositional kind, the problem is hard. It is in this context that parameterized complexity offers hope for coping with hardness.

14:30-16:00 Session 75AY: Applications & Implementation (PLP)
Location: FH, Seminarraum 134
14:30
Exploring probabilistic grammars of symbolic music using PRISM
SPEAKER: unknown

ABSTRACT. In this paper we describe how we used the logic-based probabilistic programming language PRISM to conduct a systematic comparison of several probabilistic models of symbolic music, including zeroth- and first-order Markov models over pitches and intervals, and a probabilistic grammar with two parameterisations. Using PRISM allows us to take advantage of variational Bayesian methods for assessing the goodness of fit of the models. When applied to a corpus of Bach chorales and the Essen folk song collection, we found that, depending on various parameters, the probabilistic grammars sometimes, but not always, outperform the simple Markov models. Examining how the models perform on smaller subsets of pieces, we find that the simpler Markov models do outperform the best grammar-based model at the small end of the scale.
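
As a point of reference for the simplest of the compared models, the following Python sketch (illustrative only; the paper works in PRISM and fits parameters with variational Bayes, which this omits) fits a first-order Markov model over pitches by counting transitions and scores a melody by smoothed log-likelihood. The corpus here is invented.

    from collections import defaultdict
    import math

    def fit(melodies):
        # Count pitch-to-pitch transitions across the corpus.
        counts = defaultdict(lambda: defaultdict(int))
        for m in melodies:
            for prev, nxt in zip(m, m[1:]):
                counts[prev][nxt] += 1
        return counts

    def log_likelihood(counts, melody, alphabet):
        # Add-one smoothing stands in for a proper Bayesian treatment.
        ll = 0.0
        for prev, nxt in zip(melody, melody[1:]):
            row = counts[prev]
            total = sum(row.values()) + len(alphabet)
            ll += math.log((row[nxt] + 1) / total)
        return ll

    corpus = [[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]   # MIDI pitches
    model = fit(corpus)
    print(log_likelihood(model, [60, 62, 64], {60, 62, 64, 67}))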

15:00
An OpenCL implementation of a forward sampling algorithm for CP-logic
SPEAKER: unknown

ABSTRACT. We present an approximate query answering algorithm for the Probabilistic Logic Programming language CP-logic. It complements existing sampling algorithms by applying the rules from body to head rather than in the opposite direction. We present an implementation in OpenCL, which is able to exploit the multicore architecture of modern GPUs to compute a large number of samples in parallel, and demonstrate that it is competitive with existing inference algorithms.
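
The body-to-head direction can be pictured with a small sequential sketch in Python (the rule set and query are invented; the paper's contribution is the parallel OpenCL implementation, which this does not model): starting from the empty interpretation, each applicable rule fires once and samples at most one of its head alternatives.

    import random

    # Each rule: (body_atoms, [(head_atom, prob), ...]); any residual
    # probability mass means the rule has no effect.
    rules = [
        (set(),        [("burglary", 0.1)]),
        ({"burglary"}, [("alarm", 0.9)]),
        ({"alarm"},    [("call_john", 0.8), ("call_mary", 0.1)]),
    ]

    def sample():
        true_atoms, fired, changed = set(), set(), True
        while changed:
            changed = False
            for i, (body, head) in enumerate(rules):
                if i in fired or not body <= true_atoms:
                    continue
                fired.add(i)                   # each rule fires at most once
                r, acc = random.random(), 0.0
                for atom, p in head:
                    acc += p
                    if r < acc:
                        true_atoms.add(atom)
                        break
                changed = True
        return true_atoms

    hits = sum("call_john" in sample() for _ in range(10000))
    print(hits / 10000)    # Monte Carlo estimate of P(call_john), about 0.072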

15:30
Lifted Inference for Probabilistic Logic Programs
SPEAKER: unknown

ABSTRACT. First-order model counting emerged recently as a novel reasoning task, at the core of efficient algorithms for probabilistic logics like MLNs. For certain subsets of first-order logic, lifted model counters were shown to run in time polynomial in the number of objects in the domain of discourse, where propositional model counters require exponential time. However, these guarantees apply only to Skolem normal form theories (i.e., no existential quantifiers). Since textbook Skolemization is not sound for model counting, these restrictions precluded efficient model counting for directed models, such as probabilistic logic programs, which rely on existential quantification. Recently, we presented a novel Skolemization algorithm for model counting problems that eliminates existential quantifiers from a first-order logic theory without changing its weighted model count. Our Skolemization procedure extends the applicability of first-order model counters to probabilistic logic programming.

14:30-16:00 Session 75AZ (VEMDP)
Location: FH, Seminarraum 136
14:30
Morphisms of reaction networks that couple structure to function
SPEAKER: Luca Cardelli
15:20
Verifying CRN Implementations: A Pathway Decomposition Approach

ABSTRACT. The emerging fields of genetic engineering, synthetic biology, DNA computing, DNA nanotechnology, and molecular programming herald the birth of a new information technology that acquires information by directly sensing molecules within a chemical environment, stores information in molecules such as DNA, RNA, and proteins, processes that information by means of chemical and biochemical transformations, and uses that information to direct the manipulation of matter at the nanometer scale. To scale up beyond current proof-of-principle demonstrations, new methods for managing the complexity of designed molecular systems will need to be developed. Here we focus on the challenge of verifying the correctness of molecular implementations of abstract chemical reaction networks, where operation in a well-mixed "soup" of molecules is stochastic, asynchronous, concurrent, and often involves multiple intermediate steps in the implementation, parallel pathways, and side reactions. This problem relates to the verification of Petri Nets, but existing approaches are not sufficient for certain situations that commonly arise in molecular implementations, such as what we call "delayed choice." We formulate a new theory of pathway decomposition that provides an elegant formal basis for comparing chemical reaction network implementations, and we present an algorithm that computes this basis. We further show how pathway decomposition can be combined with bisimulation to handle a wider class that includes all currently known enzyme-free DNA implementation techniques. We anticipate that our notion of logical equivalence between chemical reaction network implementations will be valuable for other molecular implementations such as biochemical enzyme systems, and perhaps even more broadly in concurrency theory.

15:45
Lightning talks for posters/demos
SPEAKER: Chris Thachuk
14:30-16:00 Session 75BA: Planning & Scheduling (RCRA)
Location: MB, Hörsaal 12
14:30
A Systematic Analysis of Levels of Integration between High-Level Task Planning and Low-Level Feasibility Checks
SPEAKER: unknown

ABSTRACT. We provide a systematic analysis of levels of integration between discrete high-level reasoning and continuous low-level feasibility checks to address hybrid planning problems in robotic applications. We identify four distinct strategies for such an integration: (i) low-level checks are done for all possible cases in advance and this information is then used during plan generation; (ii) low-level checks are done exactly when they are needed during the search for a plan; (iii) low-level checks are done after a plan is computed, and a new plan is computed if the plan is found infeasible; (iv) as in the previous replanning strategy, but each new plan is computed subject to the constraints obtained from previous low-level checks (see the sketch below). We perform experiments on hybrid planning problems in the housekeeping domain considering these four methods of integration, as well as some of their combinations. We analyze the usefulness of the different levels of integration in this domain, both from the point of view of computational efficiency (in time and space) and from the point of view of plan quality relative to feasibility. We discuss the advantages and disadvantages of each strategy in the light of the experimental results.
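
A rough Python sketch of strategies (iii) and (iv) (our own reconstruction; all functions are placeholders, instantiated below with a toy example): plan, check feasibility at the low level, and replan, optionally accumulating constraints learned from failed checks.

    def plan_with_feasibility(generate_plan, feasible, learn_constraint):
        constraints = set()
        while True:
            plan = generate_plan(constraints)
            if plan is None:
                return None                  # candidate space exhausted
            failure = feasible(plan)         # None means all checks pass
            if failure is None:
                return plan
            # Strategy (iv): remember why the plan failed. Dropping this
            # line (and replanning blindly) corresponds to strategy (iii).
            constraints.add(learn_constraint(failure))

    # Toy instantiation: candidate plans are the numbers 0..9, even ones
    # fail the low-level check, and each failure rules out that candidate.
    print(plan_with_feasibility(
        generate_plan=lambda cs: next((p for p in range(10) if p not in cs), None),
        feasible=lambda p: None if p % 2 else p,
        learn_constraint=lambda f: f))       # prints 1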

15:00
A Discrete Differential Evolution Algorithm for the Total Flowtime Flowshop Scheduling Problem
SPEAKER: unknown

ABSTRACT. In this paper we propose a new discrete Differential Evolution algorithm for the Permutation Flowshop Scheduling Problem with the total flowtime criterion. The core of the algorithm is a distance-based differential mutation operator defined by means of a new randomized bubble sort algorithm. This mutation scheme allows the Differential Evolution to navigate the permutation search space directly. Experiments were carried out on a well-known benchmark suite; the results show that our proposal outperforms state-of-the-art algorithms on the majority of the problems.
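
The flavour of a distance-based differential mutation on permutations can be sketched as follows (a hedged Python illustration of the general idea, not the paper's operator): bubble sort yields a sequence of adjacent swaps transforming one permutation into another, and applying a random fraction of those swaps moves part of the way between them.

    import random

    def bubble_swaps(src, dst):
        # Adjacent transpositions that sort src into the order of dst.
        pos = {v: i for i, v in enumerate(dst)}
        a, swaps = list(src), []
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if pos[a[j]] > pos[a[j + 1]]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swaps.append(j)
        return swaps

    def move_towards(x, y, f):
        # Apply roughly a fraction f of the swaps leading from x to y;
        # the result is always a valid permutation.
        z = list(x)
        for j in bubble_swaps(x, y):
            if random.random() < f:
                z[j], z[j + 1] = z[j + 1], z[j]
        return z

    print(move_towards([3, 1, 4, 2], [1, 2, 3, 4], 0.5))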

14:30-15:30 Session 75BB: Invited Talk (NMR)
Location: EI, EI 9
14:30
Four Floors for the Theory of Theory Change
SPEAKER: Hans Rott

ABSTRACT. The theory of theory change due to Alchourrón, Gärdenfors, and Makinson ("AGM") has been widely known as being characterised by two packages of postulates. The basic package consists of six postulates and is very weak, the full package adds two further postulates and is very strong. Revisiting the three classic constructions of partial meet contraction, safe contraction and entrenchment-based construction and tracing the idea of limited discriminative powers in agents, I argue that four intermediate levels can be distinguished that play important roles within the AGM theory.

15:00-16:00 Session 76A: Invited talk (LC)
Location: MB, Prechtlsaal
15:00
Generalizations of Hilbert's Tenth Problem

ABSTRACT. Hilbert's Tenth Problem in its original form was to find an algorithm to decide, given a multivariate polynomial equation with integer coefficients, whether it has a solution over the integers. In 1970 Matiyasevich, building on work by Davis, Putnam and Robinson, proved that no such algorithm exists, i.e. Hilbert's Tenth Problem is undecidable. Since then, analogues of this problem have been studied by asking the same question for polynomial equations with coefficients and solutions in other commutative rings. The biggest open problem in the area is Hilbert's Tenth Problem over the rational numbers. In this talk we will construct some subrings R of the rationals that have the property that Hilbert's Tenth Problem for R is Turing equivalent to Hilbert's Tenth Problem over the rationals. We will also discuss some recent undecidability results for function fields of positive characteristic.

15:20-15:45 Session 77: Technical Session (LFMTP)
Location: FH, Zeichensaal 1
15:20
Automatically Deriving Schematic Theorems for Dynamic Contexts

ABSTRACT. Hypothetical judgments go hand-in-hand with higher-order abstract syntax for meta-theoretic reasoning. Such judgments have two kinds of assumptions: those that are statically known from the specification, and the dynamic assumptions that result from building derivations out of the specification clauses. It is well known that these dynamic assumptions often have a simple regular structure of repetitions of blocks of assumptions that are introduced at the same time, with each block generally involving one or several variables and their properties that are added to the context in a single backchaining step. When this regular structure of a dynamic context is statically known, it is possible to automatically reflect on the structure to derive structural properties of its elements.

In this work, we present an extension of the Abella theorem prover, which is based on a simply typed intuitionistic reasoning logic supporting inductive and co-inductive definitions, where dynamic contexts are traditionally specified as inductive characterizations of lists of formulas. We add a new mechanism for defining particular kinds of regular context specifications, called schemas, and tacticals to derive theorems from these schemas as needed. Importantly, our extension leaves the core tactics and proof language of Abella unchanged, so it is manifestly certifying and fully compatible with existing developments. We show that these tacticals can eliminate many commonly encountered kinds of administrative lemmas that would otherwise have to be proven manually, which is a common source of complaints from Abella users.

15:30-16:00 Session 78A: UITP Contributed Talks - (Coq) (UITP)
Location: FH, Seminarraum 107
15:30
PIDE for Asynchronous Interaction with Coq
SPEAKER: Carst Tankink

ABSTRACT. This paper describes the initial progress towards integrating the Coq proof assistant with the PIDE architecture initially developed for Isabelle. The architecture is aimed at asynchronous, parallel interaction with proof assistants, and is tied in heavily with a plugin that allows the jEdit editor to work with Isabelle.

We have made some generalizations to the PIDE architecture to accommodate provers other than Isabelle, and adapted Coq to understand the core protocol: this delivered a working system in about two man-months.

15:30-16:00 Session 78B: Contributed Talk: Declarative Programming 2 (NMR)
Location: EI, EI 9
15:30
On Strong and Default Negation in Logic Program Updates
SPEAKER: Joao Leite

ABSTRACT. Existing semantics for answer-set program updates fall into two categories: either they consider only strong negation in heads of rules, or they primarily rely on default negation in heads of rules and optionally provide support for strong negation by means of a syntactic transformation. In this paper we pinpoint the limitations of both these approaches and argue that both types of negation should be first-class citizens in the context of updates. We identify principles that plausibly constrain their interaction but are not simultaneously satisfied by any existing rule update semantics. Then we extend one of the most advanced semantics with direct support for strong negation and show that it satisfies the outlined principles as well as a variety of other desirable properties.

15:30-16:00 Session 78C: Technical Session (HCVS)
Location: FH, Seminarraum 134A
15:30
Please consult the program at http://vsl2014.at/hcvs
SPEAKER: T.B.A
16:00-16:30Coffee Break
16:30-19:00 Session 79A: VSL Joint Award Ceremony 1 (VSL)
Location: MB, Kuppelsaal
16:30
Foundations and Technology Competitions Award Ceremony

ABSTRACT. The third round of the Kurt Gödel Research Prize Fellowships Program, under the title Connecting Foundations and Technology, aims at supporting young scholars in the early stages of their academic careers by offering some of the highest fellowships in the history of logic, kindly supported by the John Templeton Foundation. Young scholars who are at most 40 years old at the commencement of the Vienna Summer of Logic (July 9, 2014) will be awarded one fellowship of EUR 100,000 in each of the following categories:

  • Logical Foundations of Mathematics,
  • Logical Foundations of Computer Science, and
  • Logical Foundations of Artificial Intelligence

The following three Boards of Jurors were in charge of choosing the winners:

  • Logical Foundations of Mathematics: Jan Krajíček, Angus Macintyre, and Dana Scott (Chair).
  • Logical Foundations of Computer Science: Franz Baader, Johann Makowsky, and Wolfgang Thomas (Chair).
  • Logical Foundations of Artificial Intelligence: Luigia Carlucci Aiello, Georg Gottlob (Chair), and Bernhard Nebel.

http://fellowship.logic.at/

17:30
FLoC Olympic Games Award Ceremony 1

ABSTRACT. The aim of the FLoC Olympic Games is to start a tradition in the spirit of the ancient Olympic Games, a Panhellenic sports festival held every four years in the sanctuary of Olympia in Greece, this time within the scientific community of computational logic. Every four years, as part of the Federated Logic Conference, the Games will bring together the competitive disciplines of computational logic in the form of solver competitions.

At the Award Ceremonies, the competition organizers will have the opportunity to present their competitions to the public and to award special prizes, the prestigious Kurt Gödel medals, to their successful competitors. This reinforces the main goal of the FLoC Olympic Games: to raise the visibility of the competitions associated with the conferences and workshops of the Federated Logic Conference during the Vienna Summer of Logic.

This award ceremony will host the

  • 3rd Confluence Competition (CoCo 2014);
  • Configurable SAT Solver Challenge (CSSC 2014);
  • Ninth Max-SAT Evaluation (Max-SAT 2014);
  • QBF Gallery 2014; and
  • SAT Competition 2014 (SAT-COMP 2014).
18:15
FLoC Closing Week 1
SPEAKER: Helmut Veith
16:30-18:00 Session 80A: NSV Invited Talk: Sumit Kumar Jha (NSV)
Location: FH, Hörsaal 2
16:30
Verifying Parameterized Software Models in Computational Data Science against Behavioral Specifications

ABSTRACT. Computational data science uses a variety of numerical software models to understand, predict and control the evolution of complex systems. For example, computational systems biology employs multi-scale mechanistic stochastic rule-based models while culturomics uses detailed agent-based models. Because of the inherent uncertainty involved in precisely measuring natural systems, software models employed in computational data science include a number of unknown or imprecisely known parameters. A fundamental problem in developing such software models is to discover the set of parameter values that enable a computational model to satisfy a given set of behavioral specifications. In this talk, we will present a new massively parallel extreme-scale parameter discovery algorithm for complex intelligent software models. We will discuss preliminary results obtained using a parallel implementation of the algorithm on a small 1,792-core CUDA cluster. The talk will also identify opportunities for future research in verifying numerical software being deployed by computational data scientists.

16:30-18:00 Session 80B: UITP Contributed Talks - (Isabelle, IDE, Equational Reasoning) (UITP)
Location: FH, Seminarraum 107
16:30
System description: Isabelle/jEdit in 2014

ABSTRACT. This is an updated system description for Isabelle/jEdit, according to the state of the official release Isabelle2013-2 (December 2013) and the forthcoming release Isabelle2014 that is anticipated for Summer 2014. The following new PIDE concepts are explained: asynchronous print functions and document overlays, syntactic and semantic completion, editor navigation, management of auxiliary files within the document-model.

See also Isabelle2014-RC0, which is the same pre-release snapshot that will be used in the Isabelle tutorial.

17:00
A Logic-Independent IDE
SPEAKER: Florian Rabe

ABSTRACT. Interactive deduction systems are becoming increasingly powerful and practical. While design has historically focused on questions such as the choice of logic or the strength of the prover, modern systems must additionally satisfy practical requirements such as good IDEs or easily-searchable libraries. We hypothesize that many such practical requirements can be solved independently of the specific logic or prover, and that doing so will enable a fruitful separation of concerns between logic/prover and user interface developers.

To that end, we design and implement a logic-independent IDE-style user interface, which we base on the jEdit editor and the MMT language. jEdit is a full-fledged and extensible text editor, and MMT is a logic-independent representation format that already offers a variety of generic knowledge management services. Taken together, they let us realize strong user support at comparatively little cost: Example features of our IDE include hyperlinking, awareness of inferred information, library browsing, search, and change management.

17:30
UTP2: Higher-Order Equational Reasoning by Pointing

ABSTRACT. We describe a prototype theorem prover, UTP2, developed to match the style of hand-written proof work in the Unifying Theories of Programming semantical framework. It is based on alphabetised predicates in a 2nd-order logic, with a strong emphasis on equational reasoning. We present here an overview of the user interface of this prover, which was developed from the outset using a point-and-click approach. We contrast this with the command-line paradigm that continues to dominate mainstream theorem provers, and raise the question: can we have the best of both worlds?

16:30-18:00 Session 80C: New Frontiers (WST)
Location: FH, Seminarraum 101B
16:30
Ordering Networks

ABSTRACT. This extended abstract discusses the problem of defining quasi-orders that are suitable for use with network rewriting.

17:00
Discussion
SPEAKER: Everyone
16:30-17:20 Session 80D: Invited Talk (LFMTP)
Location: FH, Zeichensaal 1
16:30
Idris: Implementing a Dependently Typed Programming Language
SPEAKER: Edwin Brady

ABSTRACT. Idris is a purely functional programming language with dependent types. As well as supporting theorem proving, Idris is intended to be a general purpose programming language. As such, it provides high-level concepts such as implicit syntax, type classes and "do" notation.

Many components of a dependently-typed programming language are by now well understood, for example the underlying type theory, type checking, unification and evaluation. How to combine these components into a realistic and usable high-level language is, however, folklore, discovered anew by successive language implementors. In this talk, I will give an overview of the implementation of Idris, showing how it brings these components together into a complete programming language.

I will briefly describe the high-level language and the underlying type theory, and present a tactic-based method for elaborating concrete high-level syntax with implicit arguments and type classes into a fully explicit core type theory. Furthermore, I will show how this method facilitates the implementation of new high-level language constructs.

16:30-18:00 Session 80E: Technical Session + Discussion (HCVS)
Location: FH, Seminarraum 134A
16:30
Please consult the program at http://vsl2014.at/hcvs
SPEAKER: T.B.A
16:30-18:00 Session 80F (REORDER)
Location: FH, Hörsaal 4
16:30
Panel session
SPEAKER: Panel Session
16:30-18:30 Session 80G: Program Analysis (PAS)
Location: FH, Seminarraum 138A
16:30
Successfully Coping with Scalability of Symbolic Analysis in Timing Analysis
SPEAKER: unknown

ABSTRACT. Symbolic analysis is known and valued for its strength in statically inferring non-trivial properties of programs. This strength, however, typically comes at high computational cost, often preventing full path coverage of real-world programs.

In this presentation we reconsider and highlight two examples of successfully coping with and circumventing the scalability issues arising from straightforward applications of symbolic analysis in the timing analysis of real-time systems. In the first approach, we restrict symbolic analysis to program loops matching particular structural patterns, for which symbolic analysis reduces to solving a set of recurrence equations and does not require an explicit symbolic path analysis and coverage of these loops. In the second approach, we focus the pathwise symbolic analysis of a program on a set of program paths, known from other program analyses to be crucial for the program's timing behaviour, that is usually small and manageable. Both approaches have important, complementary applications in different stages of the timing analysis of safety-critical real-time systems and have proved useful in practice.
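
For the first approach, a simplified, hypothetical instance of such a structural pattern is a loop whose counter satisfies the recurrence i(t+1) = c*i(t) + d with i(0) = a, exiting once i >= n; the iteration bound then follows from the closed form of the recurrence rather than from path enumeration. A Python sketch:

    import math

    def iteration_bound(a, c, d, n):
        # Closed form: i(t) = c**t * (a + k) - k with k = d / (c - 1),
        # so the bound is the least t with i(t) >= n.
        assert c > 1 and a > 0 and d >= 0
        k = d / (c - 1)
        return max(0, math.ceil(math.log((n + k) / (a + k), c)))

    print(iteration_bound(a=1, c=2, d=0, n=1000))   # 10 for a doubling loop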

17:00
Confluence of Pattern-Based Calculi with Finitary Matching
SPEAKER: unknown

ABSTRACT. The pattern calculus described in this paper integrates the functional mechanism of the lambda-calculus with the capabilities of finitary pattern matching algorithms. We propose a generic confluence proof, where the way pattern abstractions are applied in a non-deterministic calculus is axiomatized. Instances of the matching function are presented.

17:30
The Verification of Conversion Algorithms between Finite Automata

ABSTRACT. This paper presents three verified conversion algorithms between finite automata in Isabelle/HOL. The specifications of all algorithms are formalized in a direct way, from which Isabelle/HOL can generate executable programs. The correctness of each algorithm is fully verified from three perspectives: the loops in each algorithm terminate, the output of each algorithm is an automaton of the expected type, and the input and output of each algorithm accept the same language. The whole development comprises over 3000 lines of Isabelle/HOL code.
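
One of the classic conversions in question is the subset construction from an NFA to a DFA; the sketch below gives it in Python purely as a reminder of the algorithm being verified (the paper's artefact is the Isabelle/HOL formalization, not this code).

    def nfa_to_dfa(alphabet, delta, start, accepting):
        # delta: dict mapping (state, symbol) -> set of successor states.
        start_set = frozenset([start])
        dfa_delta, todo, seen = {}, [start_set], {start_set}
        while todo:
            S = todo.pop()
            for a in alphabet:
                T = frozenset(q for s in S for q in delta.get((s, a), set()))
                dfa_delta[(S, a)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
        dfa_accepting = {S for S in seen if S & accepting}
        return seen, dfa_delta, start_set, dfa_accepting

    # NFA over {a, b} accepting words ending in "ab".
    delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
    print(nfa_to_dfa('ab', delta, 0, {2})[3])   # the accepting DFA states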

18:00
Source code profiling and classification for automated detection of logical errors

ABSTRACT. Research and industrial experience reveal that code reviews, as part of software inspection, may be the most cost-effective technique a team can use to reduce defects. Tools that automate code inspection mostly focus on the detection of a priori known defect patterns and security vulnerabilities. Automated detection of logical errors due to a faulty implementation of an application's functionality is relatively uncharted territory. Automation can be based on profiling the intended behavior behind the source code. In this paper, we present a code profiling method based on token classification. Our method combines information flow analysis, the crosschecking of dynamic invariants with symbolic execution, and code classification heuristics with the use of a fuzzy logic system. Our goal is to detect logical errors and exploitable vulnerabilities. The theoretical underpinnings and the practical implementation of our approach are discussed. We test the APP_LogGIC tool that implements the proposed analysis on two real-world applications. The results show that profiling the intended program behavior is feasible in diverse applications. We discuss in detail the heuristics used to overcome the problems of state-space explosion and of large data sets. Code metrics and test results are provided to demonstrate the effectiveness of the proposed approach. This paper extends work reported in an article recently submitted to an international conference and currently pending review. In this extended version we propose new classification mechanisms and provide a detailed description of the source code classification techniques and heuristics used.

16:30-18:00 Session 80H: Contributed talks to NLCS (NLCS)
Location: FH, CAD 2
16:30
Algebraic Effects and Handlers in Natural Language Interpretation

ABSTRACT. Phenomena at the syntax-semantics interface of natural languages have been observed to have links with programming language semantics, namely computational effects and evaluation order. We explore this connection in order to profit from recent developments in the study of effects. We propose adopting algebraic effects and handlers as tools for facilitating a uniform and integrated treatment of different non-compositional phenomena at the syntax-semantics interface.

In our paper, we give an exposition of the framework of algebraic effects and handlers with an eye towards its applicability in computational semantics. We then present some exemplary analyses in the framework: we study the interplay of anaphora and quantification by translating the continuation-based dynamic logic of de Groote into a more DRT-like theory and we propose a treatment of overt wh-movement which avoids higher-order types in the syntax.
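
A minimal sketch of the effects-and-handlers idea, using Python generators in place of a proper calculus (the effect names and the discourse reading are our own illustration): a computation yields effect operations, and a handler interprets each one and resumes the computation with a result.

    def handle(computation, handlers):
        gen = computation()
        try:
            op, arg = next(gen)              # run to the first effect
            while True:
                result = handlers[op](arg)   # interpret the operation...
                op, arg = gen.send(result)   # ...and resume with its result
        except StopIteration as stop:
            return stop.value                # computation finished normally

    def program():
        name = yield ("ask", "speaker")      # effect: resolve a referent
        yield ("tell", name + " arrived")    # effect: extend the discourse
        return name

    log = []
    print(handle(program, {
        "ask":  lambda key: "Mary",
        "tell": lambda msg: log.append(msg),
    }), log)                                 # Mary ['Mary arrived']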

17:00
Solving Partee's Temperature Puzzle in an EFL-Ontology

ABSTRACT. According to the received view of type-logical semantics (suggested by Montague and adopted by many of his successors), the number of a semantics' basic types grows in proportion to the syntactic and lexical diversity of the modeled natural language fragment. This paper provides a counterexample to this principle. In particular, it shows that Partee's temperature puzzle from [PTQ], whose solution is commonly taken to require a basic type for indices (for the formation of individual concepts) or for individual concepts, can be interpreted in the poorer type system of [EFL], which assumes only basic individuals and propositions. This result contributes to the general project of identifying the minimal semantic requirements on models for certain linguistic fragments.

17:30
Toward a Logic of Cumulative Quantification
SPEAKER: unknown

ABSTRACT. This paper studies entailments between sentences like "three boys kissed five girls" involving two or more numerically quantified noun phrases that are interpreted as expressing cumulative quantification in the sense of Scha (1984). A precise characterization of when one such sentence entails another such sentence that differs from it only in the numerals is crucially important to evaluate claims about scalar implicatures arising from the use of those sentences, as pointed out by Shimada (to appear). This problem turns out to be non-trivial and surprisingly difficult. We give a characterization of these entailments for the case of sentences with two noun phrases, together with a complete axiomatization consisting of two pairs of simple inference rules. We also give some valid inference rules for sentences with three noun phrases.
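
For orientation, the cumulative reading in the sense of Scha (1984) of "three boys kissed five girls" can be rendered as follows (a standard textbook formulation, one of several possible renderings):

    \[
    \exists X \subseteq \mathit{boy}\;\exists Y \subseteq \mathit{girl}\,
    \bigl(\, |X| = 3 \;\wedge\; |Y| = 5
      \;\wedge\; \forall x \in X\,\exists y \in Y\: \mathit{kiss}(x,y)
      \;\wedge\; \forall y \in Y\,\exists x \in X\: \mathit{kiss}(x,y) \,\bigr)
    \]

Whether such a sentence entails a variant with smaller numerals is then non-obvious: shrinking $X$ may break the coverage condition on $Y$, which is one way to see why the entailment problem the paper axiomatizes is non-trivial.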

16:30-18:00 Session 80I: Contributed Talks (LATD)
Location: MB, Festsaal
16:30
Densification via polynomial extensions

ABSTRACT. Inspired by the proof of standard completeness for Uninorm Logic via residuated frames, we present a proof that is completely algebraic and resembles standard constructions of ring extensions from classical algebra.

17:00
Introducing an exotic MTL-chain
SPEAKER: Felix Bou

ABSTRACT. We exhibit an equation, involving only the lattice operations (meet and join) and the monoidal operation, that is valid in all BL-chains but fails in some MTL-chains.

17:30
On Pocrims and Hoops
SPEAKER: unknown

ABSTRACT. This extended abstract presents recent work on the algebraic structures known as pocrims and hoops. We outline a new method for verifying identities over hoops and give examples of its use. Of particular interest are the identities which show that the double negation mapping in a hoop is a hoop endomorphism. We undertake an algebraic analysis of the double negation translations of Kolmogorov, Gentzen and Glivenko, viewed as variants on the standard semantics for intuitionistic affine logic with respect to a given class of bounded pocrims. We give a semantic version of Troelstra's requirements for a correct double negation translation. We show that all three translations are correct for the class of bounded hoops and that the Kolmogorov translation is correct for the class of all pocrims. We exhibit classes of pocrims for which the Gentzen translation is correct but the Glivenko translation is not, and vice versa.

16:30-18:00 Session 80J: Contributed Talks (LATD)
Location: MB, Hörsaal 15
16:30
Ordinal foundation for Lukasiewicz semantics

ABSTRACT. We investigate the relationship between the standard point-wise truth-value semantics for Lukasiewicz infinite-valued logic and an ordering-based semantics consisting of pairwise evaluations. The key step is to lay down sufficient conditions for a binary relation over the set of sentences to represent a real-valued Lukasiewicz valuation. We discuss the desirability of these conditions as properties of the relation intuitively interpreted as `being less true'.

17:00
Advances on elementary equivalence in model theory of fuzzy logics
SPEAKER: unknown

ABSTRACT. Elementary equivalence is a central notion in classical model theory that allows one to classify first-order structures. It has received several useful characterizations, among others in terms of back-and-forth systems, and has yielded many important results such as the general forms of the Löwenheim-Skolem theorems. In the context of fuzzy predicate logics, the notion of elementarily equivalent structures was defined by Hájek and Cintula. They presented a characterization of conservative extensions of theories using the elementary equivalence relation. A related approach is the one presented by Novák, Perfilieva and Močkoř, where models can be elementarily equivalent to a degree d. A few recent papers have contributed to the development of the model theory of predicate fuzzy logics. However, the understanding of the central notion of elementary equivalence is still far behind its counterpart in classical model theory. The present contribution intends to provide some advances towards this goal.

17:30
Trakhtenbrot theorem and first-order axiomatic extensions of MTL

ABSTRACT. In 1950, B.A. Trakhtenbrot showed that the set of first-order tautologies associated to finite models is not recursively enumerable. In 1999, P. Hájek generalized this result to the first-order versions of Łukasiewicz, Gödel and Product logics. In this talk we extend the analysis to the first-order axiomatic extensions of MTL. Our main result is the following: let L be an axiomatic extension of MTL whose corresponding variety is generated by a chain. Then, if L is an extension of BL, of SMTL or of WNM, for every generic L-chain $\mathcal{A}$ the set fTAUT$^\mathcal{A}_{\forall}$ (the set of first-order tautologies associated to the finite $\mathcal{A}$-models) is $\Pi_1$-complete. More generally, for every axiomatic extension L of MTL there is no L-chain $\mathcal{A}$ such that L$\forall$ is complete w.r.t. the class of finite $\mathcal{A}$-models. We obtain negative results also if we expand the language with the $\Delta$ operator.

16:30-17:30 Session 80K: Invited talk (EC2)
Location: FH, Seminarraum 101C
16:30
Verifiable Concurrent Systems Programming: A Garbage Collector Case Study

ABSTRACT. We describe a proof system for concurrent programs built on top of the Boogie framework. The proof system combines key features of several proof techniques, including separation logic, the Owicki-Gries and rely-guarantee techniques, and movers, reduction, and abstraction in the style of our own earlier work on the QED proof system.

We have been using a concurrent mark-sweep garbage collector as a case study to drive the proof system and tool design. The top-level functional specification for the garbage collector is that it be invisible to application threads. We explain how we refine this top-level specification down to a concurrent implementation described at the level of individual memory accesses. We highlight challenges we encountered while carrying out a proof spanning such a large abstraction gap, and how features of our proof system help us address them.

16:30-18:00 Session 80L: Contributed Talks: Observation and Justification (PSC)
Location: FH, CAD 1
16:30
Observational Equivalence for Behavioural Differential Equations
SPEAKER: unknown

ABSTRACT. Recently, Abel and others have proposed a type system in which coinductive types are programmed by means of observations. We investigate category-theoretical and topological properties of an observational equivalence on terms, which arises naturally in this type system. Moreover, we introduce a proof system that allows us to verify proofs of observational equivalence.

17:15
Multi-agent justification logic
SPEAKER: unknown

ABSTRACT. We introduce the explicit counterparts and prove a realization theorem for bimodal logics describing the communication of two agents, in the case when the knowledge (belief) of both agents is described by one of the logics $\mathsf{K}$, $\mathsf{T}$ or $\mathsf{K4}$. The agents may interact by either adopting or verifying each other's knowledge; interaction can be either one-way or two-way.

16:30-18:00 Session 80N: iPrA Regular talks (iPRA)
Location: FH, Seminarraum 101A
16:30
On Enumerating Query Plans via Interpolants from Tableau Proofs (abstract)
SPEAKER: unknown

ABSTRACT. (see attached abstract)

17:00
Application Patterns of Projection/Forgetting

ABSTRACT. It is useful to consider projection and forgetting as second-order operators with elimination as the associated computational processing: they can be nested, and further second-order operators can be defined in terms of them. The presentation outlines how a variety of properties, concepts and tasks in knowledge representation and reasoning can be expressed in this framework.

17:30
Generalized Craig Interpolation for Stochastic Satisfiability Modulo Theory Problems
SPEAKER: Ahmed Mahdi

ABSTRACT. Craig interpolation is used in solving bounded model checking (BMC) problems in SAT and SMT.

In 2012, Teige et al. succeeded in computing interpolants for stochastic Boolean satisfiability (SSAT) problems.

This was achieved by defining a sound and complete resolution calculus for SSAT and computing Craig interpolants for (A and B and not S_{A,B}), where not S_{A,B} represents a conflict clause of A and B. Consequently, one obtains two interpolants for the formula \Phi = A and B: one interpolating A and the second interpolating B.

As a result, an approach to efficiently solving stochastic bounded model checking problems, e.g. for MDPs, was developed using SSAT interpolation.

In this work we present a notion of Craig interpolation for the stochastic satisfiability modulo theories (SSMT) framework. An SSMT formula consists of free and quantified variables, where the latter are bound to discrete domains by existential or randomized quantifiers. In the case of a randomized variable, the variable is assigned one of its domain values with a certain probability, where the probabilities for each variable sum to one. The significance of SSMT comes from the fact that one can encode probabilistic hybrid automata problems as SSMT formulae.

Our approach to computing Craig interpolants for SSMT is as follows: we assume that arithmetic constraints can be relaxed to simple bounds by using interval constraint propagation (ICP) as implemented in the iSAT tool. After that, we define a sound and relatively complete resolution calculus for SSMT formulae, as done in the SSAT case, such that if a conflict clause is derived with a certain probability, then that probability is a safe upper bound on the real satisfaction probability of the SSMT formula. Then we extend Pudlák's rules in such a way that simple bounds are treated as literals and, whenever a conflict clause is derived, we obtain a generalized Craig interpolant for the whole formula (A and B and not S_{A,B}).

Finally, we adapt BMC to the probabilistic case (PBMC) by encoding step-bounded reachability problems as SSMT formulae, where generalized Craig interpolants are used to compute an upper bound on the maximum probability of reaching the target states.

16:30-19:00 Session 80P: Perspectives - Invited Perspectives (CICLOPS-WLPE and PLP)
Location: FH, Hörsaal 7
16:30
A unified approach to generative and discriminative modeling
SPEAKER: Taisuke Sato
17:00
Inference and learning for PLP

ABSTRACT. This talk will discuss recent results in inference and learning for probabilistic logic programs under the distribution semantics and will propose directions for future work.

17:30
TBA
18:00
TBA
SPEAKER: Wannes Meert
18:30
Discussion on the implementation of PLP systems.
SPEAKER: P.L.P. Chairs
16:30-17:45 Session 80Q (VEMDP)
Location: FH, Seminarraum 136
16:30
Boolean modelling and formal verification of tiered-rate chemical reaction networks
16:55
Formal mean field theories for graph rewriting
SPEAKER: Vincent Danos
17:20
Rule based modelling of DNA repair
SPEAKER: Jean Krivine
16:30-18:30 Session 80R: Contributed Talks: Belief Change (NMR)
Location: EI, EI 9
16:30
Inference in the FO(C) Modelling Language
SPEAKER: Bart Bogaerts

ABSTRACT. Recently, FO(C), the integration of C-Log with classical logic, was introduced as a knowledge representation language. Up to this point, no systems exist that perform inference on FO(C), and very little is known about properties of inference in FO(C). In this paper, we study both of the above problems. We define normal forms for FO(C), one of which corresponds to FO(ID). We define transformations between these normal forms, and show that, using these transformations, several inference tasks for FO(C) can be reduced to inference tasks for FO(ID), for which solvers exist. We implemented a prototype of this transformation, and thus present the first system to perform inference in FO(C). We also provide results about the complexity of reasoning in FO(C).

17:00
FO(C) and Related Modelling Paradigms
SPEAKER: Bart Bogaerts

ABSTRACT. Recently, C-Log was introduced as a language for modelling causal processes. Its formal semantics has been defined together with introductory examples, but the study of this language is far from finished. In this paper, we compare C-Log to other declarative modelling languages. More specifically, we compare it to first-order logic (FO), and argue that C-Log and FO are orthogonal and that their integration, FO(C), is a knowledge representation language that allows for clear and succinct models. We compare FO(C) to E-disjunctive logic programming with the stable semantics, and define a fragment on which both semantics coincide. Furthermore, we discuss object-creation in FO(C), relating it to mathematics, business rules systems, and database systems.

17:30
Belief merging within fragments of propositional logic
SPEAKER: unknown

ABSTRACT. Recently, belief change within the framework of fragments of propositional logic has gained increasing attention. Previous works focused on belief contraction and belief revision in the Horn fragment. However, the problem of belief merging within fragments of propositional logic has been neglected so far. This paper presents a general approach to defining new merging operators, derived from existing ones, such that the result of merging remains in the fragment under consideration. Our approach is not limited to the Horn fragment but is applicable to any fragment of propositional logic characterized by a closure property on the sets of models of its formulae. We study the logical properties of the proposed operators in terms of satisfaction of merging postulates, considering in particular distance-based merging operators for the Horn and Krom fragments.

18:00
Belief Revision and Trust
SPEAKER: Aaron Hunter

ABSTRACT. Belief revision is the process in which an agent incorporates a new piece of information together with a pre-existing set of beliefs. When the new information comes in the form of a report from another agent, then it is clear that we must first determine whether or not that agent should be trusted. In this paper, we provide a formal approach to modeling trust as a pre-processing step before belief revision. We emphasize that trust is not simply a relation between agents; the trust that one agent has in another is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before performing belief revision. In this manner, we incorporate only the part of a report that falls under the perceived domain of expertise of the reporting agent. Unfortunately, state partitions based on expertise do not allow us to compare the relative strength of trust held with respect to different agents. To address this problem, we introduce pseudometrics over states to represent differing degrees of trust. This allows us to incorporate simultaneous reports from multiple agents in a way that ensures the most trusted reports will be believed.
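
A hedged Python reconstruction of the pre-processing step (the revision operator and example are invented; the paper's construction is more general): trust in an agent is a partition of states, a report is weakened to the union of the partition cells it meets, and only then is revision performed.

    def relativize(report, partition):
        # The reporter cannot distinguish states within a cell, so the
        # report is closed under the partition before it is used.
        return set().union(*(cell for cell in partition if cell & report))

    def revise(beliefs, report):
        # A crude stand-in for a revision operator: keep consistent part,
        # otherwise adopt the report wholesale.
        return beliefs & report or report

    states = {1, 2, 3, 4}
    partition = [{1, 2}, {3, 4}]    # reporter only distinguishes the pairs
    print(revise({1, 3}, relativize({1}, partition)))   # {1}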

16:50-18:30 Session 81: Contributed Talks (DL)
Location: EI, EI 7
16:50
Goal-Directed Tracing of Inferences in EL Ontologies
SPEAKER: unknown

ABSTRACT. EL is a family of tractable Description Logics (DLs) that is the basis of the OWL 2 EL profile. Unlike for many expressive DLs, reasoning in EL can be performed by computing a deductively-closed set of logical consequences of some specific form. In some ontology-based applications, e.g., for ontology debugging, knowing the logical consequences of the ontology axioms is often not sufficient. The user also needs to know from which axioms and how the consequences were derived. Although it is possible to keep track of all inferences applied during reasoning, this is usually not done in practice, to avoid the overhead. In this paper, we present a goal-directed method that can generate inferences for selected consequences in the deductive closure without re-applying all rules from scratch. We provide an empirical evaluation demonstrating that the method is fast and economical for large EL ontologies. Although the main benefits are demonstrated for EL reasoning, the method can be easily extended to other procedures based on deductive closure computation using fixed sets of rules.
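
The underlying closure computation can be pictured with a small Python sketch (our own illustration, showing only the transitivity rule for atomic subsumptions; real EL calculi have more rules, and the paper's point is precisely to avoid recording every inference up front, which this naive version does): each derived subsumption records the premises that produced it, which is the information tracing needs.

    def saturate(axioms):
        closed, trace, todo = set(), {}, list(axioms)
        while todo:
            (a, b) = todo.pop()
            if (a, b) in closed:
                continue
            closed.add((a, b))
            for (c, d) in list(closed):
                if d == a:                      # C ⊑ A, A ⊑ B  gives  C ⊑ B
                    concl = (c, b)
                    if concl not in closed:
                        trace.setdefault(concl, ((c, a), (a, b)))
                        todo.append(concl)
                if b == c:                      # A ⊑ B, B ⊑ D  gives  A ⊑ D
                    concl = (a, d)
                    if concl not in closed:
                        trace.setdefault(concl, ((a, b), (b, d)))
                        todo.append(concl)
        return closed, trace

    closure, trace = saturate([("A", "B"), ("B", "C"), ("C", "D")])
    print(trace[("A", "D")])   # the premises from which A ⊑ D was derived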

17:15
Brave and Cautious Reasoning in EL

ABSTRACT. Developing and maintaining ontologies is an expensive and error-prone task. After an error is detected, users may have to wait for a long time before a corrected version of the ontology is available. In the meantime, one might still want to derive meaningful knowledge from the ontology while avoiding the known errors. We introduce brave and cautious reasoning and show that both are hard for EL. We then propose methods for improving the reasoning times by precompiling information about the known errors and by using proof-theoretic techniques for computing justifications. A prototypical implementation shows that our approach is feasible for large ontologies used in practice.

17:40
Matching with respect to general concept inclusions in the Description Logic EL
SPEAKER: unknown

ABSTRACT. Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics almost 20 years ago, motivated by applications in the Classic system. For the Description Logic EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we solve the general case in this paper: we show that matching in EL w.r.t. general TBoxes is NP-complete by introducing a goal-oriented matching algorithm that uses nondeterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem.

18:05
Contextualized Knowledge Repositories with Justifiable Exceptions
SPEAKER: Loris Bozzato

ABSTRACT. Representation of context dependent knowledge in the Semantic Web has been recognized as a relevant issue: as a consequence, a number of logic based formalisms have been proposed in this regard. In response to this need, in previous works, we presented the description logic-based Contextualized Knowledge Repository (CKR) framework. Starting from this point, the first contribution of the paper is an extension of CKR with the possibility to represent defaults in context dependent axioms and a translation of extended CKRs to datalog programs with negation under answer set semantics. The translation generates datalog programs which are sound and complete w.r.t. instance checking in CKRs. Exploiting this result, we have developed as a second contribution a prototype implementation that compiles a CKR based on OWL2RL to a datalog program. Finally, we compare our approach with major non-monotonic formalisms for description logics and contextual knowledge representation.

17:20-18:10 Session 82: Technical Session (LFMTP)
Location: FH, Zeichensaal 1
17:20
Some constructions on ω-groupoids
SPEAKER: unknown

ABSTRACT. Weak ω-groupoids are the higher dimensional generalisation of setoids and are an essential ingredient of the constructive semantics of Homotopy Type Theory [10]. Following up on our previous formalisation [3] and Brunerie’s notes [5], we present a new formalisation of the syntax of weak ω-groupoids in Agda using heterogeneous equality. We show how to recover basic constructions on ω-groupoids using suspension and replacement. In particular we show that any type forms a groupoid and we outline how to derive higher dimensional composition. We present a possible semantics using globular sets and discuss the issues which arise when using globular types instead.

17:45
Gentle Formalisation of Stoughton’s Lambda Calculus Substitution
SPEAKER: unknown

ABSTRACT. In [1], Allen Stoughton proposed a notion of substitution for the Lambda calculus formulated in its original syntax with only one sort of symbols (names) for variables, without identifying $\alpha$-convertible terms. In this formulation, the action of substitution on terms is defined by simple structural recursion, and an interesting theory arises concerning e.g. $\alpha$-conversion. In this paper we present a formalisation of Stoughton's development in Constructive Type Theory using the language Agda, up to the Substitution Lemma for $\alpha$-conversion. The code obtained is quite concise and we are able to formulate some improvements over the original presentation. For instance, our definition of $\alpha$-conversion is just syntax-directed, but we are nevertheless able to prove easily that it gives rise to an equivalence relation, whereas in [1] the latter was included as part of the definition. As a result of this work we are inclined to assert that Stoughton's is the right way to formulate the Lambda calculus in its original, conventional syntax, and that this formulation is amenable to fully formal treatment.

[1] A. Stoughton. Substitution revisited. Theor. Comput. Sci., 59:317-325, 1988.
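
To convey the flavour of defining substitution by plain structural recursion on raw terms, here is a rough Python sketch (not Stoughton's exact definition from [1]): simultaneous substitutions are total maps on variables, and a fresh name is chosen at each binder.

    # Terms: ("var", x), ("app", M, N), ("lam", x, M).
    def free_vars(t):
        tag = t[0]
        if tag == "var": return {t[1]}
        if tag == "app": return free_vars(t[1]) | free_vars(t[2])
        return free_vars(t[2]) - {t[1]}

    def fresh(avoid):
        i = 0
        while f"v{i}" in avoid: i += 1
        return f"v{i}"

    def subst(t, sigma):
        tag = t[0]
        if tag == "var":
            return sigma.get(t[1], t)        # total map, identity elsewhere
        if tag == "app":
            return ("app", subst(t[1], sigma), subst(t[2], sigma))
        x, body = t[1], t[2]
        # The new binder must avoid capturing anything substituted into body.
        avoid = set()
        for y in free_vars(body) - {x}:
            avoid |= free_vars(sigma.get(y, ("var", y)))
        z = fresh(avoid)
        sigma2 = dict(sigma); sigma2[x] = ("var", z)
        return ("lam", z, subst(body, sigma2))

    # (lam x. x y)[y := x] renames the binder to avoid capture:
    print(subst(("lam", "x", ("app", ("var", "x"), ("var", "y"))),
                {"y": ("var", "x")}))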

17:45-18:30 Session 83: Poster and demo session (VEMDP)
Location: FH, Seminarraum 136
17:45
DyNAMiC Workbench: Automated design and verification of dynamic DNA nanosystems
SPEAKER: Casey Grun
17:45
An Aspect Oriented Design and Modelling Framework for Synthetic Biology
SPEAKER: Philip Boeing
17:45
Guiding the development of DNA walker systems to guarantee reliability and performance
17:45
Software tools for analysing the chemical master equation
SPEAKER: Neil Dalchau
17:45
Identification of Components and Modular Verification of Biochemical Pathways
SPEAKER: Paolo Milazzo