VSL 2014: VIENNA SUMMER OF LOGIC 2014
PROGRAM FOR SATURDAY, JULY 19TH, 2014

08:30-09:30 Session 105: Invited Talk (Scott) (LATD and LC)
Location: MB, Festsaal
08:30
Geometry without points
SPEAKER: Dana Scott

ABSTRACT. Ever since the compilers of Euclid's Elements gave the "definitions" that "a point is that which has no part" and "a line is breadthless length", philosophers and mathematicians have worried that the basic concepts of geometry are too abstract and too idealized. In the 20th century, writers such as Husserl, Lesniewski, Whitehead, Tarski, Blumenthal, and von Neumann proposed "pointless" approaches.

A problem more recent authors have emphasized is that there are difficulties in having a rich theory of a part-whole relationship without atoms while providing both size and geometric dimension as part of the theory.

A solution will be proposed using the Boolean algebra of measurable sets modulo null sets along with relations derived from the group of rigid motions in Euclidean n-space.

(This is a preliminary report on ongoing joint work with Tamar Lando, Columbia University.)

08:45-10:15 Session 106A: FLoC Panel (joint session of 10 meetings)
Location: FH, Hörsaal 1
08:45
FLoC Panel: Computational Complexity and Logic: Theory vs. Experiments
SPEAKER: unknown

ABSTRACT. The success of SAT solving, Answer Set Programming, and Model Checking has changed our view of computational complexity and (un)decidability. Program committees are increasingly discussing the value of theoretical and empirical complexity evaluations. What is the intellectual value of a completeness result? Which standards do we apply to reproducibility of experiments? What is the role of competitions? What are the minimal requirements for an experimental proof of concept?

08:45-10:15 Session 106B: Contributed Talks: Causality and Inference (NMR)
Location: EI, EI 9
08:45
Tableau vs. Sequent Calculi for Minimal Entailment
SPEAKER: unknown

ABSTRACT. In this paper we compare two proof systems for minimal entailment: a tableau system OTAB and a sequent calculus MLK, both developed by Olivetti (1992). Our main result shows that OTAB-proofs can be efficiently translated into MLK-proofs, i.e., MLK p-simulates OTAB. The simulation is technically very involved and answers an open question posed by Olivetti (1992) on the relation between the two calculi. We also show that the two systems are exponentially separated, i.e., there are formulas which have polynomial-size MLK-proofs, but require exponential-size OTAB-proofs.

09:15
Revisiting Chase Termination for Existential Rules and their Extension to Nonmonotonic Negation
SPEAKER: unknown

ABSTRACT. Existential rules have been proposed for representing ontological knowledge, specifically in the context of Ontology-Based Data Access. Entailment with existential rules is undecidable. We focus in this paper on conditions that ensure the termination of a breadth-first forward chaining algorithm known as the chase. Several variants of the chase have been proposed. In the first part of this paper, we propose a new tool that allows us to extend existing acyclicity conditions ensuring chase termination, while keeping good complexity properties. In the second part, we study the extension to existential rules with nonmonotonic negation under stable model semantics, discuss the relevance of the chase variants for these rules, and further extend acyclicity results obtained in the positive case.
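
To make the shape of the algorithm concrete, here is a toy breadth-first chase for single-atom existential rules, written in Python. This is an illustrative sketch of ours, not the paper's formalism: real chase variants differ in when a rule is considered applicable, and the bounded number of rounds stands in for the termination guarantees that acyclicity conditions provide.

    from itertools import count

    fresh = count()  # supply of labelled nulls for existential variables

    def match(atom, fact, sub):
        """Extend substitution sub so that the body atom maps onto fact."""
        if atom[0] != fact[0] or len(atom) != len(fact):
            return None
        sub = dict(sub)
        for t, c in zip(atom[1:], fact[1:]):
            if t.isupper():                          # an uppercase rule variable
                if sub.setdefault(t, c) != c:
                    return None
            elif t != c:                             # a constant must match exactly
                return None
        return sub

    def chase(facts, rules, max_rounds=10):
        facts, fired = set(facts), set()
        for _ in range(max_rounds):                  # breadth-first rounds
            new = set()
            for i, (body, head) in enumerate(rules):
                for fact in sorted(facts):
                    if (i, fact) in fired:           # each trigger fires once
                        continue
                    sub = match(body, fact, {})
                    if sub is None:
                        continue
                    fired.add((i, fact))
                    new.add((head[0],) + tuple(
                        sub[t] if t in sub else      # frontier variable
                        "_n%d" % next(fresh) if t.isupper() else t
                        for t in head[1:]))          # fresh null if existential
            if new <= facts:
                return facts                         # no new facts: chase terminates
            facts |= new
        return facts                                 # cut off: chase may not terminate

    # person(X) -> exists Z. parent(X,Z): one application, then termination.
    # Adding parent(X,Y) -> person(Y) would make the chase run forever.
    print(chase({("person", "alice")}, [(("person", "X"), ("parent", "X", "Z"))]))

Acyclicity conditions of the kind studied in the paper are precisely what guarantees that a cutoff like max_rounds is never needed.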

09:45
Causality in Databases: The Diagnosis and Repair Connections
SPEAKER: unknown

ABSTRACT. In this work we establish and investigate the connections between causality for query answers in databases, database repairs wrt. denial constraints, and model-based diagnosis. The first two are relatively new problems in databases, and the third one is an established subject in knowledge representation. We show how to obtain database repairs from causes and the other way around. The vast body of research on database repairs can be applied to the newer problem of determining actual causes for query answers. By formulating a causality problem as a diagnosis problem, we manage to characterize causes in terms of the system’s diagnoses.

09:35-10:15 Session 107A: Contributed talks A4 (LC)
Location: MB, Aufbaulabor
09:35
Provability logics and proof-theoretic ordinals

ABSTRACT. A recent approach by Beklemishev uses provability logics to represent reflection principles in formal theories and, in turn, uses said principles to calibrate a theory's consistency strength. There are several benefits to this approach, including semi-finitary consistency proofs and independent combinatorial statements.

A key ingredient is Japaridze's polymodal provability logic GLPω. In order to study stronger theories one needs to go beyond GLPω to the logics GLPΛ, where Λ is an arbitrary ordinal. These logics have, for each ordinal ξ<Λ, a modality [ξ]. As it turns out, proof-theoretic ordinals below Γ0 may be represented in the closed fragment of GLPΛ (i.e., the fragment in which no propositional variables occur), and in particular by the 'worms' therein. Worms are iterated consistency statements of the form ‹ξ0›...‹ξn›T and are well-ordered by their consistency strength.

We will present a calculus for computing the order types of worms and compare the resulting ordinal representation system with standard systems based on Veblen functions. We will also discuss how larger ordinals arising from impredicative proof theory may be represented within provability logics, by interpreting infinitary inference rules which are stronger than the ω-rule.

09:55
On Uniform Interpolation for the Guarded Fragment

ABSTRACT. As is well known, Craig Interpolation fails for the Guarded Fragment GF. In this paper we show that we can recover an even stronger form of it, the Uniform Interpolation Property, by taking the modal character of GF more seriously.

09:35-10:15 Session 107B: Contributed talks B4 (LC)
Location: MB, Hörsaal 14A
09:35
Degree spectra of sequences of structures

ABSTRACT. There is a close parallel between classical computability and effective definability on abstract structures. For example, the $\Sigma^0_{n+1}$ sets correspond to the sets definable by means of computable infinitary $\Sigma_{n+1}$ formulae on a structure A. In his last paper, Soskov gives an analogue for abstract structures of Ash's reducibilities between sets of natural numbers and sequences of sets of natural numbers. He shows that for every sequence of structures A, there exists a structure M such that the sequences that are omega-enumeration reducible to A coincide with the sequences c.e. in M. He generalizes the method of Marker's extensions for a sequence of structures. Soskov demonstrates that for any sequence of structures its Marker's extension codes the elements of the sequence so that the n-th structure of the sequence appears positively at the n-th level of the definability hierarchy. The results provide a general method, given a sequence of structures, to construct a structure with n-th jump spectrum contained in the spectrum of the n-th member of the sequence. As an application, a structure is obtained whose spectrum consists of the Turing degrees that are non-low_n for all n. Soskov also shows an embedding of the omega-enumeration degrees into the Muchnik degrees generated by spectra of structures.

We apply these results and generalize the notion of degree spectrum with respect to an infinite sequence of structures A in two ways: as Joint spectra of A and as Relative spectra of A. We study the set of all lower bounds of the generalized notions in terms of enumeration and omega-enumeration reducibility.

09:55
Computability in generic extensions
SPEAKER: Noah Schweber

ABSTRACT. In this talk we will explore connections between computable structure theory and generic extensions of the set-theoretic universe, $V$. Recall the definition of {\em Muchnik reducibility} for countable structures: $\mathcal{A}\le_w\mathcal{B}$ if every copy of $\mathcal{B}$ computes a copy of $\mathcal{A}$. We will begin by introducing the notion of {\em generic Muchnik reducibility}, $\le_w^*$: we say $\mathcal{A}\le_w^*\mathcal{B}$ for uncountable structures $\mathcal{A}, \mathcal{B}$ if $\mathcal{A}\le_w\mathcal{B}$ in some (=every) generic extension $V[G]$ in which $\mathcal{A}$ and $\mathcal{B}$ are both countable. We will discuss the basic properties and give some examples of generic Muchnik (non-)reducibilities among natural uncountable structures.

We will then turn our attention to {\em generic presentability}. Roughly speaking, an object $\mathcal{X}$ is generically presentable if a ``copy'' of $\mathcal{X}$, up to the appropriate equivalence relation, exists in every generic extension of the universe by some fixed forcing notion. Solovay \cite{Sol70} showed that all generically presentable {\em sets} (up to equality) already exist in the ground model; we will investigate the situation for {\em countable structures} (up to isomorphism) and {\em infinitary formulas} (up to semantic equivalence). We will present two Solovay-type results (and some consequences): (1) any structure generically presentable by a forcing not making $\omega_2$ countable has a copy in $V$, and (2) (under $CH$) any structure generically presentable by a forcing not collapsing $\omega_1$ has a {\em countable} copy in $V$. Time permitting, we will discuss a contrasting result coming from work by Laskowski and Shelah \cite{SL93}.

This is joint work with Julia Knight and Antonio Montalban \cite{KMS14}.

09:35-10:15 Session 107C: Contributed talks C4 (LC)
Location: MB, GUT Schulungsraum
09:35
Forcing with finite conditions and preserving CH

ABSTRACT. In recent years there has been a second boom of the technique of forcing with side conditions (see for instance the recent works of Aspero-Mota, Krueger and Neeman describing three different perspectives of this technique). The first boom took place in the 1980s when Todorcevic discovered a method of forcing in which elementary substructures are included in the conditions of a forcing poset to ensure that the forcing poset preserves cardinals. More than twenty years later, Friedman and Mitchell independently took the first step in generalizing the method from adding small (of size at most the first uncountable cardinal) generic objects to adding larger objects, by defining forcing posets with finite conditions for adding a club subset of the second uncountable cardinal. However, neither of these results shows how to force (with side conditions together with another finite set of objects) the existence of such a large object together with the continuum being small. In this talk we will discuss new results in this area.

09:55
Reflection principle of list-chromatic number of graphs

ABSTRACT. Let $G=\langle V, \mathcal{E} \rangle$ be a simple graph, that is, $V$ is a non-empty set of vertices and $\mathcal E \subseteq [V]^2$ is a set of edges. The \emph{list chromatic number of $G$}, $\mathrm{List}(G)$, is the minimal (finite or infinite) cardinal $\kappa$ such that for every function $F$ on $V$ with $|F(x)|=\kappa$ for $x \in V$, there is a function $f$ on $V$ satisfying that $f(x) \in F(x)$ and if $x \mathcal E y$ then $f(x) \neq f(y)$. The \emph{coloring number of $G$}, $\mathrm{Col}(G)$, is the minimal (finite or infinite) cardinal $\kappa$ such that there is a well-ordering $\triangleleft$ on $V$ such that $|\{y \in V:y \triangleleft x, y \mathcal E x\}|<\kappa$ for every $x \in V$. It is known that $\mathrm{List}(G) \le \mathrm{Col}(G) \le |V|$.
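
An illustrative example (ours, not part of the abstract): for the finite complete graph $K_n$, every well-ordering gives the last vertex $n-1$ earlier neighbours, so $\mathrm{Col}(K_n)=n$; assigning all vertices one common list of $n-1$ colours rules out a proper choice function, while lists of size $n$ always admit a greedy choice along any well-ordering, so $\mathrm{List}(K_n)=n$. Hence
\[ \mathrm{List}(K_n)=\mathrm{Col}(K_n)=|V|=n, \]
witnessing that both inequalities above may be equalities.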

The \emph{reflection principle of coloring number of graphs}, $\mathrm{RP(Col)}$, is the assertion that every graph with uncountable coloring number has a subgraph of size $\aleph_1$ with uncountable coloring number. This principle was studied in \cite{F1} and \cite{F2}, and it turned out that this principle is a very strong large cardinal property. On the other hand, Komj\'ath \cite{K} showed the consistency of the statement that $\mathrm{Col}(G)=\mathrm{List}(G)$ for every graph $G$ with infinite coloring number. Using his result, Fuchino and Sakai \cite{FS} proved that the standard model with $\mathrm{RP(Col)}$ also satisfies the \emph{reflection principle of list-chromatic number of graphs}, $\mathrm{RP(List)}$, which asserts that every graph with uncountable list-chromatic number has a subgraph of size $\aleph_1$ with uncountable list-chromatic number. They also constructed a model in which $\mathrm{RP(Col)}$ holds but $\mathrm{RP(List)}$ fails. These results suggest the natural question: Does $\mathrm{RP(List)}$ imply $\mathrm{RP(Col)}$?

In this talk, we prove the following consistency results, which show that $\mathrm{RP(List)}$ does not imply $\mathrm{RP(Col)}$, and the bounded version of $\mathrm{RP(List)}$ is not a large cardinal property: \begin{enumerate} \item Suppose GCH. Let $\lambda$ be a cardinal $>\omega_1$. Then there is a poset which preserves all cardinals, and forces that ``$\mathrm{RP(List)}$ restricted to graphs of size $\le \lambda$ holds''. \item Relative to a certain large cardinal assumption, it is consistent that $\mathrm{RP(List)}$ holds but $\mathrm{RP(Col)}$ fails. \end{enumerate}

\begin{thebibliography}{10} \bibitem{F1} {\scshape S.~Fuchino}, {\itshape Remarks on the coloring number of graphs}, {\bfseries\itshape RIMS K\^oky\^uroku}, vol.~1754 (2011), pp.~6--16.

\bibitem{F2} {\scshape S.~Fuchino, H.~Sakai, L.~Soukup, T.~Usuba}, {\itshape More about the Fodor-type reflection principle}, {\bfseries\itshape preprint}.

\bibitem{FS} {\scshape S.~Fuchino, H.~Sakai}, {\itshape On reflection and non-reflection of countable list-chromatic number of graphs}, {\bfseries\itshape RIMS K\^oky\^uroku}, vol.1790 (2012), pp.~31--44.

\bibitem{K} {\scshape P.~Komj\'ath}, {\itshape The list-chromatic number of infinite graphs}, {\bfseries\itshape Israel Journal of Mathematics}, vol.~196 (2013), no.~1, pp.~67--94. \end{thebibliography}

09:35-10:15 Session 107D: Contributed talks D4 (LC)
Location: MB, Seminarraum Kuppel
09:35
Justifying proof-theoretic reflection principles

ABSTRACT. It can be argued that by accepting the axioms of a theory as formally expressing our intuitive grasp of a mathematical structure—e.g. PA for arithmetic—we thereby implicitly commit ourselves to accepting certain other statements that are not formally provable from the axioms because of the incompleteness phenomena—such as the statement expressing the soundness of the axioms—and therefore to a fundamentally stronger theory. It follows that any formal theory that aims at capturing our pre-theoretic understanding of the natural numbers structure must admit of extensions; the question then arises as to how the axioms of arithmetic should be extended in order to construct a formal system that allows us to talk rigorously about the scope and limits of our arithmetical knowledge.

The process of recognising the soundness of the axioms is conceived of as a process of reflection on the given theory and the kinds of methods of proof that we recognise as correct. For this reason, the addition of proof-theoretic reflection principles as new axioms can be thought of as representing a natural way of extending PA in order to capture arithmetical knowledge.

I will distinguish two main strategies to justify the addition of reflection principles to be found in the literature (via transfinite induction, and via our truth-theoretic commitments), and I will argue that, contrary to these approaches, proof-theoretic reflection should be justified on the same fundamental grounds as our acceptance of the axioms of the initial system (see e.g. Feferman [1962] and [1988]). Furthermore, I will argue that on these grounds only uniform reflection is justified.

09:55
C. I. Lewis' Influence on the Early Work of Emil Post
SPEAKER: Mate Szabo

ABSTRACT. Post's paper [2] from 1921 contains the first published proof of the completeness of the propositional subsystem of Principia Mathematica and a decision procedure for it. His unsuccessful attempts in the following years to extend his results to the whole of Principia Mathematica led him to anticipate the Incompleteness and Undecidability results of Gödel and Turing [3]. Being deeply influenced by Lewis' 'Heterodox view' [1], Post considered logical systems as "purely formal developments" to "reach the highest generality possible." This "preoccupation with the outward forms of symbolic expressions" allowed, according to Post, for "greater freedom of method and technique." It made his developments recognizably different from the others, but it was in part "perhaps responsible for the fragmentary nature of [his] development." Moreover, Post views the logical process as "Essentially Creative"; that makes "the mathematician much more than a kind of clever being who can do quickly what a machine could." Post interprets this conclusion as being contrary to Lewis' view. In my talk I will summarize Lewis' 'Heterodox view' and make transparent his influence on Post's early work. At the end I will show that Post's interpretation of his conclusion is not in conflict with Lewis' views as expressed in [1].

[1] C. I. Lewis, A Survey of Symbolic Logic, University of California Press, 1918.

[2] E. Post, Introduction to a General Theory of Elementary Propositions, American Journal of Mathematics, vol.43 (1921), no.3, pp. 163-185.

[3] E. Post, Absolutely Unsolvable Problems and Relatively Undecidable Propositions - Account of an Anticipation, The Undecidable (Martin Davis, editor), Raven Press, Hewlett, 1965, pp. 340-433.

09:35-10:15 Session 107E: Contributed talks E4 (LC)
Location: MB, Hörsaal 14
09:35
Indiscernible extraction and Morley sequences

ABSTRACT. We present a new proof of the existence of Morley sequences in simple theories. We avoid using the Erd\H{o}s-Rado theorem and instead use Ramsey's theorem. The proof shows that the basic theory of forking in simple theories can be developed inside $\left< H ((2^{2^{|T|}})^+), \in \right>$ without using the axiom of replacement, answering a question of Grossberg, Iovino and Lessmann, as well as a question of Baldwin.

09:55
Consequences of the existence of ample generics and automorphism groups of homogeneous metric structures

ABSTRACT. A Polish group G has ample generics if the diagonal action of G on G^n by conjugation has a comeager orbit for every natural number n. The existence of ample generics has very strong consequences. Every Polish group $G$ with ample generics has the small index property (that is, every subgroup H of G with [G:H]<2^\omega is open), the automatic continuity property (that is, every homomorphism from G into a separable group is continuous), and uncountable cofinality for non-open subgroups (that is, every countable exhaustive chain of non-open subgroups of G is finite).

What is surprising is that all known examples of groups with ample generics are isomorphic to the automorphism group of some countable structure, and the question of whether there exists a Polish group with ample generics which is not of this form is still open. For instance, the isometry group of the Urysohn space Iso(U), the automorphism group of the measure algebra Aut(MA), and the unitary group U(l_2) have meager conjugacy classes. On the other hand, it is known that these groups share some of the consequences of the existence of ample generics. For example, U(l_2) has the automatic continuity property, while Aut(MA) has the automatic continuity property and the small index property.

Very recently, M. Sabok proposed a model-theoretic approach that sheds new light on the structure of these groups and, more generally, automorphism groups of certain classes of homogeneous metric structures. In particular, he formulated a general criterion for a homogeneous metric structure $X$ that implies that Aut(X) has the automatic continuity property, and he verified it for U, MALG, and l_2.

We propose a criterion that implies all the main consequences of the existence of ample generics: the small index property, the automatic continuity property, and uncountable cofinality for non-open subgroups, which suggests that it may be regarded as a counterpart of the notion of ample generics in the realm of homogeneous metric structures. We also verify it for U, MALG, and l_2, thus proving that the groups Iso(U), Aut(MA), U(l_2) satisfy these properties.

09:45-10:15 Session 108A: Contributed Talk (LATD)
Location: MB, Festsaal
09:45
A Semi-relevant, Paraconsistent Dual of {\L}ukasiewicz Logic
SPEAKER: Arnon Avron

ABSTRACT. We introduce proof systems and semantics for two paraconsistent extensions of the system $\mathbf{T}$ of Anderson and Belnap, and prove strong soundness, completeness, and decidability for both. The semantics of both systems is based on excluding just one element from the set of designated values. One of the systems has the variable sharing property, and so it is a relevant logic. The other is an extension of the first that may be viewed as a semi-relevant dual of {\L}ukasiewicz Logic.

09:45-10:15 Session 108B: Contributed Talk (LATD)
Location: MB, Hörsaal 15
09:45
Recent advances in the structural description of involutive FL$_e$-chains
SPEAKER: Sándor Jenei

ABSTRACT. For involutive FL$_e$-algebras, several construction and decomposition theorems will be presented: connected co-rotations, connected and disconnected co-rotation-annihilations, pistil-petal construction and decomposition, and involutive ordinal sums. The constructions provide us with a huge set of examples of negative or zero rank involutive FL$_e$-algebras.

10:15-10:45 Coffee Break
10:45-13:05 Session 109A: Software Verification (CAV)
Location: FH, Hörsaal 1
10:45
The Spirit of Ghost Code
SPEAKER: unknown

ABSTRACT. In the context of deductive program verification, ghost code is part of the program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without any observable difference in the program outcome. In particular, ghost data cannot participate in regular computations and ghost code cannot mutate regular data or diverge. The idea has existed in the folklore since the early notion of auxiliary variables, and it is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle, and few formalizations exist.

In this article, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3.
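
As a tiny illustration of the non-interference discipline (in Python rather than the paper's ML-style language; the function and the iteration bound are our own example, not taken from the paper): the ghost variable below only feeds an assertion and never flows into the regular result, so erasing the ghost lines cannot change what the program computes.

    def binary_search(xs, key):
        """Return the least index at which key could be inserted into sorted xs."""
        ghost_calls = 0              # ghost: iteration counter, specification only
        lo, hi = 0, len(xs)
        while lo < hi:
            ghost_calls += 1         # ghost update: no regular data depends on it
            mid = (lo + hi) // 2
            if xs[mid] < key:
                lo = mid + 1
            else:
                hi = mid
        # ghost assertion: at most log2(n) + 1 iterations ever happen
        assert ghost_calls <= len(xs).bit_length() + 1
        return lo                    # regular result: unchanged if ghosts are erased

    assert binary_search([1, 3, 5, 7], 5) == 2

A type system with effects, as in the paper, checks mechanically what we argued informally here: that no regular computation depends on ghost_calls.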

11:05
SMT-based Model Checking for Recursive Programs

ABSTRACT. We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both "over-" and "under-approximations" of procedure summaries. Under-approximations are used to propagate information over procedure calls. Over-approximations are used to block infeasible counterexamples and detect convergence. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample, if one exists. However, efficiency depends on an oracle for quantifier elimination (QE). For Boolean Programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For Linear Arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE "lazily". We use existing interpolation techniques to over-approximate QE and introduce "Model-Based Projection" to under-approximate QE. Empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state-of-the-art.

11:25
Property-Directed Shape Analysis

ABSTRACT. This paper addresses the problem of automatically generating quantified invariants for programs that manipulate singly and doubly linked-list data structures. Our algorithm is property-directed -- i.e., its choices are driven by the properties to be proven. The algorithm is able to establish that a correct program has no memory-safety violations -- i.e., there are no null-pointer dereferences, no memory leaks, etc. -- and that data-structure invariants are preserved. For programs with errors, the algorithm produces concrete counterexamples.

More broadly, the paper describes how to integrate IC3/PDR with full predicate abstraction. The analysis method is complete in the following sense: if an inductive invariant that proves that the program satisfies a given property is expressible as a Boolean combination of a given set of predicates, then the analysis will find such an invariant. To the best of our knowledge, this method represents the first shape-analysis algorithm that is capable of (i) reporting concrete counterexamples, or alternatively (ii) establishing that the predicates in use are not capable of proving the property in question.

11:45
Shape Analysis via Second-Order Bi-Abduction
SPEAKER: unknown

ABSTRACT. We present a new modular shape analysis that can synthesize heap memory specifications on a per-method basis. We rely on a second-order bi-abduction mechanism that can give interpretations to unknown shape predicates. There are several novel features in our shape analysis. Firstly, it is grounded on second-order bi-abduction. Secondly, we distinguish unknown pre-predicates in pre-conditions from unknown post-predicates in post-conditions, since the former may be strengthened while the latter may be weakened. Thirdly, we provide a new heap guard mechanism to support more precise preconditions for heap specification. Lastly, we formalise a set of derivation and normalization rules to give concise definitions for unknown predicates. Our approach has been proven sound and is implemented on top of an existing automated verification system. We show its versatility in synthesizing a wide range of intricate shape specifications.

12:05
ICE: A Robust Learning Framework for Synthesizing Invariants
SPEAKER: unknown

ABSTRACT. We introduce ICE learning, a new paradigm for using black-box learning to synthesize invariants, which learns from examples, counterexamples, and implications, and we show that it allows building honest teachers and convergent mechanisms for invariant synthesis. We observe that existing algorithms for black-box abstract interpretation can be interpreted as monotonic ICE learning algorithms, develop two classes of new non-monotonic ICE learning domains, one for learning numerical invariants for scalar variables and one for quantified invariants for arrays and dynamic lists, and establish convergence results for them. We implement these ICE learning algorithms in a prototype verifier and show that the robustness they bring is practical and effective.
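
A minimal sketch of the learner/teacher loop, under assumptions of ours: a one-variable loop, a bounded integer domain, and candidate invariants of the form x <= c. None of this is the paper's algorithm or implementation; it only shows why implication samples are the distinctive ingredient of ICE.

    # Program: x := 0; while x < 5: x := x + 2.  Safety: x <= 6 at the loop head.
    INIT = 0
    def guard(x): return x < 5
    def step(x):  return x + 2
    def bad(x):   return x > 6

    def teacher(c):
        """Check whether 'x <= c' is a safe inductive invariant; otherwise
        return one honest sample: an example, a counterexample, or an
        implication pair (no guessing which side of the pair is at fault)."""
        if INIT > c:
            return ("example", INIT)                  # initial state must be inside
        for x in range(-10, 11):                      # small bounded state space
            if x <= c and bad(x):
                return ("counterexample", x)          # bad state must be outside
            if x <= c and guard(x) and step(x) > c:
                return ("implication", (x, step(x)))  # must be closed under step
        return None                                   # candidate is adequate

    def learner():
        pos, neg, impl = {INIT}, set(), set()
        for c in range(0, 20):                        # enumerate candidate bounds
            if (any(p > c for p in pos) or any(n <= c for n in neg)
                    or any(a <= c < b for a, b in impl)):
                continue                              # inconsistent with samples
            verdict = teacher(c)
            if verdict is None:
                return c
            kind, datum = verdict
            {"example": pos, "counterexample": neg,
             "implication": impl}[kind].add(datum)
        return None

    print("learned invariant: x <=", learner())       # prints: learned invariant: x <= 6

Here the teacher is "honest" in the paper's sense: it answers with ground facts about the program rather than committing to a labelling of the implication pair.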

12:25
From Invariant Checking to Invariant Inference Using Randomized Search
SPEAKER: Rahul Sharma

ABSTRACT. We describe a general framework C2I for generating an invariant inference procedure from an invariant checking procedure. Given a checker and a language of possible invariants, C2I generates an inference procedure that iteratively invokes two phases. The search phase uses randomized search to discover candidate invariants and the validate phase uses the checker to either prove or refute that the candidate is an actual invariant. To demonstrate the applicability of C2I, we use it to generate inference procedures that prove safety properties of numerical programs, prove non-termination of numerical programs, prove functional specifications of array manipulating programs, prove safety properties of string manipulating programs, and prove functional specifications of heap manipulating programs that use linked list data structures. We show that performance is competitive with state-of-the-art invariant inference techniques for more specialized domains. To the best of our knowledge, this work is the first to demonstrate such a generally applicable invariant inference technique.

12:45
SMACK: Decoupling Source Language Details from Verifier Implementations

ABSTRACT. A major obstacle to putting software verification research into practice is the high cost of developing the infrastructure enabling the application of verification algorithms to actual production code, in all of its complexity. Handling an entire programming language is a huge endeavor which few researchers are willing to undertake, and even fewer could invest the effort to implement their verification algorithms for many source languages. To decouple the implementations of verification algorithms from the details of source languages, and enable rapid prototyping on production code, we have developed SMACK. At its core, SMACK is a translator from the LLVM compiler's popular intermediate representation (IR) into the Boogie intermediate verification language (IVL). Sourcing LLVM IR exploits an increasing number of compiler frontends, optimizations, and analyses. Targeting Boogie exploits a canonical platform which simplifies the implementation of algorithms for verification, model checking, and abstract interpretation. Our initial experience verifying C language programs is encouraging: SMACK is competitive in SV-COMP benchmarks, is able to translate large programs (100 KLOC), and is used in several verification research prototypes.

12:55
Syntax-Guided Synthesis Competition Results
SPEAKER: Rajeev Alur

10:45-13:00 Session 109B: Conference Opening / Technical Session - Languages and Logic (ICLP)
Location: FH, Hörsaal 8
10:45
Conference Opening
11:00
A Linear Logic Programming Language for Concurrent Programming over Graph Structures
SPEAKER: Flavio Cruz

ABSTRACT. We have designed a new logic programming language called LM (Linear Meld) for programming graph-based algorithms in a declarative fashion. Our language is based on linear logic, an expressive logical system where logical facts can be consumed. Because LM integrates both classical and linear logic, it tends to be more expressive than other logic programming languages. LM programs are naturally concurrent because facts are partitioned by nodes of a graph data structure. Computation is performed at the node level while communication happens between connected nodes. In this paper, we present the syntax and operational semantics of our language and illustrate its use through a number of examples.

11:30
Lifted Variable Elimination for Probabilistic Logic Programming

ABSTRACT. Lifted inference has been proposed for various probabilistic logical frameworks in order to compute the probability of queries in a time that is polynomial in the size of the domains of the logical variables rather than exponential. Although various authors have underlined its importance for probabilistic logic programming (PLP), lifted inference has been applied up to now only to relational languages outside of logic programming. In this paper we adapt Generalized Counting First Order Variable Elimination (GC-FOVE) to the problem of computing the probability of queries to probabilistic logic programs under the distribution semantics. In particular, we extend the Prolog Factor Language (PFL) to include two new types of factors that are needed for representing ProbLog programs. These factors take into account the existing causal independence relationships among some of the random variables and are managed by the extension to variable elimination proposed by Zhang and Poole for dealing with convergent variables and heterogeneous factors. Two new operators are added to GC-FOVE for treating heterogeneous factors. The resulting algorithm, called LP$^2$ for Lifted Probabilistic Logic Programming, has been implemented by modifying the PFL implementation of GC-FOVE and tested on three benchmarks for lifted inference. The comparison with PITA and ProbLog2 shows the potential of the approach.

12:00
Minimum Model Semantics for Extensional Higher-order Logic Programming with Negation
SPEAKER: unknown

ABSTRACT. Extensional higher-order logic programming has been introduced as a generalization of classical logic programming. An important characteristic of this paradigm is that it preserves all the well-known properties of traditional logic programming. In this paper we consider the semantics of negation in the context of the new paradigm. Using some recent results from non-monotonic fixed-point theory, we demonstrate that every higher-order logic program with negation has a unique minimum infinite-valued model. In this way we obtain the first purely model-theoretic semantics for negation in extensional higher-order logic programming. Using our approach, we resolve an old paradox that was introduced by W. W. Wadge in order to demonstrate the semantic difficulties of higher-order logic programming.

12:30
Abstract Diagnosis for tccp using a Linear Temporal Logic
SPEAKER: Laura Titolo

ABSTRACT. Automatic techniques for program verification usually suffer from the well-known state explosion problem. Most of the classical approaches are based on browsing the structure of some form of model (which represents the behavior of the program) to check if a given specification is valid. This implies that a subset of the model has to be built, and sometimes the needed fragment is quite huge.

In this work, we provide an alternative automatic decision method to check whether a given property, specified in a linear temporal logic, is valid w.r.t. a tccp program. Our proposal (based on abstract interpretation techniques) does not require building any model at all. Our results guarantee correctness but, as usual when using an abstract semantics, completeness is lost.

10:45-13:00 Session 109C: Invited Talk & Higher-Order Logic (IJCAR)
Location: FH, Hörsaal 5
10:45
And-Or Tableaux for Fixpoint Logics with Converse: LTL, CTL, PDL and CPDL
SPEAKER: Rajeev Gore

ABSTRACT. Over the last forty years, computer scientists have invented or borrowed numerous logics for reasoning about digital systems. Here, I would like to concentrate on three of them: Linear Time Temporal Logic (LTL), branching time Computation Tree temporal Logic (CTL), and Propositional Dynamic Logic (PDL), with and without converse. More specifically, I would like to present results and techniques on how to solve the satisfiability problem in these logics, with global assumptions, using the tableau method. The issues that arise are the typical tensions between computational complexity, practicality and scalability. This is joint work with Linh Anh Nguyen, Pietro Abate, Linda Postniece, Florian Widmann and Jimmy Thomson.

11:45
Unified Classical Logic Completeness: A Coinductive Pearl

ABSTRACT. Codatatypes are absent from many programming languages and proof assistants. We make a case for their importance by revisiting a classic result: the completeness theorem for first-order logic established through a Gentzen system. The core of the proof establishes an abstract property of possibly infinite derivation trees, independently of the concrete syntax or inference rules. This separation of concerns simplifies the presentation. The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of first-order logic. The corresponding Isabelle/HOL formalization demonstrates the recently introduced support for codatatypes and the Haskell code generator.

12:15
A Focused Sequent Calculus for Higher-Order Logic

ABSTRACT. We present a focused intuitionistic sequent calculus for higher-order logic. It has primitive support for equality and mixes λ-term conversion with equality reasoning. Classical reasoning is enabled by extending the system with rules for reductio ad absurdum and the axiom of choice. The resulting system is proved sound with respect to Church's simple type theory. A theorem prover based on bottom-up search in the calculus has been implemented. It has been tested on the TPTP higher-order problem set with good results. The problems for which the theorem prover performs best require higher-order unification more frequently than the average higher-order TPTP problem. Being strong at higher-order unification, the system may serve as a complement to other theorem provers in the field. The theorem prover produces proofs that can be mechanically verified against the soundness proof.

10:45-12:45 Session 109D: FLoC Inter-Conference Topic: Security. Software Security (CAV and CSF)
Location: FH, Hörsaal 6
10:45
Declarative Policies for Capability Control
SPEAKER: unknown

ABSTRACT. In capability-safe languages, components can access a resource only if they possess a capability for that resource. As a result, a programmer can prevent an untrusted component from accessing a sensitive resource by ensuring that the component never acquires the corresponding capability. In order to reason about which components may use a sensitive resource it is necessary to reason about how capabilities propagate through a system. This may be difficult, or, in the case of dynamically composed code, impossible to do before running the system.

To counter this situation, we propose extensions to capability-safe languages that restrict the use of capabilities according to declarative policies. We introduce two independently useful semantic security policies to regulate capabilities and describe language-based mechanisms that enforce them. Access control policies restrict which components may use a capability and are enforced using higher-order contracts. Integrity policies restrict which components may influence (directly or indirectly) the use of a capability and are enforced using an information-flow type system. Finally, we describe how programmers can dynamically and soundly combine components that enforce access control or integrity policies with components that enforce different policies or even no policy at all.
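
As a toy rendering of the access-control half of this proposal (a Python sketch of ours; the paper works with higher-order contracts in a capability-safe language, and nothing below is its actual syntax or API):

    def with_access_policy(capability, allowed):
        """Wrap a capability in a higher-order contract that restricts which
        components may invoke it; integrity policies would additionally track
        who influences the call, which this sketch does not attempt."""
        def guarded(component, *args):
            if component not in allowed:
                raise PermissionError(component + " may not use this capability")
            return capability(*args)
        return guarded

    def append_log(msg):                    # the sensitive resource
        print("log:", msg)

    log_cap = with_access_policy(append_log, allowed={"auditor"})
    log_cap("auditor", "audit ok")          # permitted component
    try:
        log_cap("plugin", "tamper")         # rejected by the declarative policy
    except PermissionError as e:
        print("blocked:", e)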

11:15
Portable Software Fault Isolation
SPEAKER: Joshua Kroll

ABSTRACT. We present a set of novel techniques for architecture-portable software fault isolation (SFI), together with a working, proved-sound prototype in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems. Specifically, we prove that any program output by our transformer will be safe with respect to a modified Cminor operational semantics that respects the SFI policy. Further, we arrange that this safety extends to assembly code when our transformer is composed with CompCert. Finally, we argue that our transformer does not modify the observable behavior of safe executions in the Cminor semantics.

11:45
Certificates for Verifiable Forensics
SPEAKER: Corin Pitcher

ABSTRACT. Digital forensics reports typically document the search process that has led to a conclusion; the primary means to verify the report is to repeat the search process. We believe that, as a result, the Trusted Computing Base for digital forensics is unnecessarily large and opaque.

We advocate the use of forensic certificates as intermediate artifacts between search and verification. Because a forensic certificate has a precise semantics, it can be verified without knowledge of the search process and forensic tools used to create it. In addition, this precision opens up avenues for the analysis of forensic specifications. We present a case study using the specification of a “deleted” file.

We propose a verification architecture that addresses the enormous size of digital forensics data sets. As a proof of concept, we consider a computer intrusion case study, drawn from the Honeynet project. Our Coq formalization yields a verifiable certificate of the correctness of the underlying forensic analysis. The Coq development for this case study is available at http://fpl.cs.depaul.edu/projects/forensics/.

12:15
Information flow monitoring as abstract interpretation for relational logic

ABSTRACT. A number of systems have been developed for dynamic information flow control (IFC). In such systems, the security policy is expressed by labeling input and output channels; it is enforced by tracking and checking labels on data. Some systems have been proved to enforce some form of noninterference (NI), formalized as a property of two runs of the program. In practice, NI is too strong and it is desirable to enforce some relaxation of NI that allows downgrading under constraints that have been classified as `what', `where', `who', or `when' policies. To encompass a broad range of policies, relational logic has been proposed as a means to specify and statically enforce policy. This paper shows how relational logic policies can be dynamically checked. To do so, we provide a new account of monitoring, in which the monitor state is viewed as an abstract interpretation of certain pairs of program runs.
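
For readers unfamiliar with dynamic IFC, a bare-bones label-tracking monitor for explicit flows looks roughly as follows (our illustration only; the monitors studied in the paper are far richer, handling downgrading policies and viewing the monitor state as an abstract interpretation of pairs of runs):

    LOW, HIGH = 0, 1                        # a two-point security lattice

    class Labeled:
        def __init__(self, value, label):
            self.value, self.label = value, label
        def __add__(self, other):
            # the label of a result is the join of the operands' labels
            return Labeled(self.value + other.value, max(self.label, other.label))

    def output(channel_label, lv):
        if lv.label > channel_label:        # label check at the output channel
            raise RuntimeError("flow violation: HIGH data on a LOW channel")
        print(lv.value)

    secret, public = Labeled(42, HIGH), Labeled(7, LOW)
    output(LOW, public + public)            # allowed: LOW join LOW = LOW
    try:
        output(LOW, public + secret)        # stopped: LOW join HIGH = HIGH
    except RuntimeError as e:
        print("monitor:", e)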

10:45-13:00 Session 109E: Invited Talk, Contributed Talks, and Poster Announcements (DL)
Location: EI, EI 7
10:45
Invited Talk: Structured Data on the Web (or, a Personal Journey Away From and Back To Ontologies)
SPEAKER: Alon Halevy

ABSTRACT. For the first time since the emergence of the Web, structured data is playing a key role in search engines and is therefore being collected via a concerted effort. Much of this data is being extracted from the Web, which contains vast quantities of structured data on a variety of domains, such as hobbies, products and reference data. Moreover, the Web provides a platform that encourages publishing more data sets from governments and other public organizations. The Web also supports new data management opportunities, such as effective crisis response, data journalism and crowd-sourcing data sets.

I will describe some of the efforts we are conducting at Google to collect structured data, filter the high-quality content, and serve it to our users. These efforts include providing Google Fusion Tables, a service for easily ingesting, visualizing and integrating data, mining the Web for high-quality HTML tables, and contributing these data assets to Google's other services. The talk will touch at various points on the role of ontologies in managing all this structured data and the new kind of ontological tools we may require.

11:45
On Faceted Search over Knowledge Bases
SPEAKER: unknown

ABSTRACT. An increasing number of applications are relying on RDF format for storing data, on OWL ontologies to enhance the data with semantics, and SPARQL for accessing the data. SPARQL, however, is not targeted towards end-users, and suitable query interfaces are needed. Faceted search is a prominent approach for end-user data access, and in this paper we discuss how this approach can be applied to RDF and OWL. We implemented our ideas in a proof of concept prototype, SemFacet, and a preliminary evaluation of the system is encouraging.

12:10
Pushing the CFDnc Envelope
SPEAKER: unknown

ABSTRACT. We consider the consequences for basic reasoning problems of allowing negated primitive concepts on left-hand sides of inclusion dependencies in the description logic dialect CFDnc. Although earlier work has shown that this makes CQ answering coNP-complete, we show that TBox consistency and concept satisfiability remain in PTIME. We also show that knowledge base consistency and instance retrieval remain in PTIME if a CFDnc knowledge base satisfies a number of additional conditions, and that failing any one of these conditions will alone lead to intractability for these problems.

12:35
OptiqueVQS: Visual Query Formulation for OBDA
SPEAKER: unknown

ABSTRACT. Many applications nowadays expose data in semantic formats such as RDF and OWL. The standard query language to access such data is SPARQL. Writing SPARQL requires training and is not feasible for inexperienced users. In this paper we present OptiqueVQS, a query formulation interface that addresses this problem and allows users to construct conjunctive SPARQL queries in an intuitive way. Our approach relies on a graph-based representation paradigm to visualise queries. Users can construct queries step by step, and OptiqueVQS provides them with suggestions on how to continue query construction. These suggestions depend on specific parts of the constructed query and are automatically computed by reasoning over the ontology underlying the interface. OptiqueVQS has two types of suggestions, which are based on data and object properties. Preliminary user studies of the interface with geologists gave us encouraging results.

12:38
Visualization and management of mappings in ontology-based data access (progress report)
SPEAKER: unknown

ABSTRACT. In this paper we present an in-progress prototype for the graphical visualization and management of mappings for Ontology-based data access (OBDA). The tool supports the specification of expressive mappings linking a DL-Lite ontology to a relational source database, and allows the designer to graphically navigate them according to various possible views, modify their textual representation, and validate their syntactic correctness. Furthermore, it gives preliminary support for the semantic analysis of the mappings, which aims to identify anomalies in the representation, like the specification of mapping queries that cannot return answers without contradicting the ontology. Such functionalities are currently being enhanced in our ongoing development of the tool.

12:41
Graphol: Ontology representation through diagrams
SPEAKER: unknown

ABSTRACT. In this paper we present Graphol, a novel language for the diagrammatic representation of Description Logic (DL) ontologies. Graphol is designed with the aim of offering a completely visual representation to the users (notably, no formulae need to be used in our diagrams), thus helping people not skilled in logic to understand the ontology. At the same time, it provides designers with simple mechanisms for ontology editing, which free them from having to write down complex textual syntax. Graphol offers most of the classical constructs used in DLs, and allows for specifying expressive ontologies. It is also suitably extended to capture the same expressivity as OWL 2. In this respect, we developed a basic tool to translate Graphol ontologies realized with the yEd graph editor into OWL 2 encoding. User evaluation tests, conducted with designers skilled in conceptual or ontology modeling and users without specific logic background, demonstrate the effectiveness of our language for both visualization and editing of ontologies.

12:44
Reducing global consistency to local consistency in Ontology-based Data Access - Extended Abstract
SPEAKER: unknown

ABSTRACT. Ontology-based data access (OBDA) is a new paradigm aiming at accessing and managing data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. In recent years, this new paradigm has been used for providing users with suitable mechanisms for querying the data residing at the information system sources. Most of the research has concentrated on making query answering efficient. However, query answering is not the only service that an OBDA system must provide. Another crucial service is consistency checking. Current approaches to this problem involve executing expensive queries at run-time. In this paper we address a fundamental problem for OBDA systems: given an OBDA specification, can we avoid the consistency check on the whole OBDA system (global consistency checking) and rely instead on the constraint checking carried out by the DBMS on the data source (local consistency checking)? We present algorithms and a complexity analysis for this problem, showing that it can be solved efficiently for a class of OBDA systems that is very relevant in practice.

12:47
Expressive Identification Constraints to Capture Functional Dependencies in Description Logics
SPEAKER: unknown

ABSTRACT. Mapping relational data to RDF is an important task for the development of the Semantic Web. To this end, the W3C has recently released a Recommendation for the so-called direct mapping of relational data to RDF. In this work, we propose an enrichment of the direct mapping to make it more faithful by also transferring the semantic information present in the relational schema from the relational world to the RDF world. For this purpose we enrich DL-Lite_{RDFS} with expressive identification constraints to capture functional dependencies and define an RDF Normal Form, which precisely captures the classical Boyce-Codd Normal Form of relational schemas.

12:50
OBDA Using RL Reasoners and Repairing

ABSTRACT. In previous work it has been shown how a SROIQ ontology O can be `repaired' for an RL system ans---that is, how we can compute a set of axioms R that is independent from the data and such that ans, which is generally incomplete for O, becomes complete for all SPARQL queries when used with O and R. However, the initial implementation and experiments were very preliminary, and hence it is currently unclear whether the approach can be applied to large and complex ontologies. Moreover, the approach could not support non-SPARQL CQs. In the current paper we thoroughly investigate repairing as an approach to scalable (and complete) ontology-based data access. First, we present several non-trivial optimisations to the first prototype. Second, we show how (arbitrary) conjunctive queries can be supported by integrating well-known query rewriting techniques with RL systems via repairing. Third, we perform an extensive experimental evaluation, obtaining encouraging results. In more detail, our results show that we can compute repairs even for very large real-world ontologies in a reasonable amount of time, that the performance overhead introduced by repairing is negligible in small to medium sized ontologies and noticeable but manageable in large and complex ones, and that our approach to handling non-SPARQL CQs can very efficiently compute the correct answers for real-world challenging TBoxes.

12:53
A Method to Develop Description Logic Ontologies Iteratively with Automatic Requirement Traceability
SPEAKER: unknown

ABSTRACT. Competency Questions (CQs) play an important role in the ontology development life-cycle, as they represent the ontology requirements. Although the main methodologies use them, there is room for improvement in the current practice of ontology engineering. One of the main problems is the lack of tools to check, preferably in an automated way, whether CQs defined in OWL are being fulfilled by an ontology. Moreover, requirement (or CQ) traceability is rarely explored. Recently there has been a trend of checking CQs against ontologies using RDF and SPARQL. Naturally, this language, having been created for Semantic Networks, is inadequate for checking the fulfillment of OWL CQs. In this paper, we introduce a semi-automatic, full-fledged method to develop ontologies iteratively, using CQs as requirements. It presents many novelties: a tracer to monitor the relations among CQs and the OWL code; an NLP component that translates CQs in natural language into OWL queries; a logic checker that confirms whether CQs are satisfied or not; a generator of new CQs, which comes into play when the CQ being dealt with is not yet satisfied; and an axiom generator that suggests to the user new axioms to be included in the ontology.

12:56
Evaluation of Extraction Techniques for Ontology Excerpts
SPEAKER: unknown

ABSTRACT. In this paper we introduce the notion of an ontology excerpt as a fixed-size subset of an ontology that preserves as much as possible of the meaning of the terms in a given signature as described in the ontology. We consider different extraction techniques for ontology excerpts based on methods from information retrieval. To evaluate these techniques, we measure the degree of incompleteness of the resulting excerpts using the notion of logical difference. As such a measure we use the number of concept names from an input signature that witness the logical concept difference between the ontology and its excerpt. We provide an experimental evaluation of the extraction techniques using the biomedical ontology SNOMED CT.

10:45-12:45 Session 109F: Invited Talk (Ghilardi) and Tutorial (Baader) (LATD)
Location: MB, Festsaal
10:45
Step frames analysis in single- and multi-conclusion calculi
11:45
Tutorial: Fuzzy Description Logics (Part 2)
SPEAKER: Franz Baader
10:45-12:15 Session 109G: Contributed Talks: Declarative Programming 3 (NMR)
Location: EI, EI 9
10:45
Interactive Debugging of ASP Programs

ABSTRACT. Broad application of answer set programming (ASP) for declarative problem solving requires the development of tools supporting the coding process. Program debugging is one of the crucial activities within this process. Recently suggested ASP debugging approaches allow efficient computation of possible explanations of a fault. However, even for a small program a debugger might return a large number of possible explanations, and the correct one must be selected manually. In this paper we present an interactive query-based ASP debugging method which extends previous approaches and finds a preferred explanation by means of observations. The system queries a programmer whether a set of ground atoms must be true in all (cautiously) or some (bravely) answer sets of the program. Since some queries can be more informative than others, we discuss query selection strategies which, given the user's preferences for an explanation, can find the best query, that is, the query whose answer reduces the overall number of queries required for the identification of a preferred explanation.

11:15
Semantics and Compilation of Answer Set Programming with Generalized Atoms
SPEAKER: unknown

ABSTRACT. Answer Set Programming (ASP) is logic programming under the stable model or answer set semantics. During the last decade, this paradigm has seen several extensions by generalizing the notion of atom used in these programs. Among these, there are aggregate atoms, HEX atoms, generalized quantifiers, and abstract constraints. In this paper we refer to these constructs collectively as generalized atoms. The idea common to all of these constructs is that their satisfaction depends on the truth values of a set of (non-generalized) atoms, rather than the truth value of a single (non-generalized) atom. Motivated by several examples, we argue that for some of the more intricate generalized atoms the previously suggested semantics provide unintuitive results, and we propose an alternative semantics, which we call supportedly stable or SFLP answer sets. We show that it is equivalent to the major previously proposed semantics for programs with convex generalized atoms, and that it in general admits more intended models than other semantics in the presence of non-convex generalized atoms. We show that the complexity of supportedly stable models is on the second level of the polynomial hierarchy, similar to previous proposals and to stable models of disjunctive logic programs. Given these complexity results, we provide a compilation method that compactly transforms programs with generalized atoms in disjunctive normal form to programs without generalized atoms. Variants are given for the new supportedly stable and the existing FLP semantics, for which a similar compilation technique has not been known so far.

11:45
A Family of Descriptive Approaches To Preferred Answer Sets

ABSTRACT. In logic programming under the answer set semantics, preferences on rules are used to choose which of the conflicting rules are applied. Many interesting semantics have been proposed. Brewka and Eiter's Principle I expresses the basic intuition behind the preferences. All the approaches that satisfy Principle I introduce a rather imperative feature into an otherwise declarative language. They understand preferences as the order in which the rules of a program have to be applied. In this paper we present two purely declarative approaches for preference handling that satisfy Principle I and work for indirectly conflicting rules. The first approach is based on the idea that a rule cannot be defeated by a less preferred conflicting rule. This approach is able to ignore preferences between non-conflicting rules and, e.g., is equivalent to the answer set semantics for the subclass of stratified programs. It is suitable for scenarios where developers do not have full control over preferences. The second approach relaxes the requirement for ignoring conflicting rules, which ensures that it stays in the NP complexity class. It is based on the idea that a rule cannot be defeated by a rule that is less preferred or depends on a less preferred rule. The second approach can also be characterized by a transformation to logic programs without preferences. It turns out that the approaches form a hierarchy independent of the hierarchy of the approaches by Delgrande et al., Wang et al., and Brewka and Eiter. The approaches of the paper are equivalent to our earlier approach for direct conflicts when the preferences are only between directly conflicting rules. Finally, we show an application for which the existing approaches are not usable, and the approaches of this paper produce the expected results.

11:00-13:00 Session 110: Tutorial and invited talk (LC)
Location: MB, Prechtlsaal
11:00
Tutorial on Classical Realizability and Forcing 3
12:00
Reductions in computability theory from a constructive point of view
SPEAKER: Andrej Bauer

ABSTRACT. In constructive mathematics we often consider implications between non-constructive reasoning principles. For instance, it is well known that the limited principle of omniscience implies that equality of real numbers is decidable. Most such reductions proceed by reducing an instance of the consequent to an instance of the antecedent. We may therefore define a notion of *instance reducibility*, which turns out to have a very rich structure. Even better, under Kleene's function realizability interpretation, instance reducibility corresponds to Weihrauch reducibility, while Kleene's number realizability relates it to truth-table reducibility. We may also ask about a constructive treatment of other reducibilities in computability theory. I shall discuss how one can tackle Turing reducibility constructively via Kleene's number realizability. One can then ask whether the constructive formulation of Turing degrees relates them to standard mathematical concepts.
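Roughly (our gloss, not the speaker's precise definition): for principles of the form P = ∀x ∃y φ(x,y) and Q = ∀u ∃v ψ(u,v), P is instance-reducible to Q when every instance x of P can be answered via some instance u of Q, i.e. there is a way of passing from x to an instance u, and from any witness v for u back to a witness y for x. Under function realizability this forward/backward pair is exactly the shape of a Weihrauch reduction.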

12:15-13:00 Session 111: Contributed Talks: Systems 2 (NMR)
Location: EI, EI 9
12:15
Integrating Declarative Programming and Probabilistic Graphical Models for Knowledge Representation and Reasoning in Robotics
SPEAKER: unknown

ABSTRACT. This paper describes an architecture that combines the complementary strengths of declarative programming and probabilistic graphical models to enable robots to represent, reason with, and learn from qualitative and quantitative descriptions of uncertainty and knowledge. An action language is used for the low-level (LL) and high-level (HL) system descriptions in the architecture, and the definition of recorded histories in the HL is expanded to allow prioritized defaults. For any given goal, tentative plans created in the HL using default knowledge and commonsense reasoning are implemented in the LL using probabilistic algorithms, with the corresponding observations used to update the HL history. Tight coupling between the two levels enables automatic selection of relevant variables and generation of suitable action policies in the LL for each HL action, and supports reasoning with violations of defaults, noisy observations and unreliable actions in large and complex domains. The architecture is evaluated in simulation and on physical robots transporting objects in indoor domains; on the robots, the benefit is a 39% reduction in task execution time compared with a purely probabilistic, but still hierarchical, approach.

12:37
An ASP-Based Architecture for Autonomous UAVs in Dynamic Environments: Progress Report
SPEAKER: unknown

ABSTRACT. Traditional AI reasoning techniques have been used successfully in many domains, including logistics, scheduling and game playing. This paper is part of a project aimed at investigating how such techniques can be extended to coordinate teams of unmanned aerial vehicles (UAVs) in dynamic environments. Specifically challenging are real-world environments where UAVs and other network-enabled devices must communicate to coordinate, and where communication actions are neither reliable nor free. Such network-centric environments are common in military, public safety and commercial applications, yet most research (even on multi-agent planning) usually takes communication among distributed agents as a given. We address this challenge by developing an agent architecture and reasoning algorithms based on Answer Set Programming (ASP). ASP has been chosen for this task because it enables high flexibility of representation, both of knowledge and of reasoning tasks. Although ASP has been used successfully in a number of applications, and ASP-based architectures have been studied for about a decade, to the best of our knowledge this is the first practical application of a complete ASP-based agent architecture. It is also the first practical application of ASP involving a combination of centralized reasoning, decentralized reasoning, execution monitoring, and reasoning about network communications. This work has been empirically validated using a distributed network-centric software evaluation testbed, and the results provide guidance to designers on how to understand and control intelligent systems that operate in these environments.

13:00-14:30Lunch Break
14:00-16:00 Session 112: Invited talk and Karp prize winner (LC)
Location: MB, Prechtlsaal
14:00
On a theorem of McAloon
SPEAKER: Albert Visser

ABSTRACT. A theory is *restricted* if there is a fixed bound on the complexity of its axioms. Kenneth McAloon proved that every restricted arithmetical theory that is consistent with Peano Arithmetic has a model in which the standard natural numbers are definable. As a slogan, one could say that McAloon shows that one needs the full language to exclude the standard numbers in principle.

In this talk we discuss the idea of generalizing McAloon's result to the class of consistent restricted sequential theories. We only obtain a weaker statement for the more general case. Whether the stronger statement holds remains open.

Sequential theories are, as a first approximation, theories with sufficient coding machinery for the construction of partial satisfaction predicates of a certain sort. Specifically, we have satisfaction for classes of formulas with complexity below $n$ for a complexity measure like *depth of quantifier alternations*. There are several salient general results concerning sequential theories. For example, the degrees of interpretability of sequential theories have many good properties. Examples of sequential theories are PA^-, S^1_2, IΣ_1, PA, ACA_0, ZF, GB.

To any sequential model M we can uniquely assign an arithmetical model J_M. This is, roughly, the intersection of all definable cuts of an internal model N of a weak arithmetic like S^1_2. We can show that J_M is independent of the specific choice of N. Our theorem says that any consistent restricted sequential theory U has a model M such that J_M is isomorphic to the standard model.

In the talk, we will briefly indicate how McAloon's proof works and discuss some immediate generalizations. Then, we will outline the basic ideas behind the proof of the result concerning consistent restricted sequential theories.

15:00
The Singular Cardinals Problem after 130 years or so
SPEAKER: Matt Foreman
14:30-16:00 Session 113A: FLoC Inter-Conference Topic: Security (CAV and CSF)
Location: FH, Hörsaal 1
14:30
Synthesis of Masking Countermeasures against Side Channel Attacks
SPEAKER: unknown

ABSTRACT. When mathematicians and computer scientists design cryptographic algorithms, they assume that sensitive information can be manipulated in a closed computing environment. Unfortunately, real computers and microchips leak information about the software code they execute, e.g. through power dissipation and electromagnetic radiation. Such side channel leaks have been exploited in many commercial systems in the embedded space. In this paper, we propose a counterexample-guided inductive synthesis method for generating software countermeasures for implementations of cryptographic algorithms. Our new method guarantees that the resulting software code is "perfectly masked", that is, all intermediate computation results are statistically independent of the secret data. We have implemented our method based on the LLVM compiler and the Yices SMT solver. Our experiments on a set of cryptographic software code show that the new method is both effective in eliminating side channel leaks and scalable for practical use.
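To make "perfectly masked" concrete, here is a minimal first-order Boolean masking sketch in Python. It illustrates only the property the paper's synthesizer enforces, not the synthesis method itself; the byte width and the XOR-only computation are our own illustrative choices.

    import secrets

    def mask(secret_byte):
        # Split the secret into two shares; each share in isolation is
        # uniformly distributed and therefore independent of the secret.
        r = secrets.randbits(8)
        return (r, secret_byte ^ r)

    def masked_xor(a, b):
        # Share-wise XOR: (a0 ^ b0, a1 ^ b1) masks unmask(a) ^ unmask(b),
        # and no intermediate value ever combines both shares of one secret.
        return (a[0] ^ b[0], a[1] ^ b[1])

    def unmask(shares):
        return shares[0] ^ shares[1]

    k = mask(0x2b)   # masked secret key byte
    x = mask(0x3c)   # masked input byte
    assert unmask(masked_xor(k, x)) == 0x2b ^ 0x3c

Checking that every intermediate value of a real implementation stays independent of the secret, and inserting fresh randomness where it does not, is the task the paper automates.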

14:50
Temporal Mode-Checking for Runtime Monitoring of Privacy Policies
SPEAKER: Limin Jia

ABSTRACT. Fragments of first-order temporal logic are useful for representing many practical privacy and security policies. Past work has proposed two strategies for checking event trace (audit log) compliance with policies: online monitoring and offline audit. Although online monitoring is space- and time-efficient, existing techniques insist that satisfying instances of all subformulas of the policy be amenable to caching, which limits expressiveness when some subformulas have infinite support. In contrast, offline audit is brute-force and can handle such policies but is not as efficient. This paper proposes a new online monitoring algorithm that caches satisfying instances when it can, and falls back to the brute-force search when it cannot. Our key technical insight is a new static check, called the temporal mode check, which determines subformulas for which such caching is feasible and those for which it is not and, hence, guides our algorithm. We prove the correctness of our algorithm, and evaluate its performance over synthetic traces and realistic policies.

15:10
Program Verification through String Rewriting
SPEAKER: unknown

ABSTRACT. We consider the problem of verifying programs operating on strings, in combination with other datatypes like arithmetic or the heap. As a tool to check verification conditions, analyse the feasibility of execution paths, or build finite abstractions, we present a new decision procedure for a rich logic including strings. Applications of our framework include, in particular, the detection of vulnerabilities of web applications such as SQL injection and cross-site scripting.
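For a taste of the kind of string constraint such a procedure must decide, here is a sketch using the Z3 string solver (an existing off-the-shelf solver, not the decision procedure of the paper); the query template and the injection pattern are made up for illustration.

    from z3 import String, StringVal, Concat, Contains, Solver, sat

    user = String('user')
    # A hypothetical query-building path in the program under analysis.
    query = Concat(StringVal("SELECT * FROM t WHERE name='"),
                   user,
                   StringVal("'"))
    s = Solver()
    # Is there a user input that makes the query contain an injection?
    s.add(Contains(query, StringVal("' OR '1'='1")))
    if s.check() == sat:
        print(s.model()[user])   # a concrete attack input

A solver that additionally handles arithmetic and heap constraints on the same path is what enables the whole-program analyses described in the abstract.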

15:30
A Conference Management System with Verified Document Confidentiality

ABSTRACT. We present a case study in verified security for realistic systems: the implementation of a conference management system, whose functional kernel is faithfully represented in the Isabelle theorem prover, where we specify and verify confidentiality properties. The various theoretical and practical challenges posed by this development led to a novel security model and verification method generally applicable to systems describable as input-output automata.

15:50
VAC - Verifier of Administrative Role-based Access Control Policies

ABSTRACT. In this paper we present VAC, an automatic tool for verifying security properties of administrative Role-based Access Control (RBAC). RBAC has become an increasingly popular access control model, particularly suitable for large organizations, and it is implemented in many software products. Automatic security analysis of administrative RBAC systems is recognized as an important problem, as an analysis tool can help designers check whether their policies meet the expected security properties. VAC converts administrative RBAC policies to imperative programs that simulate the policies both precisely and abstractly, and it supports several automatic verification back-ends to analyze the resulting programs. In this paper, we describe the architecture of VAC and give an overview of the analysis techniques implemented in the tool. We also report on experiments with several benchmarks from the literature.

14:30-16:00 Session 113B: Invited Talk / Technical Session: Domain Specific Languages (ICLP)
Location: FH, Hörsaal 8
14:30
A Module System for Domain-Specific Languages

ABSTRACT. Domain-specific languages (DSLs) are routinely created to simplify difficult or specialized programming tasks. They expose useful abstractions and design patterns in the form of language constructs, provide static semantics to eagerly detect misuse of these constructs, and dynamic semantics to completely define how language constructs interact. However, implementing and composing DSLs is a non-trivial task, and there is a lack of tools and techniques.

We address this problem by presenting a complete module system over LP for DSL construction, reuse, and composition. LP is already useful for DSL design, because it supports executable language specifications using notations familiar to language designers. We extend LP with a module system that is simple (with a few concepts), succinct (for key DSL specification scenarios), and composable (on the level of languages, compilers, and programs). These design choices reflect our use of LP for industrial DSL design. Our module system has been implemented in the FORMULA language and was used to build key Windows 8 device drivers via DSLs. Though we present our module system as it actually appears in our FORMULA language, our emphasis is on concepts adaptable to other LP languages.

15:00
Combinatorial Search With Picat
SPEAKER: Neng-Fa Zhou
14:30-16:00 Session 113C: FLoC Inter-Conference Topic: SAT/SMT/QBF (CAV and IJCAR)
Location: FH, Hörsaal 5
14:30
SAT-based Decision Procedure for Analytic Pure Sequent Calculi
SPEAKER: Yoni Zohar

ABSTRACT. We identify a wide family of analytic sequent calculi for propositional non-classical logics whose derivability problem can be uniformly reduced to SAT. The proposed reduction is based on interpreting these calculi using non-deterministic semantics. Its time complexity is polynomial, and, in fact, linear for a useful subfamily. We further study an extension of such calculi with "Next" operators, and show that this extension preserves analyticity and is subject to a similar reduction to SAT. A particularly interesting instance of these results is a HORNSAT-based linear-time decision procedure for Gurevich and Neeman's primal infon logic and several natural extensions of it.

15:00
A Unified Proof System for QBF Preprocessing
SPEAKER: Marijn Heule

ABSTRACT. For quantified Boolean formulas (QBFs), preprocessing is essential for solving many real-world formulas. The application of a preprocessor, however, has so far prevented the extraction of proofs that independently validate the correctness of the solver's result; for universal expansion in particular, proof checking was not possible until now. In this paper, we introduce a unified proof system based on three simple and elegant rules for quantified asymmetric tautologies (QRAT). In combination with an extended version of universal reduction, they are sufficient to efficiently express all preprocessing techniques used in state-of-the-art preprocessors, including universal expansion. Moreover, these rules give rise to new preprocessing techniques. We equip our preprocessor bloqqer with QRAT proof logging and provide a proof checker for QRAT proofs.

15:30
The Fractal Dimension of SAT Formulas

ABSTRACT. Modern SAT solvers have made remarkable progress on solving industrial instances. Most of the techniques have been developed after an intensive experimental testing process. It is believed that these techniques exploit the underlying structure of industrial instances; however, there is no precise definition of this notion of structure. Recently, there have been some attempts to analyze the structure of these formulas in terms of complex networks, with the long-term aim of explaining the success of these SAT solving techniques, and possibly improving them. We study the fractal dimension of SAT instances with the aim of complementing the model that describes the structure of industrial instances. We show that many industrial families of formulas are self-similar, with a small fractal dimension. We also show how this dimension is affected by the addition of learnt clauses during the execution of SAT solvers.
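For intuition, the fractal dimension of a graph is commonly estimated by box covering: count the number N(r) of balls of radius r needed to cover the graph and fit N(r) ∝ r^(-d). The sketch below (a generic illustration using the networkx library, not the paper's tool; the interaction-graph construction and the radii are our own assumptions) applies this to the variable-interaction graph of a CNF formula.

    import math
    import networkx as nx

    def interaction_graph(clauses):
        # Clauses are lists of signed integers; each clause connects
        # every pair of variables it mentions.
        g = nx.Graph()
        for clause in clauses:
            vs = [abs(lit) for lit in clause]
            g.add_edges_from((a, b) for i, a in enumerate(vs) for b in vs[i + 1:])
        return g

    def boxes_needed(g, r):
        # Greedy covering: repeatedly pick an uncovered node and cover
        # its radius-r ball (greedy is a standard approximation here).
        uncovered, n = set(g), 0
        while uncovered:
            center = next(iter(uncovered))
            ball = nx.single_source_shortest_path_length(g, center, cutoff=r)
            uncovered -= set(ball)
            n += 1
        return n

    def fractal_dimension(g, radii=(1, 2, 3, 4)):
        # Least-squares slope of log N(r) against log r; -slope estimates d.
        pts = [(math.log(r), math.log(boxes_needed(g, r))) for r in radii]
        mx = sum(x for x, _ in pts) / len(pts)
        my = sum(y for _, y in pts) / len(pts)
        slope = sum((x - mx) * (y - my) for x, y in pts) / \
                sum((x - mx) ** 2 for x, y in pts)
        return -slope

Self-similarity of an instance then shows up as a good straight-line fit with a small d.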

14:30-16:00 Session 113D: Contributed Talks and Poster Announcements (DL)
Location: EI, EI 7
14:30
Query Inseparability by Games
SPEAKER: unknown

ABSTRACT. We investigate conjunctive query inseparability of description logic knowledge bases (KBs) with respect to a given signature, a fundamental problem for KB versioning, module extraction, forgetting and knowledge exchange. We develop a game-theoretic technique for checking query entailment of KBs expressed in fragments of Horn-ALCHI, and show a number of complexity results ranging from P to EXPTIME and 2EXPTIME.

14:55
Detecting Conjunctive Query Differences between ELHr-Terminologies using Hypergraphs
SPEAKER: unknown

ABSTRACT. We present a new method for detecting logical differences between EL-terminologies extended with role inclusions and domain and range restrictions of roles, using a hypergraph representation of ontologies. In this paper we only consider differences that are given by pairs consisting of a conjunctive query together with an ABox formulated over a signature of interest. We define a simulation notion between such hypergraph representations and we show that its existence coincides with the absence of a logical difference. To demonstrate the practical applicability of our approach, we evaluate the performance of our prototype implementation on large ontologies.

15:20
Forgetting and Uniform Interpolation for ALC-Ontologies with ABoxes
SPEAKER: unknown

ABSTRACT. We present a method to compute uniform interpolants of ALC ontologies with ABoxes. Uniform interpolants are restricted views of ontologies that use only a specified set of symbols, but share all entailments in that signature with the original ontology. In this way, one can select or remove information from an ontology based on a signature, which has applications in privacy, ontology analysis and ontology reuse. It turns out that, in general, uniform interpolants of ALC ontologies with ABoxes may require the use of disjunctions in the ABox, or of nominals. Our evaluation of the method suggests, however, that in most practical cases uniform interpolants can be represented as classical ALC ontologies.

15:45
TBox abduction in ALC using a DL tableau
SPEAKER: unknown

ABSTRACT. The formal definition of abduction asks what needs to be added to a knowledge base to enable an observation to be entailed. TBox abduction in description logics (DLs) asks what TBox axioms need to be added to a DL knowledge base to allow a TBox axiom to be entailed. We describe a sound and complete algorithm, based on the standard DL tableau, that takes a TBox abduction problem in ALC and generates solutions in a restricted language. We then show how this algorithm can be enhanced to deal with a broader range of problems in ALC.

15:48
Understandable Explanations in Description Logic
SPEAKER: unknown

ABSTRACT. We present a method for generating understandable explanations in description logic. Such explanations could be of potential use for e.g. engineers, doctors, and users of the semantic web. Users commonly need to understand why a logical statement follows from a set of hypotheses. Then automatically generated explanations that are easily understandable could be of help. A proof system for description logic that can be used for generating understandable explanations is proposed. Similar systems have been proposed for propositional logic and first-order logic.

15:51
WApproximation: computing approximate answers for ontological queries
SPEAKER: Yahia Chabane

ABSTRACT. This paper investigates a new approach for computing approximate answers to ontological queries, based on a notion of edit distance of a given individual w.r.t. a given query. Such a distance is computed by counting the number of elementary operations that must be applied to an ABox in order to make the given individual a correct answer to the given query. The elementary operations considered are the addition to, or removal from, an ABox of assertions of atomic concepts (or their negations) and/or atomic roles. We describe some preliminary results regarding the problem of computing such approximate answers under the Open World Assumption (OWA) and under the Generalized Closed World Assumption (GCWA).
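In symbols (our paraphrase of the measure described above): dist(a, Q, A) = min{ n | there are elementary operations o1, …, on with (on ∘ ⋯ ∘ o1)(A) ⊨ Q(a) }, so exact answers are exactly those at distance 0, and approximate answers are ranked by increasing distance.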

15:54
Towards Parallel Repair: An Ontology Decomposition-based Approach
SPEAKER: Yue Ma

ABSTRACT. Ontology repair remains one of the main bottlenecks for the development of ontologies for practical use. Many automated methods have been developed for suggesting potential repairs, but ultimately human intervention is required for selecting the adequate one, and the human expert might be overwhelmed by the amount of information delivered to her. We propose a decomposition of ontologies into smaller components that can be repaired in parallel. We show the utility of our approach for ontology repair, provide algorithms for computing this decomposition through standard reasoning, and study the complexity of several associated problems.

15:57
Analyzing the Complexity of Consistent Query Answering under Existential Rules

ABSTRACT. We consider the problem of consistent query answering under existential rules. More precisely, we aim at answering conjunctive queries under ontologies expressed by existential rules in the presence of negative constraints. These constraints may make the knowledge base inconsistent. Semantics have already been defined in order to keep meaningful answers even in the presence of such inconsistencies. However, it is known that even under very simple ontologies, consistent query answering is intractable under some reasonable semantics. The aim of this paper is to study more precisely where the complexity comes from, and to provide theoretical tools that make the complexity of the problem decrease in reasonable settings. To that purpose, we introduce the notion of an archipelago and adapt the notion of exchangeable literals, which was introduced in the setting of conjunctive query answering with atomic negation. We exploit them to derive tractability results, first in the absence of existential rules and then when simple (but nonetheless expressive) ontologies are used.

14:30-16:00 Session 113E: Contributed Talks (LATD)
Location: MB, Festsaal
14:30
MV-algebras with product and the Pierce-Birkhoff conjecture

ABSTRACT. Our main goal was to understand the connection between Lukasiewicz logic with product and the Pierce-Birkhoff conjecture, and to express it in a mathematical way. To do this we define the class of fMV-algebras, which are MV-algebras endowed with both an internal binary product and a scalar product with scalars from [0,1]. The proper quasi-variety generated by [0,1], with both products interpreted as the real product, provides the desired framework: the normal form theorem of its corresponding logical system can be seen as a local version of the Pierce-Birkhoff conjecture. We survey the theory of MV-algebras with product (PMV-algebras, Riesz MV-algebras, fMV-algebras) with a special focus on the normal form theorems and their connection with the Pierce-Birkhoff conjecture.

15:00
On tensor product in Lukasiewicz logic

ABSTRACT. Our goal is to establish functorial adjunctions between the category of MV-algebras and the categories of structures obtained by adding product operations, i.e. Riesz MV-algebras, PMV-algebras and fMV-algebras. We succeed in doing this for the corresponding subclasses of semisimple structures. Our main tool is the semisimple tensor product defined by Mundici. As a consequence, we prove the amalgamation property for unital and semisimple fMV-algebras. Along the way we also prove that the MV-algebraic tensor product commutes, via Mundici's $\Gamma$ functor, with the tensor product of abelian lattice-ordered groups defined by Martinez.

15:30
On the Logic of Perfect MV-algebras

ABSTRACT. The paper is devoted to the logic Lp of perfect MV-algebras. It is shown that a) an m-generated perfect MV-algebra is projective if and only if it is finitely presented; b) there exists a one-to-one correspondence between the projective formulas of Lp with m variables and the m-generated projective subalgebras of the m-generated free algebra of the variety generated by perfect MV-algebras; c) Lp is structurally complete.

14:30-16:00 Session 113F: Contributed Talks (LATD)
Location: MB, Hörsaal 15
14:30
Bases for admissible rules for fragments of RMt

ABSTRACT. In this work, we provide bases for admissible rules for certain fragments of RMt, the logic R-mingle extended with a constant t.

15:00
The Admissible Rules of Subframe Logics

ABSTRACT. The admissible rules of a logic are exactly those rules under which its set of theorems is closed. In this talk we will consider a class of intermediate logics, known as the subframe logics, and explore some of the structure of their admissible rules. We will show a particular scheme of admissible rules to be admissible in all subframe logics. Using this scheme we provide a complete description of the admissible rules of the intermediate logic BD2.

15:30
Admissible rules and almost structural completeness in some first-order modal logics
SPEAKER: unknown

ABSTRACT. Almost structural completeness is proved, and the form of the admissible rules is determined, for some first-order modal logics extending S4.3. Bases for the admissible rules are also investigated.

14:30-15:30 Session 113G: Invited Talk (NMR)
Location: EI, EI 9
14:30
Revisiting Postulates for Inconsistency Measures

ABSTRACT. We discuss postulates for inconsistency measures as proposed in the literature. We examine them both individually and as a collection.
Although we criticize two of the main postulates, we mostly focus on the meaning of the postulates as a whole. Accordingly, we discuss a number of new postulates as substitutes and/or as alternative families.

15:30-16:00 Session 114: Contributed Talk: Nonmonotonic Logics (NMR)
Location: EI, EI 9
15:30
Implementing Default and Autoepistemic Logics via the Logic of GK
SPEAKER: unknown

ABSTRACT. The logic of knowledge and justified assumptions, also known as the logic of grounded knowledge (GK), was proposed by Lin and Shoham as a general logic for nonmonotonic reasoning. To date, it has been used to embed in it default logic (in the propositional case), autoepistemic logic, Turner's logic of universal causation, and general logic programming under the stable model semantics. Besides showing the generality of GK as a logic for nonmonotonic reasoning, these embeddings shed light on the relationships among these other logics. In this paper, for the first time, we show how the logic of GK can be embedded into disjunctive logic programming in a polynomial but non-modular translation with new variables. The result can then be used to compute the extension/expansion semantics of default logic, autoepistemic logic and Turner's logic of universal causation by disjunctive ASP solvers such as claspD(-2), DLV, GNT and cmodels.

16:00-16:30Coffee Break
16:30-17:10 Session 116A: Automata (CAV)
Location: FH, Hörsaal 1
16:30
From LTL to Deterministic Automata: A Safraless Compositional Approach
SPEAKER: Jan Kretinsky

ABSTRACT. We present a new algorithm to construct a deterministic Rabin automaton for an LTL formula φ. The automaton is the product of a master automaton and an array of slave automata, one for each G-subformula of φ. The slave automaton for Gψ is in charge of recognizing whether FGψ holds. As opposed to standard determinization procedures, the states of all our automata have a clear logical structure, which allows various optimizations to be applied. Our construction subsumes former algorithms for fragments of LTL. Experimental results show an improvement in the sizes of the resulting automata compared to existing methods.

16:50
Symbolic Visibly Pushdown Automata

ABSTRACT. Nested words model data with both linear and hierarchical structure, such as XML documents and program traces. A nested word is a sequence of positions together with a matching relation that connects open tags (calls) with the corresponding close tags (returns). Visibly Pushdown Automata are a restricted class of pushdown automata that process nested words, and have many appealing theoretical properties, such as closure under Boolean operations and decidable equivalence. However, like the classical automata models, they are limited to finite alphabets. This limitation is restrictive for practical applications to both XML processing and program trace analysis, where values for individual symbols are usually drawn from an unbounded domain. With this motivation, we introduce Symbolic Visibly Pushdown Automata (SVPAs) as an executable model for nested words over infinite alphabets. In this model, transitions are labeled with first-order predicates over the input alphabet, analogous to symbolic automata processing strings over infinite alphabets. A key novelty of SVPAs is the use of binary predicates to model relations between open and close tags in a nested word. We show that SVPAs still enjoy the decidability and closure properties of Visibly Pushdown Automata. We use SVPAs to model XML validation policies and program properties that are not naturally expressible with previous formalisms, and provide experimental results on the performance of our implementation.
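The two ingredients, unary guards on symbols from an infinite alphabet and a binary predicate tying each return to its matching call, can be pictured with a toy matcher in Python (a drastic simplification with a single implicit state; the particular guards are our own made-up predicates, not the paper's):

    def accepts(nested_word):
        # Positions are ('call', v), ('return', v) or ('internal', v),
        # with values v drawn from an unbounded alphabet of tag names.
        stack = []
        for tag, v in nested_word:
            if tag == 'call':
                if not v.isidentifier():   # unary guard on the call symbol
                    return False
                stack.append(v)
            elif tag == 'return':
                if not stack:
                    return False
                opened = stack.pop()
                if opened != v:            # binary predicate: return matches call
                    return False
            # internal symbols impose no guard in this toy example
        return not stack                   # every call has its return

    # Well-matched XML-like nesting over unboundedly many tag names:
    print(accepts([('call', 'a'), ('call', 'b1'),
                   ('return', 'b1'), ('return', 'a')]))   # True

In a full SVPA the guards are arbitrary first-order predicates over the alphabet, there are multiple states, and the stack also records the source state; the closure and decidability results concern that general model.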

16:30-18:00 Session 116B: Technical Session: Tabling and the Web (ICLP)
Location: FH, Hörsaal 8
16:30
Pengines: Web Logic Programming Made Easy

ABSTRACT. When developing a (web) interface for a deductive database, functionality required by the client is provided by means of HTTP handlers that wrap the logical data access predicates. These handlers are responsible for converting between client and server data representations and typically include options for paginating results. Designing the web accessible API is difficult because it is hard to predict the exact requirements of clients. Pengines changes this picture. The client provides a Prolog program that selects the required data by accessing the logical API of the server. The pengine infrastructure provides general mechanisms for converting Prolog data and handling Prolog non-determinism. The Pengines library is small (2000 lines of Prolog, 100 lines of JavaScript). It greatly simplifies defining an AJAX-based client for a Prolog program, but also provides non-deterministic RPC between Prolog processes as well as interaction with Prolog engines similar to Paul Tarau's engines. Pengines are available as a standard package for SWI-Prolog 7.

17:00
Incremental Tabling for Knowledge Representation and Reasoning

ABSTRACT. Resolution-based Knowledge Representation and Reasoning (KRR) systems, such as Flora-2, Silk or Ergo, can scale to tens or hundreds of millions of facts, while supporting reasoning that includes HiLog, inheritance, defeasibility theories, and equality theories. These systems handle the termination and complexity issues that arise from the use of these features by heavy use of tabling. In fact, such systems table by default all rules defined by users, unless they are simple facts.

Performing dynamic updates within such systems is nearly impossible unless the tables themselves can be made to react to changes. Incremental tabling as implemented in XSB [Saha06] partially addressed this problem, but the implementation was limited in scope and not always easy to use. In this paper, we describe *automatic incremental tabling*, which at the semantic level supports updates in the 3-valued well-founded semantics, while guaranteeing full consistency of all tabled queries. Automatic incremental tabling also brings significant performance improvements over previous implementations, including lazy incremental tabling and control over the dependency structures used to determine how tables are updated.

17:30
Tabling, Rational Terms, and Coinduction Finally Together!

ABSTRACT. Tabling is a commonly used technique in logic programming for avoiding cyclic behavior of logic programs and enabling more declarative program definitions. Furthermore, tabling often results in improved computation performance. Rational terms are terms with one or more infinite sub-terms but with a finite representation. Rational terms can be generated in Prolog by omitting the occurs check when unifying two terms. Applications of rational terms include definite clause grammars, constraint handling systems, and coinduction. In this paper we report on our extension of the Yap Prolog tabling mechanism to support rational terms. We describe the internal representation of rational terms within the table space and prove its correctness. We use this extension to implement a novel approach to coinduction with the use of tabling. We present the similarities with current coinductive transformations and describe the implementation. In addition, we address some limitations of rational-term support in Prolog compilers and describe an approach to ensure a canonical representation for rational terms.

16:30-17:00 Session 116C: FLoC Inter-Conference Topic: SAT/SMT/QBF (CAV and IJCAR)
Location: FH, Hörsaal 5
16:30
A Gentle Non-Disjoint Combination of Satisfiability Procedures
SPEAKER: unknown

ABSTRACT. A satisfiability problem is very often expressed in a combination of theories, and a very natural approach consists in solving the problem by combining the satisfiability procedures available for the component theories. This is the purpose of the combination method introduced by Nelson and Oppen. However, in its initial presentation, the Nelson-Oppen combination method requires the theories to be signature-disjoint and stably infinite (to guarantee the existence of an infinite model). The notion of a gentle theory has been introduced in the last few years as one solution for going beyond the restriction of stable infiniteness, but in the case of disjoint theories. In this paper, we adapt the notion of gentle theory to the non-disjoint combination of theories sharing only unary predicates (plus constants and the equality). As in the disjoint case, combining two theories, one of them being gentle, requires some minor assumptions on the other one. We show that major classes of theories, i.e., Löwenheim and Bernays-Schönfinkel-Ramsey, satisfy the appropriate notion of gentleness introduced for this particular non-disjoint combination framework.

16:30-18:30 Session 116D: Invited talk and Gödel lecture (LC)
Location: MB, Prechtlsaal
16:30
The problem of a model without collection and without exponentiation

ABSTRACT. IΔ0 is the fragment of first-order arithmetic obtained by restricting the induction scheme to bounded formulas. BΣ1 extends IΔ0 by the collection scheme for bounded formulas, that is by the axioms [∀x<v∃y ψ(x,y)] ⇒ [∃w ∀x<v ∃y<w ψ(x,y)], where ψ(x,y) is bounded (and may contain additional parameters).

It has been known since the seminal work of Parsons and of Paris and Kirby in the 1970s that BΣ1 does not follow from IΔ0, even though it is Π^0_2-conservative over IΔ0. However, all constructions of a model of IΔ0 not satisfying BΣ1 make use of the axiom Exp, which asserts that 2^x is a total function. From the perspective of IΔ0, which does not prove the totality of any function of superpolynomial growth, the totality of exponentiation is a very strong unprovable statement. This led Wilkie and Paris [1] to ask whether IΔ0 + ¬Exp proves BΣ1.

It is generally believed that the answer to Wilkie and Paris's question is negative, and there are various statements from computational complexity theory, in some cases mutually contradictory, known to imply a negative answer. However, an unconditional proof of a negative answer remains elusive.

I plan to survey some facts related to Wilkie and Paris's question, focusing on two recent groups of theorems:

  • the results of the paper [2], which seem to suggest that we are a "small step" away from building a model of IΔ0 + ¬Exp without collection,
  • some new results suggesting that the "small step" will be very hard to take, because there is a complexity-theoretic statement, almost certainly false but possibly not disprovable using present-day methods, which implies that BΣ1 does follow from IΔ0 + ¬Exp.

References:

[1] A. Wilkie and J. Paris, On the existence of end extensions of models of bounded induction, Logic, Methodology, and Philosophy of Science VIII (Moscow 1987), J.E. Fenstad, I.T. Frolov, and R. Hilpinen (eds.), North-Holland, 1989, pp. 143-162.

[2] Z. Adamowicz, L. A. Kołodziejczyk, and J. Paris, Truth definitions without exponentiation and the Σ1 collection scheme, Journal of Symbolic Logic, vol. 77 (2012), no. 2, pp. 649-655.

17:30
Computable structure theory and formulas of special forms
SPEAKER: Julia Knight

ABSTRACT. In computable structure theory, we ask questions about the complexity of structures and classes of structures. For a particular countable structure $\mathcal{M}$, how hard is it to build a copy? Can we do it effectively? How hard is it to describe $\mathcal{M}$, up to isomorphism, distinguishing it from other countable structures? For a class $K$, how hard is it to characterize the class, distinguishing members from non-members? How hard is it to classify the elements of $K$, up to isomorphism? In the lecture, I will describe some results on these questions, obtained by combining ideas from computability, model theory, and descriptive set theory. Of special importance are formulas of special forms.

16:30-17:30 Session 116E: Invited Talk (CSF)
Location: FH, Hörsaal 6
16:30
Towards a Zero-Software Trusted Computing Base for Extensible Systems

ABSTRACT. The construction of reliable and secure software systems is known to be challenging. An important source of problems is the size and complexity of infrastructural software (such as operating systems, virtual machines, device drivers, application servers) one needs to trust to securely run a software application. In other words, the Trusted Computing Base (TCB) for software running on standard infrastructure is unreasonably large. Over the past 5-10 years, researchers have been developing a novel kind of security architecture that addresses this concern. These so-called protected module architectures can run security-critical software modules in an isolated area of the system where even the operating system cannot interfere or tamper with the state of the module. This enables software modules to pick and choose what software to include in their Trusted Computing Base instead of having to trust the entire infrastructure they run on. With Intel's announcement of their support for Software Guard eXtensions (Intel SGX), these security architectures are about to become mainstream, and they offer tremendous opportunities for improving the state of software security. In this talk, we discuss an example design of a protected module architecture, and we show how it can be used as a foundation for building trustworthy software applications. In particular, we discuss how to provide provable security guarantees for software modules on open extensible systems while only assuming the correctness of the hardware.

16:30-17:00 Session 116F: Contributed Talk (LATD)
Location: MB, Festsaal
16:30
Separating intermediate predicate logics of well-founded and dually well-founded structures by monadic sentences

ABSTRACT. We consider intermediate predicate logics defined by fixed well-ordered (or dually well-ordered) linear Kripke frames with constant domains where the order-type of the well-order is strictly smaller than ω^ω. We show that two such logics of different order-type are separated by a first-order sentence using only one monadic predicate symbol. Previous results by Minari, Takano and Ono, as well as the second author, obtained the same separation but relied on the use of predicate symbols of unbounded arity.

16:30-17:00 Session 116G: Contributed Talk (LATD)
Location: MB, Hörsaal 15
16:30
A meta-Logic of multiple-conclusion rules
SPEAKER: Alex Citkin

ABSTRACT. Our goal is to introduce a framework for working with generalized multiple-conclusion rules (in propositional logics) containing asserted and rejected propositions. We construct a meta-logic that gives the syntactic means for manipulating such rules, and we discuss the different flavors of the notion of admissibility that arise due to the presence of multiple conclusions.

16:30-18:30 Session 116H: Contributed Talks: Argumentation 2 (NMR)
Location: EI, EI 9
16:30
Compact Argumentation Frameworks

ABSTRACT. Abstract argumentation frameworks (AFs) are one of the most studied formalisms in AI. In this work, we introduce a certain subclass of AFs which we call compact. Given an extension-based semantics, the corresponding compact AFs are characterized by the feature that each argument of the AF occurs in at least one extension. This not only guarantees a certain notion of fairness; it also makes compact AFs minimal in the sense that no argument can be removed without changing the outcome. We address the following questions in the paper: (1) How are the classes of compact AFs related for different semantics? (2) Under which circumstances can AFs be transformed into equivalent compact ones? (3) Finally, we show that compact AFs are indeed a non-trivial subclass, since the verification problem remains coNP-hard for certain semantics.

17:00
Extension-based Semantics of Abstract Dialectical Frameworks

ABSTRACT. One of the most common tools in abstract argumentation are the argumentation frameworks and their associated semantics. While the framework is used to represent a given problem, the semantics define methods of solving it, i.e. they describe requirements for accepting and rejecting arguments. The basic available structure is the Dung framework, AF for short. It is accompanied by a variety of semantics including grounded, complete, preferred and stable. Although powerful, AFs have their shortcomings, which led to the development of numerous enrichments. Among the most general ones are the abstract dialectical frameworks, also known as the ADFs. They make use of the so-called acceptance conditions to represent arbitrary relations. This level of abstraction brings not only new challenges, but also requires addressing problems inherited from other frameworks. One of the most controversial issues, recognized not only in argumentation, concerns the support cycles. In this paper we introduce a new method to ensure acyclicity of the chosen arguments and present a family of extension-based semantics built on it. We also continue our research on the semantics that permit cycles and fill in the gaps from the previous works. Moreover, we provide ADF versions of the properties known from the Dung setting. Finally, we also introduce a classification of the developed sub-semantics and relate them to the existing labeling-based approaches.

17:30
Credulous and Skeptical Argument Games for Complete Semantics in Conflict Resolution based Argumentation
SPEAKER: Jozef Frtús

ABSTRACT. Argumentation is one of the most popular approaches to defining a non-monotonic formalism, and several argumentation-based semantics have been proposed for defeasible logic programs. Recently, a new approach based on notions of conflict resolution was proposed, however with a declarative semantics only. This paper gives a more procedural counterpart by developing skeptical and credulous argument games for the complete semantics; soundness and completeness theorems are provided for both games. After that, the distribution of a defeasible logic program into several contexts is investigated, and both argument games are adapted for multi-context systems.

18:00
On the Relative Expressiveness of Argumentation Frameworks, Normal Logic Programs and Abstract Dialectical Frameworks
SPEAKER: Hannes Strass

ABSTRACT. We analyse the expressiveness of the two-valued semantics of abstract argumentation frameworks, normal logic programs and abstract dialectical frameworks. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional signature using only atoms from that signature. While the computational complexity of the two-valued model existence problem for all these languages is (almost) the same, we show that the languages form a neat hierarchy with respect to their expressiveness.

16:50-18:30 Session 117: Contributed Talks (DL)
Location: EI, EI 7
16:50
Pay-as-you-go Ontology Query Answering Using a Datalog Reasoner
SPEAKER: unknown

ABSTRACT. We describe a hybrid approach to conjunctive query answering over OWL 2 ontologies that combines a datalog reasoner with a fully-fledged OWL 2 reasoner in order to provide scalable “pay as you go” performance. Our approach delegates the bulk of the computation to the highly scalable datalog engine and resorts to expensive OWL 2 reasoning only as necessary to fully answer the query. We have implemented a prototype system that uses RDFox as a datalog reasoner, and HermiT as an OWL 2 reasoner. Our evaluation over both benchmark and realistic ontologies and datasets suggests the feasibility of our approach.

17:15
Hybrid Query Answering Over DL Ontologies
SPEAKER: unknown

ABSTRACT. Query answering over SROIQ TBoxes is an important reasoning task for many modern applications. Unfortunately, due to its high computational complexity, SROIQ reasoners are still not able to cope with datasets containing billions of facts. Consequently, application developers often employ provably scalable systems which support only a fragment of SROIQ and which are, hence, most likely incomplete for the given input. However, this notion of completeness is too coarse, since it implies that there exist *some* query and *some* dataset for which these systems would miss answers. Nevertheless, there might still be a large number of user queries for which they can compute all the right answers even over SROIQ TBoxes. In the current paper, we investigate whether, given a (ground) query Q over a SROIQ TBox T and a system ans, it is possible to identify efficiently whether ans is complete for Q, T, and every dataset. We give sufficient conditions for (in)completeness and present a hybrid query answering algorithm which uses ans when it is complete, and otherwise falls back to a fully-fledged SROIQ reasoner. Even in the latter case, our algorithm still exploits ans as much as possible in order to reduce the search space of the SROIQ reasoner. Finally, we have implemented our approach using a concrete system ans and a SROIQ reasoner, obtaining encouraging results.

17:40
Optimised Absorption for Expressive Description Logics

ABSTRACT. Absorption is a well-known and very important optimisation technique that is widely used to reduce non-determinism in tableau-based reasoning systems. In this paper, we present the partial absorption algorithm, which allows parts of concepts of very expressive Description Logics to be absorbed and thus further reduces non-determinism. In addition, we present several extensions of the basic algorithm which can, for example, be used to improve the handling of datatypes. The described absorption techniques are used in the reasoning system Konclude, where they are essential for Konclude's performance.

18:05
Planning Problems for Graph Structured Data in Description Logics

ABSTRACT. We consider the setting of graph-structured data that evolve as a result of operations carried out by users or applications. We rely on a simple yet powerful action language in which actions are finite sequences of insertions and deletions of nodes and labels, and on the use of Description Logics for describing integrity constraints and (partial) states of the data. For this setting, we study variants of planning problems, which range from ensuring the satisfaction of a given set of integrity constraints after executing a given sequence of actions, to deciding the existence of a sequence of actions that takes the data to an (un)desirable state, starting either from a specific data instance or from an incomplete description of it. For these problems we establish (un)decidability results and study the computational complexity, also considering various restrictions.

17:10-18:00 Session 119A: Invited Talk (CAV)
Location: FH, Hörsaal 1
17:10
Designing and verifying molecular circuits and systems made of DNA
SPEAKER: Erik Winfree

ABSTRACT. Inspired by the information processing core of biological organisms and its ability to fabricate intricate machinery from the molecular scale up to the macroscopic scale, research in synthetic biology, molecular programming, and nucleic acid nanotechnology aims to create information-based chemical systems that carry out human-defined molecular programs that input, output, and manipulate molecules and molecular structures. For chemistry to become the next information technology substrate, we will need improved tools for designing, simulating, and analyzing complex molecular circuits and systems. Using DNA nanotechnology as a model system, I will discuss how programming languages can be devised for specifying molecular systems at a high level, how compilers can translate such specifications into concrete molecular implementations, how both high-level and low-level specifications can be simulated and verified according to behavioral logic and the underlying biophysics of molecular interactions, and how Bayesian analysis techniques can be used to understand and predict the behavior of experimental systems that, at this point, still inevitably contain many ill-characterized components and interactions.

17:30-18:30 Session 120: FLoC Inter-Conference Topic: Security. Information Flow 1 (CAV and CSF)
Location: FH, Hörsaal 6
17:30
On Dynamic Flow-sensitive Floating-Label Systems
SPEAKER: unknown

ABSTRACT. Flow-sensitive analysis for information-flow control (IFC) allows data structures to have mutable security labels, i.e., labels that can change over the course of the computation. This feature is beneficial for several reasons: it often boosts permissiveness, reduces the burden of explicit annotations, and enables reuse of single-threaded data structures. However, in a purely dynamic setting, mutable labels can expose a covert channel capable of leaking secrets at a high rate. We present an extension for LIO, a language-level floating-label system, that safely handles flow-sensitive references. The key insight to safely manipulating the label of a reference is to consider not only the label on the data stored in the reference, i.e., the reference label, but also the label on the reference label itself. Taking this into consideration, we provide an upgrade primitive that can be used to change the label of a reference when deemed safe. To eliminate the burden of determining when it is safe to upgrade a reference, we additionally provide a mechanism for automatic upgrades. Our approach naturally extends to a concurrent setting, not previously considered by flow-sensitive systems. For both our sequential and concurrent calculi we prove non-interference by embedding the flow-sensitive system into the flow-insensitive LIO calculus; the embedding itself is a surprising result.
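The "label on the label" idea can be rendered in a few lines over a two-point lattice (a toy Python sketch of the intuition; LIO itself is a Haskell library and its actual API differs):

    LOW, HIGH = 0, 1   # two-point security lattice; "flows to" is <=

    class FSRef:
        def __init__(self, value, data_label, label_label):
            self.value = value
            self.data_label = data_label      # protects the stored data
            self.label_label = label_label    # protects the data label itself

    def upgrade(ref, new_label, pc):
        # Changing the data label is itself observable at the level of
        # the label, so the current context pc must be allowed to write
        # at label_label, and labels may only go up.
        if pc <= ref.label_label and ref.data_label <= new_label:
            ref.data_label = new_label
        else:
            raise RuntimeError("unsafe upgrade")

    r = FSRef(42, LOW, LOW)
    upgrade(r, HIGH, pc=LOW)   # allowed: a public context raises the label

Guarding upgrades by the label on the label is what closes the covert channel that purely flow-sensitive monitors otherwise expose.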

18:00
Noninterference under Weak Memory Models
SPEAKER: unknown

ABSTRACT. Research on information flow security for concurrent programs usually assumes sequential consistency although modern multi-core processors often support weaker consistency guarantees. In this article, we clarify the impact that relaxations of sequential consistency have on information flow security. We consider four memory models and prove for each of them that information flow security under this model does not imply information flow security in any of the other models. These results suggest that research on security needs to pay more attention to the consistency guarantees provided by contemporary hardware. The other main technical contribution of this article is a program transformation that soundly enforces information flow security under different memory models. This program transformation is significantly less restrictive than a transformation that first establishes sequential consistency and then applies a traditional information flow analysis for concurrent programs.

19:00-21:30 Session 122: VSL Reception 2 (VSL)
Location: University of Vienna, Arkadenhof
22:00-23:59 Session 123: VSL Student Reception 2 (VSL)
Location: Säulenhalle (Volksgarten)