
10:45-13:00 Session 56A: Models and Expressiveness (CSL-LICS)
Location: FH, Hörsaal 1
Graph Logics with Rational Relations: The Role of Word Combinatorics

ABSTRACT. Graph databases make use of logics that combine traditional first-order features with navigation on paths, in the same way logics for model checking do. However, modern applications of graph databases impose a new requirement on the expressiveness of the logics: they need to compare labels of paths based on word relations (such as prefix, subword, or subsequence). This has led to the study of logics that extend basic graph languages with features for comparing labels of paths based on regular relations, or on the strictly more powerful rational relations. The evaluation problem for the former logic is decidable (and even tractable in data complexity), but already extending this logic with such a common rational relation as subword or suffix renders evaluation undecidable.

In practice, however, it is rare to have the need for such powerful logics. Therefore, it is more realistic to study the complexity of less expressive logics that still allow comparing paths based on practically motivated rational relations. Here we concentrate on the most basic such languages, which extend graph pattern logics with path comparisons based only on suffix, subword or subsequence. We pinpoint the complexity of evaluation for each one of these logics, which shows that all of them are decidable in elementary time (NEXPTIME). Furthermore, the extension with suffix is even tractable in data complexity (but the other two are not). To obtain our results we establish a link between the evaluation problem for graph logics and two important problems in word combinatorics: word equations with regular constraints and square shuffling.
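As a concrete aside (our illustration, not part of the paper), the three word relations that these logics compare path labels by are simple predicates on strings:

```python
# Sketch of the word relations named in the abstract; "subword" here means
# a contiguous factor, as distinguished from a (scattered) subsequence.
def is_suffix(u, v):
    """u is a suffix of v."""
    return v.endswith(u)

def is_subword(u, v):
    """u occurs in v as a contiguous block."""
    return u in v

def is_subsequence(u, v):
    """u embeds into v preserving order, possibly with gaps."""
    it = iter(v)
    return all(c in it for c in u)

# "aa" is a subsequence of "abba" but not a subword of it.
assert is_subsequence("aa", "abba") and not is_subword("aa", "abba")
```

The last assertion shows why the abstract treats subword and subsequence as distinct relations.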

Model Checking Existential Logic on Partially Ordered Sets
SPEAKER: Simone Bova

ABSTRACT. We study the problem of checking whether an existential sentence (that is, a first-order sentence in prefix form built using existential quantifiers and all Boolean connectives) is true in a finite partially ordered set (in short, a poset). A poset is a reflexive, antisymmetric, and transitive digraph. The problem encompasses the fundamental embedding problem of finding an isomorphic copy of a poset as an induced substructure of another poset.

The expression complexity of existential logic is NP-hard on posets; thus we investigate structural properties of posets yielding conditions for fixed-parameter tractability when the problem is parameterized by the sentence. We identify width as a central structural property (the width of a poset is the maximum size of a subset of pairwise incomparable elements); our main algorithmic result is that model checking existential logic on classes of finite posets of bounded width is fixed-parameter tractable. We observe a similar phenomenon in classical complexity, where we prove that the isomorphism problem is polynomial-time tractable on classes of posets of bounded width; this settles an open problem in order theory.

We surround our main algorithmic result with complexity results on less restricted, natural neighboring classes of finite posets, establishing its tightness in this sense. We also relate our work with (and demonstrate its independence of) fundamental fixed-parameter tractability results for model checking on digraphs of bounded degree and bounded clique-width.

Infinite-State Energy Games
SPEAKER: unknown

ABSTRACT. Energy games are a well-studied class of two-player turn-based games on a finite graph, where transitions are labeled with integer vectors representing changes in a multidimensional resource (the energy). One player tries to keep the cumulative changes non-negative in every component, while the other tries to frustrate this.

We consider a generalization of energy games in which the finite game graph is replaced by an infinite game graph derived from a pushdown automaton (modelling recursion) or from its subclass, a one-counter machine.

Our main result is that energy games are decidable in the case where the game graph is induced by a one-counter machine and the energy is one-dimensional. On the other hand, every generalized energy game is undecidable: Energy games on one-counter machines with a 2-dimensional energy are undecidable, and energy games on pushdown automata are undecidable even if the energy is one-dimensional.

Furthermore, we show that energy games and simulation games are inter-reducible, and thus we additionally obtain several new (un)decidability results for the problem of checking simulation preorder between pushdown automata and vector addition systems.

Weak MSO: Automata and Expressiveness Modulo Bisimilarity

ABSTRACT. We prove that the bisimulation-invariant fragment of weak monadic second-order logic (WMSO) is equivalent to the fragment of the modal $\mu$-calculus where the application of the least fixpoint operator $\mu x.\varphi$ is restricted to formulas $\varphi$ that are continuous in $x$. Our proof is automata-theoretic in nature; in particular, we introduce a class of automata characterizing the expressive power of WMSO over tree models of arbitrary branching degree. The transition map of these automata is defined in terms of a logic extending first-order logic with a generalized quantifier $\exists^\infty$, where $\exists^\infty x.\varphi$ means that there are infinitely many objects satisfying $\varphi$. An important part of our work consists of a model-theoretic analysis of this extended logic.

10:45-13:00 Session 56B: Proofs and Verification (CSL-LICS)
Location: FH, Hörsaal 5
Axioms and Decidability for Type Isomorphism in the Presence of Sums
SPEAKER: Danko Ilik

ABSTRACT. We consider the problem of characterizing isomorphisms of types, or equivalently the constructive cardinality of sets, in the simultaneous presence of disjoint unions, Cartesian products, and exponentials. Mostly relying on classic results concerning polynomials with exponentiation that have not previously been applied in this context, we derive: that the usual finite axiomatization known as High School Identities (HSI) is complete for a significant subclass of types; that for that subclass it is decidable whether two types are isomorphic; that, for the whole set of types, there exists a complete recursive extension of the axioms of HSI; and that, for the whole set of types, the question whether two types are isomorphic is decidable, at least when base types are interpreted by finite sets. We also point out certain related open problems.

A functional functional interpretation

ABSTRACT. In this paper, we present a modern reformulation of the Dialectica interpretation based on the linearized version of De Paiva. Unlike Gödel's original translation, which maps HA into System T, our presentation applies to untyped λ-terms and enjoys nicer proof-theoretical properties.

In the Curry-Howard perspective, we show that the computational behaviour of this translation can be accurately described by the explicit stack manipulation of the Krivine abstract machine, thus giving it a direct-style operational explanation.

Finally, we give direct evidence for the relevance of our presentation by showing that it applies to the dependently-typed system CCω almost without adaptation. This answers the question of the validity of Dialectica-like constructions in a dependent setting.

Symmetric Normalisation for Intuitionistic Logic

ABSTRACT. We present two proof systems for implication-only intuitionistic logic in the calculus of structures. The first system is a direct adaptation of the sequent calculus to the deep inference setting, and we provide a procedure for cut elimination, similar to the one from the sequent calculus but using non-local rewriting. The second system is the symmetric completion of the first, as normally given in deep inference for logics with De Morgan duality: each inference rule has a dual rule, cut being dual to the identity axiom. We prove a generalisation of cut elimination, which we call symmetric normalisation, where all rules dual to standard ones are permuted up in the derivation. The result is a decomposition theorem having cut elimination and interpolation as corollaries.

Satisfiability Modulo Counting: A New Approach for Analyzing Privacy Properties

ABSTRACT. Applications increasingly derive functionality from sensitive personal information, forcing developers who wish to preserve some notion of privacy or confidentiality to reason about partial information leakage. New definitions of privacy and confidentiality, such as differential privacy, address this by offering precise statements of acceptable disclosure that are useful in common settings. However, several recently published accounts of flawed implementations have surfaced, highlighting the need for verification techniques.

In this paper, we pose the problem of model-counting satisfiability, and show that a diverse set of privacy and confidentiality verification problems can be reduced to instances of it. In this problem, constraints are placed on the outcome of model-counting operations, which occur over formulas containing parameters. The object is to find an assignment to the parameters that satisfies the model-counting constraints, or to demonstrate unsatisfiability.

We present a logic for expressing these problems, and a theory that defines the semantics of this logic. We develop an abstract decision procedure for model-counting satisfiability problems fashioned after CDCL-based SMT procedures, encapsulating functionality specific to the underlying logic in which counting occurs in a small set of black-box routines similar to those required of theory solvers in SMT. We describe an implementation of this procedure for linear integer arithmetic, as well as an effective strategy for generating lemmas. We conclude by applying our decision procedure to the verification of privacy properties over programs taken from a well-known privacy-preserving compiler, demonstrating its ability to find flaws or prove correctness, sometimes in a matter of seconds.

10:45-13:00 Session 56C: Mathematics (ITP)
Location: FH, Hörsaal 8
A Computer-Algebra-Based Formal Proof of the Irrationality of ζ(3)

ABSTRACT. This paper describes the formal verification of an irrationality proof of ζ(3), the value of the Riemann zeta function at 3, using the Coq proof assistant. This result was first proved by Apéry in 1978, and the proof we have formalized follows the path of his original presentation. The crux of this proof is to establish that certain sequences satisfy a common recurrence. We formally prove this result by an a posteriori verification of calculations performed by computer algebra algorithms in a Maple session. The rest of the proof combines arithmetical ingredients and some asymptotic analysis, which we conduct by extending the Mathematical Components libraries.

A Coq Formalization of Finitely Presented Modules

ABSTRACT. This paper presents a formalization of constructive module theory in the intuitionistic type theory of Coq. We build an abstraction layer on top of matrix encodings in order to represent finitely presented modules, and obtain clean definitions with short proofs justifying that finitely presented modules form an abelian category. The goal is to use this as a first step to compute certified topological invariants, like homology groups and Betti numbers.

Formal Verification of Optical Quantum Flip Gate
SPEAKER: unknown

ABSTRACT. Quantum computers promise to solve hard computational problems efficiently, notably NP problems. In this paper, we propose to tackle the formal verification of quantum circuits using theorem proving. In particular, we focus on the verification of quantum computing based on coherent light, typically light produced by laser sources. We formally verify the behavior of the quantum flip gate in HOL Light: we prove that it can flip a zero-quantum-bit to a one-quantum-bit and vice versa. To this end, we model two optical devices: the beam splitter and the phase conjugating mirror. This requires the formalization of some fundamental mathematics, like the exponentiation of linear transformations.

On the Formalization of Z-Transform in HOL

ABSTRACT. System analysis based on difference or recurrence equations is the most fundamental technique for analyzing biological, electronic, control and signal processing systems. The Z-transform is one of the most popular tools for solving such difference equations. In this paper, we present a formalization of the Z-transform to extend formal linear system analysis capabilities using theorem proving. In particular, we use the differential, transcendental and topological theories of multivariate calculus to formally define the Z-transform in higher-order logic and to reason about the correctness of its properties, such as linearity, time shifting and scaling in the z-domain. To illustrate the practical effectiveness of the proposed formalization, we present the formal analysis of an infinite impulse response (IIR) digital signal processing filter.
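As an informal aside (standard textbook definitions, not the HOL formalization itself), for a finite causal sequence the Z-transform reduces to the polynomial X(z) = Σ_n x[n] z^(-n), and the linearity and time-shifting properties mentioned in the abstract can be checked numerically:

```python
# Z-transform of a finite causal sequence, evaluated at a concrete z.
def ztransform(x, z):
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

z = 1.5
x, y = [1.0, 2.0, 3.0], [4.0, 0.0, -1.0]

# Linearity: Z{a*x + b*y} = a*X(z) + b*Y(z)
lhs = ztransform([2 * a - 3 * b for a, b in zip(x, y)], z)
assert abs(lhs - (2 * ztransform(x, z) - 3 * ztransform(y, z))) < 1e-9

# Time shifting: delaying x by one sample multiplies X(z) by z**(-1)
assert abs(ztransform([0.0] + x, z) - ztransform(x, z) / z) < 1e-9
```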

Mechanical Certification of Loop Pipelining Transformations: A Preview
SPEAKER: unknown

ABSTRACT. We describe our ongoing effort using theorem proving to certify loop pipelining, a critical and complex transformation employed by behavioral synthesis. Our approach is mechanized in the ACL2 theorem prover. We discuss how our approach connects to a larger framework for certifying synthesized hardware through a combination of automated and interactive verification techniques. We also discuss some of the formalization and proof challenges and our early attempts at addressing them.

10:45-13:00 Session 56D (RTA-TLCA)
Location: Informatikhörsaal
Cut Admissibility by Saturation

ABSTRACT. Deduction modulo is a framework in which theories are integrated into proof systems such as natural deduction or sequent calculus by presenting them using rewriting rules. When only terms are rewritten, cut admissibility in those systems is equivalent to the confluence of the rewriting system, as shown by Dowek (RTA 2003, LNCS 2706). This is no longer true when considering rewriting rules involving propositions. In this paper, we show that, in the same way that it is possible to recover confluence using Knuth-Bendix completion, one can regain cut admissibility in the general case using standard saturation techniques. This work relies on a view of proposition rewriting rules as oriented clauses, just as term rewriting rules can be seen as oriented equations. This also leads us to introduce an extension of deduction modulo with *conditional* term rewriting rules.

Predicate Abstraction of Rewrite Theories
SPEAKER: Jose Meseguer

ABSTRACT. For an infinite-state concurrent system S with a set AP of state predicates, its predicate abstraction defines a finite-state system whose states are subsets of AP, and its transitions s --> s' are witnessed by concrete transitions between states in S satisfying the respective sets of predicates s and s'. Since it is not always possible to find such witnesses, an over-approximation adding extra transitions is often used. For systems S described by formal specifications, predicate abstractions are typically built using various automated deduction techniques. This paper presents a new method--based on rewriting, semantic unification, and variant narrowing--to automatically generate a predicate abstraction when the formal specification of S is given by a conditional rewrite theory. The method is illustrated with concrete examples showing that it naturally supports abstraction refinement and is quite accurate, i.e., it can produce abstractions not needing over-approximations.

All-Path Reachability Logic
SPEAKER: unknown

ABSTRACT. This paper presents a language-independent proof system for reachability properties of programs written in non-deterministic (e.g. concurrent) languages, referred to as *all-path reachability logic*. It derives partial-correctness properties with all-path semantics (a state satisfying a given precondition reaches states satisfying a given postcondition on all terminating execution paths). The proof system takes as axioms any unconditional operational semantics, and is sound (partially correct) and (relatively) complete, independent of the object language; the soundness has also been mechanized (Coq). This approach is implemented in a tool for semantics-based verification as part of the K framework.

Construction of retractile proof structures

ABSTRACT. In this talk we present a paradigm of focusing proof search based on an incremental construction of ''correct'' (i.e., sequentializable) proof structures of the pure (unit-free) multiplicative and additive fragment of linear logic (J.-Y. Girard, 1987). The correctness of proof construction steps (or expansion steps) is ensured by means of a system of graph retraction rules; this graph rewriting system is shown to be convergent, that is, terminating and confluent. The proposed proof construction follows a specific, indeed ''parsimonious'', retraction strategy that, at each expansion attempt, allows one to take into account (abstract) graphs that are ''smaller'' (w.r.t. size) than the starting proof structures.

10:45-11:55 Session 56E: Applications (SAT)
Location: FH, Hörsaal 6
A SAT Attack on the Erdos Discrepancy Conjecture
SPEAKER: unknown

ABSTRACT. In the 1930s Paul Erdos conjectured that for any positive integer C, in any infinite ±1 sequence (x_n) there exists a subsequence x_d, x_{2d}, ..., x_{kd}, for some positive integers k and d, such that |x_d + x_{2d} + ... + x_{kd}| > C. The conjecture has been referred to as one of the major open problems in combinatorial number theory and discrepancy theory. For the particular case of C=1 a human proof of the conjecture exists; for C=2 a bespoke computer program had generated sequences of length 1124 having discrepancy 2, but the status of the conjecture remained open even for such a small bound. We show that by encoding the problem into Boolean satisfiability and applying state-of-the-art SAT solvers, one can obtain a sequence of length 1160 with discrepancy 2 and a proof of the Erdos discrepancy conjecture for C=2, showing that no sequence of length 1161 and discrepancy 2 exists. We also present our partial results for the case of C=3.
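For readers who wish to experiment, the discrepancy of a finite ±1 sequence as defined above can be checked by brute force; this sketch is our illustration, not the authors' SAT encoding:

```python
def discrepancy(xs):
    """Max over d and k of |x_d + x_{2d} + ... + x_{kd}|, with xs 1-indexed."""
    n, best = len(xs), 0
    for d in range(1, n + 1):
        s = 0
        for i in range(d, n + 1, d):
            s += xs[i - 1]
            best = max(best, abs(s))
    return best

# A longest sequence of discrepancy 1 has length 11 (the human-proved C=1 case).
assert discrepancy([1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1]) == 1
```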

Dominant Controllability Check Using QBF-Solver and Netlist Optimizer
SPEAKER: unknown

ABSTRACT. This paper presents an application of formal methods to the verification of power management modules in hardware designs. It identifies a distinct property of power management modules and names it Dominant Controllability. This is a property of a netlist node and a subset of the inputs. The property holds if there exists an assignment to the subset of the inputs that sets the node to 0/1 regardless of the values of the rest of the inputs. Verification of power management modules in recent CPU and GPU designs includes hundreds of such properties. Dominant Controllability is different from controllability or random controllability, which require only that the node is set to 0/1 by some assignment or random assignment to the inputs. The new application verifies Dominant Controllability using two formal methods: a QBF-solver and a netlist optimizer. Each method can be used independently, or the two can be combined by a third algorithm that heuristically selects a method based on their performance in a specific design. The experimental results show that using the QBF-solver is significantly faster when Dominant Controllability fails to hold (there is a bug in the design). Dominant Controllability fails when no assignment sets the node to 0/1, or when each such assignment is not dominant over other assignments to the rest of the inputs. In this case the QBF-solver is faster because it concludes that Dominant Controllability fails without enumerating the large number of assignments. If the QBF-solver concludes that Dominant Controllability holds, the netlist optimizer is used to verify the result, at linear cost. Our experimental results show that even though the netlist optimizer is incomplete, it is often successful when the power management module is very simple. The results also show that the third algorithm, named Alternating, selects the faster method successfully.

Fast DQBF Refutation

ABSTRACT. Dependency Quantified Boolean Formulas (DQBF) extend QBF with Henkin quantifiers, which allow for non-linear dependencies between the quantifiers. This extension is useful in verification problems for incomplete designs such as the partial equivalence checking problem (PEC), where a partial circuit, with some parts left open as ``black boxes'', is compared against a full circuit. The PEC problem is to decide whether the black boxes in the partial circuit can be filled in such a way that the two circuits become equivalent, while respecting that each black box only observes the subset of the signals that are designated as its input. We present a new algorithm that efficiently refutes unsatisfiable DQBF formulas. The algorithm detects situations in which already a subset of the possible assignments of the universally quantified variables suffices to rule out a satisfying assignment of the existentially quantified variables. Our experimental evaluation on PEC benchmarks shows that the new algorithm is a significant improvement both over approximative QBF-based methods, where our results are much more accurate, and over precise methods based on variable elimination, where the new algorithm scales much better in the number of Henkin quantifiers.

10:45-12:45 Session 56F: Invited Talk (Urquhart) and Tutorial (Marra) (LATD)
Location: MB, Festsaal
Relevance Logic: Problems Open and Closed

ABSTRACT. I discuss a collection of problems in relevance logic.

The main problems discussed are: the decidability of the positive semilattice system, the decidability of the fragments of R in a restricted number of variables, and the complexity of the decision problem for the implicational fragment of R.

Some related problems are discussed along the way.

Tutorial 1/2: The more, the less, and the much more: An introduction to Lukasiewicz logic as a logic of vague propositions, and to its applications

ABSTRACT. In the first talk of this tutorial I offer an introduction to Lukasiewicz propositional logic that differs from the standard ones in that it does not start from real-valued valuations as a basis for the semantical definition of the system. Rather, I show how a necessarily informal but rigorous analysis of the semantics of certain vague predicates naturally leads to axiomatisations of Lukasiewicz logic. It is then the deductive system itself, now motivated by the intended semantics in terms of vagueness, that inescapably leads to magnitudes — the real numbers or their non-Archimedean

11:00-13:00 Session 57: Tutorial and invited talk (LC)
Location: MB, Prechtlsaal
Tutorial on Strategic and Extensive Games 3
SPEAKER: Krzysztof Apt
The Birth of Semantic Entailment

ABSTRACT. The relation of semantic entailment, i.e. of a conclusion’s being true on every model of its premises, currently plays a central role in logic, and is arguably the canonical entailment-relation in most contexts. But it wasn’t always this way; the relation doesn’t come into its own until shortly before its starring role in the completeness theorem for first-order logic. This talk investigates the development of the notion of model from the mid-19th century to the early 20th, and the parallel emergence of logic’s concern with the relation of semantic entailment. We will be especially interested in clarifying some ways in which the emergence of the modern conceptions of model and of entailment are tied to a changing view of the nature of axiomatic foundations.

08:45-10:15 Session 57: VSL Keynote Talk (VSL)
Location: EI, EI 7 + EI 9, EI 10 + FH, Hörsaal 1
VSL Keynote Talk: The theory and applications of o-minimal structures
SPEAKER: Alex Wilkie

ABSTRACT. This is a talk in the branch of logic known as model theory, more precisely, in o-minimality. The first example of an o-minimal structure is the ordered algebraic structure on the set of real numbers and I shall focus on expansions of this structure. Being o-minimal means that the first order definable sets in the structure do not exhibit wild phenomena (this will be made precise). After discussing some basic theory of such structures I shall present some recent applications to diophantine geometry.

10:15-10:45 Coffee Break
12:00-13:00 Session 58: Structure (SAT)
Location: FH, Hörsaal 6
Variable Dependencies and Q-Resolution

ABSTRACT. We propose Q(D)-resolution, a proof system for Quantified Boolean Formulas. Q(D)-resolution is a generalization of Q-resolution parameterized by a dependency scheme D. This system is motivated by the generalization of the QDPLL algorithm using dependency schemes implemented in the solver DepQBF. We prove soundness of Q(D)-resolution for a dependency scheme D that is strictly more general than the standard dependency scheme; the latter is currently used by DepQBF. This result is obtained by proving correctness of an algorithm that transforms Q(D)-resolution refutations into Q-resolution refutations and could be of independent practical interest. We also give an alternative characterization of resolution-path dependencies in terms of directed walks in a formula's implication graph which admits an algorithmically more advantageous treatment.

Impact of Community Structure on SAT Solver Performance
SPEAKER: unknown

ABSTRACT. Modern CDCL SAT solvers routinely solve very large industrial SAT instances in relatively short periods of time. It is clear that these solvers somehow exploit the structure of real-world instances. However, to date there have been few results that precisely characterise this structure. In this paper, we provide evidence that the community structure of real-world SAT instances is correlated with the running time of CDCL SAT solvers. It has been known for some time that real-world SAT instances, viewed as graphs, have natural communities in them. A community is a sub-graph of the graph of a SAT instance, such that this sub-graph has more internal edges than edges going to the rest of the graph. The community structure of a graph is often characterised by a quality metric called Q. Intuitively, a graph with high-quality community structure (high Q) is easily separable into smaller communities, while one with low Q is not. We provide three results based on empirical data which show that the community structure of real-world industrial instances is a better predictor of the running time of CDCL solvers than other commonly considered factors, such as the numbers of variables and clauses. First, we show that there is a strong correlation between the Q value and the Literal Block Distance metric of quality of conflict clauses used in clause-deletion policies in Glucose-like solvers. Second, using regression analysis, we show that the number of communities and the Q value of the graph of real-world SAT instances are more predictive of the running time of CDCL solvers than traditional metrics like the number of variables or clauses. Finally, we show that randomly-generated SAT instances with 0.05 ≤ Q ≤ 0.13 are dramatically harder to solve for CDCL solvers than otherwise.
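The abstract leaves the Q metric unspecified; the standard choice in this literature is Newman-style modularity, sketched here under that assumption:

```python
# Newman modularity of an edge list under a node -> community mapping:
# Q = sum over communities c of (e_c/m - (d_c/(2m))**2), with m the number
# of edges, e_c the edges internal to c, and d_c the total degree of c.
def modularity(edges, community):
    m = len(edges)
    e, d = {}, {}
    for u, v in edges:
        d[community[u]] = d.get(community[u], 0) + 1
        d[community[v]] = d.get(community[v], 0) + 1
        if community[u] == community[v]:
            e[community[u]] = e.get(community[u], 0) + 1
    return sum(e.get(c, 0) / m - (dc / (2 * m)) ** 2 for c, dc in d.items())

# Two triangles joined by a single bridge separate cleanly: Q = 5/14.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
community = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
assert abs(modularity(edges, community) - 5 / 14) < 1e-9
```

A high Q, as here, means most edges fall inside communities rather than between them, which is the "easily separable" intuition the abstract describes.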

13:00-14:30 Lunch Break
14:30-16:00 Session 59A: Graphs and Logic (CSL-LICS)
Location: FH, Hörsaal 1
On Hanf-equivalence and the number of embeddings of small induced subgraphs
SPEAKER: unknown

ABSTRACT. Two graphs are Hanf-equivalent with respect to radius r if there is a bijection between their vertex sets which preserves the isomorphism types of the vertices' neighbourhoods of radius r. For r=1 this means that the graphs have the same degree sequence.
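The r=1 case just mentioned amounts to comparing degree sequences; a minimal sketch (our illustration, not the paper's machinery):

```python
# Hanf-equivalence with radius 1: the multisets of vertex degrees must agree.
def degree_sequence(adj):
    """adj maps each vertex to its set of neighbours."""
    return sorted(len(nbrs) for nbrs in adj.values())

# A 6-cycle and two disjoint triangles are both 2-regular, hence
# Hanf-equivalent with respect to radius 1, though not isomorphic.
C6 = {0: {1, 5}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 0}}
TT = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
assert degree_sequence(C6) == degree_sequence(TT) == [2] * 6
```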

In this paper we relate Hanf-equivalence to the graph-theoretical concept of subgraph equivalence. To make this concept applicable to graphs that are not necessarily connected, we first generalise the notion of the radius of a connected graph to general graphs in a suitable way, which we call the generalised radius. We say that two graphs G and H are subgraph-equivalent up to generalised radius r if for all graphs S of generalised radius r, the number of induced subgraphs isomorphic to S in G is the same as the number of such subgraphs in H.

We prove that Hanf-equivalence with respect to radius r is equivalent to subgraph-equivalence up to generalised radius r, thereby relating the purely logical and the graph-theoretical concepts in a very strong way.

The notion of subgraph-equivalence up to order s is defined accordingly, where all graphs S of order at most s are taken into account. As a corollary we obtain that Hanf-equivalence with respect to radius r implies subgraph-equivalence up to order s, provided that r is of size at least 3s/4. In particular, this implies that two graphs which are Hanf-equivalent with respect to radius r satisfy the same purely existential first-order sentences of quantifier rank r.

One Hierarchy Spawns Another: Graph Deconstructions and the Complexity Classification of Conjunctive Queries
SPEAKER: unknown

ABSTRACT. We study the problem of conjunctive query evaluation over classes of queries; this problem is formulated here as the relational homomorphism problem over a class of structures A, wherein each instance must be a pair of structures such that the first structure is an element of A.

We present a comprehensive complexity classification of these problems, which strongly links graph-theoretic properties of A to the complexity of the corresponding homomorphism problem. In particular, we define a binary relation on graph classes and completely describe the resulting hierarchy given by this relation. This binary relation is defined in terms of a notion which we call graph deconstruction and which is a variant of the well-known notion of tree decomposition. We then use this graph hierarchy to infer a complexity hierarchy of homomorphism problems which is comprehensive up to a computationally very weak notion of reduction, namely, a parameterized version of quantifier-free reductions. In doing so, we obtain a significantly refined complexity classification of homomorphism problems, as well as a unifying, modular, and conceptually clean treatment of existing complexity classifications.

We then present and develop the theory of Ehrenfeucht-Fra\"{i}ss\'{e}-style pebble games which solve the homomorphism problems where the cores of the structures in A have bounded tree depth; as a fruit of this development, we characterize the homomorphism problems solvable in logarithmic space. Finally, we use our framework to classify the complexity of model checking existential sentences having bounded quantifier rank.

The Tractability Frontier of Graph-Like First-Order Query Sets
SPEAKER: Hubie Chen

ABSTRACT. We study first-order model checking, by which we refer to the problem of deciding whether or not a given first-order sentence is satisfied by a given finite structure. In particular, we aim to understand on which sets of sentences this problem is tractable, in the sense of parameterized complexity theory. To this end, we define the notion of a graph-like sentence set, whose definition is inspired by previous work on first-order model checking in which the permitted connectives and quantifiers were restricted. Our main theorem is the complete tractability classification of such graph-like sentence sets, which is (to our knowledge) the first complexity classification theorem concerning a class of sentences that has no restriction on the connectives and quantifiers. To present and prove our classification, we introduce and develop a novel complexity-theoretic framework which is built on parameterized complexity and includes new notions of reduction.

14:30-16:00 Session 59B: Types and Specifications (CSL-LICS)
Location: FH, Hörsaal 5
System F with Coercion Constraints
SPEAKER: Julien Cretin

ABSTRACT. We present a second-order calculus with coercion constraints that generalizes our previous extension of System F with parametric coercion abstractions by allowing multiple but simultaneous type and coercion abstractions, as well as recursive coercions and equi-recursive types. This allows us to present in a uniform way many type system features that had previously been studied separately: type containment, bounded and instance-bounded polymorphism, which are already encodable with parametric coercion abstraction, and ML-style but second-order subtyping constraints. Our framework allows for a clear separation of language constructs with and without computational content. We also distinguish coherent coercions that are fully erasable from potentially incoherent coercions that must suspend the evaluation---and enable the encoding of GADTs. Technically, type coercions that witness subtyping relations between types are replaced by a more expressive notion of typing coercions that witness subsumption relations between typings, i.e. pairs composed of a typing environment and a type. Our calculus is equipped with a strong notion of reduction that allows reduction under abstractions---but we also introduce a form of weak reduction, as reduction cannot proceed under incoherent type abstractions. Type soundness is proved by adapting the step-indexed semantics technique to strong reduction strategies, moving indices inside terms so as to control the reduction steps internally.

Anchored LTL Separation

ABSTRACT. Gabbay's separation theorem is a fundamental result for linear temporal logic (LTL). We show that separating a restricted class of LTL formulas, called anchored LTL, is elementary if and only if the translation from LTL to the linear temporal logic with only future temporal connectives is elementary. To prove this result, we define a canonical separation for LTL, and establish a correspondence between a canonical separation of anchored LTL formulas and the omega-automata that recognize these formulas. The canonical separation of anchored LTL formulas has two further applications. First, we constructively prove that the safety closure of any LTL property is an LTL property, thus proving the decomposition theorem for LTL: every LTL formula is equivalent to the conjunction of a safety LTL formula and a liveness LTL formula. Second, we characterize safety, liveness, absolute liveness, stable, and fairness properties in LTL. Our characterization is effective: We reduce the problem of deciding whether an LTL formula defines any of these properties to the validity problem for LTL.

Probably Safe or Live

ABSTRACT. This paper presents a formal characterisation of safety and liveness properties à la Alpern and Schneider for fully probabilistic systems. As for the classical setting, it is established that any (probabilistic tree) property is equivalent to a conjunction of a safety and liveness property. A simple algorithm is provided to obtain such property decomposition for flat probabilistic CTL (PCTL). A safe fragment of PCTL is identified that provides a sound and complete characterisation of safety properties. For liveness properties, we provide two PCTL fragments, a sound and a complete one. We show that safety properties only have finite counterexamples, whereas liveness properties have none. We compare our characterisation for qualitative properties with the one for branching time properties by Manolios and Trefler, and present sound and complete PCTL fragments for characterising the notions of strong safety and absolute liveness by Sistla.

14:30-16:00 Session 59C: Invited Talk (ITP)
Location: FH, Hörsaal 8
Showing invariance compositionally for a process algebra for network protocols

ABSTRACT. This paper presents the mechanization of a process algebra for Mobile Ad hoc Networks and Wireless Mesh Networks, and the development of a compositional framework for proving invariant properties. Mechanizing the core process algebra in Isabelle/HOL is relatively standard, but its layered structure necessitates special treatment. The control states of reactive processes, such as nodes in a network, are modelled by terms of the process algebra. We propose a technique based on these terms to streamline proofs of inductive invariance. This is not sufficient, however, to state and prove invariants that relate states across multiple processes (entire networks). To this end, we propose a novel compositional technique for lifting global invariants stated at the level of individual nodes to networks of nodes.

Invited talk: Retrofitting Rigour
SPEAKER: Peter Sewell

ABSTRACT. We rely on an enormous number of software components, which continue to be developed ---as they have been since the 1940s--- by methods based on a test-and-debug cycle and informal-prose specifications. Replacing this legacy infrastructure and development practices wholesale is not (and may never be) feasible, so if we want to improve system quality by using mathematically rigorous methods, we need a strategy to do so incrementally. A first step is to focus on the more stable extant interfaces: processor architectures, programming languages, and key APIs, file formats, and protocols. By characterising these with new precise abstractions, we can start to retrofit rigour into the system, using those (1) to improve existing components by testing, (2) as interfaces for replacement formally verified components, and (3) simply to gain understanding of what the existing behavioural interfaces are. Many issues arise in building such abstractions and in establishing confidence in them. To validate them against existing systems, they must be made executable, not necessarily in the conventional sense but as test oracles, to check whether observed behaviour of the actual system is admitted by the specification. The appropriate treatment of loose specification is key here, and varies from case to case. To validate them against the intent of the designers, they must be made broadly comprehensible, by careful structuring, annotation, and presentation. And to support formal reasoning by as broad a community as possible, they must be portable into a variety of reasoning tools. Moreover, the scale of real-world specifications adds further engineering concerns. This talk will discuss the issues of loose specification and some lessons learnt from developing and using our Ott and Lem lightweight specification tools, which are aimed at supporting all these pre-proving activities (and which target multiple theorem provers).

14:30-16:00 Session 59D (RTA-TLCA)
Location: Informatikhörsaal
Concurrent Programming Languages and Methods for Semantic Analyses

ABSTRACT. The focus will be a presentation of new results and successes of semantic analyses of concurrent programs. These are accomplished by contextual equivalence, observing may- and should-convergence, and by adapting known techniques from deterministic programs to non-determinism and concurrency. The techniques are context lemmata, diagram techniques, applicative similarities, infinite tree reductions, and translations. The results are equivalences, correctness of program transformations, and correctness of implementations and translations.

A Model of Countable Nondeterminism in Guarded Type Theory
SPEAKER: Aleš Bizjak

ABSTRACT. We show how to construct a logical relation for countable nondeterminism in a guarded type theory, corresponding to the internal logic of the topos $Sh(\omega_1)$ of sheaves over $\omega_1$. In contrast to earlier work on abstract step-indexed models, we not only construct the logical relations in the guarded type theory, but also give an internal proof of the adequacy of the model with respect to standard contextual equivalence. To state and prove adequacy of the logical relation, we introduce a new propositional modality. In connection with this modality we show why it is necessary to work in the logic of $Sh(\omega_1)$.

14:30-15:30 Session 59E: Invited Talk II (SAT)
Location: FH, Hörsaal 6
A Model-Constructing Satisfiability Calculus
SPEAKER: unknown
14:30-16:00 Session 59G: Contributed Talks (LATD)
Location: MB, Festsaal
Axiomatising a fuzzy modal logic over the standard product algebra

ABSTRACT. The study of modal extensions of the main systems of mathematical fuzzy logic is a currently ongoing research topic. Nonetheless, several recent works have already been published on these modal extensions, covering different aspects. Among others, Hansoul and Teheux have investigated modal extensions of Lukasiewicz logic, Caicedo and Rodriguez have focused on the study of modal extensions of Gödel fuzzy logic, while Bou et al. have studied modal logics over finite residuated lattices. However, the study of modal extensions of product fuzzy logic has remained unexplored. In this paper we present some results that partially fill this gap by studying modal extensions of product fuzzy logic with Kripke-style semantics, where accessibility relations are crisp, and where the underlying product fuzzy logic is expanded with truth-constants, the Monteiro-Baaz Delta operator, and two infinitary inference rules. We also explore algebraic semantics for these fuzzy modal logics.

Decidability of order-based modal logics
SPEAKER: Jonas Rogger

ABSTRACT. Decidability of the validity problem is established for a family of many-valued modal logics (including modal logics based on infinite-valued Gödel logics), where propositional connectives are evaluated locally at worlds according to the order of values in a complete chain, and box and diamond modalities are evaluated as infima and suprema of values in (many-valued) Kripke frames. When the chain is infinite and the language is sufficiently expressive, the standard semantics for such logics lacks the finite model property. However, the finite model property does hold for a new equivalent semantics for the same logics, and decidability is thus obtained.

Many-valued modal logic over residuated lattices via duality

ABSTRACT. Recent work in many-valued modal logic has introduced a very general method for defining the (least) many-valued modal logic over a given finite residuated lattice. The logic is defined semantically by means of Kripke models that are many-valued in two different ways: the valuations as well as the accessibility relation among possible worlds are both many-valued. Providing complete axiomatizations for such logics, even if we enrich the propositional language with a truth-constant for every element of the given lattice, is a non-trivial problem, which has been only partially solved to date. In this presentation we report on ongoing research in this direction, focusing on the contribution that the theory of natural dualities can give to this enterprise. We show in particular that duality allows us to extend the above method to prove completeness with respect to local modal consequence, obtaining completeness for global consequence, too. Besides this, our study is also a contribution towards a better general understanding of quasivarieties of (modal) residuated lattices from a topological perspective.

14:30-16:00 Session 59H: Contributed Talks (LATD)
Location: MB, Hörsaal 15
Semantic information and fuzziness

ABSTRACT. We propose a framework in terms of domain theory for semantic information models. We show how an artificial agent (the computer) can operate within such a model in a multiple attitude environment (fuzziness) where information is conveyed. The approximation relation in some of these models can be expressed in terms of deducibility in some relevant logics.

Qualified Syllogisms with Fuzzy Predicates

ABSTRACT. The notion of {\it fuzzy quantifier} as a generalization of the classical ``for all'' and ``there exists'' was introduced by L.A. Zadeh in 1975. This provided a semantics for fuzzy modifiers such as {\it most, many, few, almost all}, etc., and introduced the idea of reasoning with syllogistic arguments along the lines of `$\!${\it Most} men are vain; Socrates is a man; therefore, it is {\it likely} that Socrates is vain', where vanity is given as a fuzzy predicate. This and numerous succeeding publications developed well-defined semantics also for {\it fuzzy probabilities} (e.g., {\it likely, very likely, uncertain, unlikely}, etc.) and fuzzy {\it usuality modifiers} (e.g., {\it usually, often, seldom}, etc.). In addition, Zadeh has argued at numerous conferences over the years that these modifiers offer an appropriate and intuitively correct approach to nonmonotonic reasoning.

The matter of exactly how these various modifiers are interrelated, however, and therefore of a concise semantics for such syllogisms, was not fully explored. Thus while a new reasoning methodology was suggested, it was never developed. The present work has grown out of an effort to realize this goal. A previous paper in Artificial Intelligence defined a formal system {\bf Q} of `qualified syllogisms', together with a detailed discussion of how the system may be used to address some well-known issues in the study of default-style nonmonotonic reasoning. That system falls short of the overall goal, however, in that it deals only with crisp predicates. A research note recently submitted to Artificial Intelligence takes the next step by creating a system that accommodates fuzzy predicates. The present abstract overviews these works.

Connecting Fuzzy Sets and Pavelka's Fuzzy Logic
SPEAKER: Esko Turunen

ABSTRACT. During the last decades, Fuzzy Set Theory has become an important method for dealing with vagueness in engineering, economics and many other applied sciences. Alongside this development, there has been a significant segregation: fuzzy logic in the broad sense includes everything that is related to fuzziness and is mostly oriented to real-world applications, while fuzzy logic in the narrow sense, also called mathematical fuzzy logic, develops mathematical methods to model vagueness and fuzziness by well-defined logical tools. These two approaches do not always meet. This is unfortunate, since theory should always reflect practice, and practice should draw upon the best theories. In this work, we try to bridge the gap between practical applications of Fuzzy Set Theory and mathematical fuzzy logic. We demonstrate how continuous [0,1]-valued fuzzy sets can be naturally interpreted as open formulas in a simple first-order fuzzy logic of Pavelka style, a logic whose details we discuss here. The main idea is to understand truth values as continuous functions: for single elements x the truth values are constant functions given by the membership degree; for open formulas they are the membership functions, where the base set X is scaled to the unit interval [0,1]; and for universally closed formulas the truth values are definite integrals understood as constant functions. We also introduce constructive existential quantifiers. We show that this logic is complete in Pavelka's sense and generalize all classical tautologies that are definable in the language of this logic. However, all proofs and many details are omitted.
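The interpretation sketched in the abstract can be illustrated numerically. The following is a minimal sketch, not the paper's construction: the names (`forall_truth`, `tall`, `short`) and the sample membership functions are ours, the weak (min) conjunction stands in for the full set of connectives, and the definite integral is approximated by the midpoint rule.

```python
def conj(mu, nu):
    """Pointwise min: the weak (lattice) conjunction on membership functions."""
    return lambda x: min(mu(x), nu(x))

def forall_truth(mu, steps=10000):
    """Truth value of the universal closure: the definite integral of the
    membership function over the base set scaled to [0,1] (midpoint rule)."""
    h = 1.0 / steps
    return sum(mu((i + 0.5) * h) for i in range(steps)) * h

# Illustrative membership functions on the scaled base set.
tall = lambda x: x
short = lambda x: 1.0 - x
```

On these examples, the universally closed formula for `tall` gets truth value 1/2, and for the conjunction of `tall` and `short` it gets 1/4, matching the integrals of the corresponding membership functions.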

15:30-16:00 Session 60: Simplification and Solving I / 1 (SAT)
Location: FH, Hörsaal 6
Efficient implementation of SLS solvers and new heuristics for k-SAT with long clauses
SPEAKER: unknown

ABSTRACT. Stochastic Local Search (SLS) is considered to be one of the best solving techniques for randomly generated problems and has more recently also shown great promise for several types of hard combinatorial problems. Within this work, we provide a thorough analysis of different implementation variants of SLS solvers on random and hard combinatorial problems. By analyzing existing SLS implementations, we are able to discover new improvements inspired by CDCL solvers, which can speed up the search of all types of SLS solvers. Further, our analysis reveals that multilevel break values of variables can be easily computed and used within the decision heuristic. By augmenting the probSAT solver with the new heuristic, we are able to reach new state-of-the-art performance on several types of SAT problems, especially those with long clauses. We further provide a detailed analysis of the clause selection policy used in focused-search SLS solvers.
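As a rough illustration of the break value at the core of such heuristics, here is a minimal Python sketch. It recomputes everything naively (the paper is precisely about implementing such quantities efficiently); the multilevel break values and the exact probSAT parameters are not reproduced, and the weighting function below is only the commonly cited polynomial scheme.

```python
def num_true(clause, assign):
    """Number of literals of `clause` true under `assign` (var -> bool)."""
    return sum(1 for l in clause if assign[abs(l)] == (l > 0))

def break_value(var, cnf, assign):
    """Clauses that flipping `var` would falsify: exactly those in which
    `var`'s literal is currently the sole true literal."""
    b = 0
    for clause in cnf:
        for l in clause:
            if abs(l) == var and assign[var] == (l > 0):
                if num_true(clause, assign) == 1:
                    b += 1
    return b

def probsat_weights(clause, cnf, assign, cb=2.3):
    """probSAT-style polynomial weighting: weight ~ (1 + break)^(-cb);
    the variable to flip is then sampled proportionally to these weights."""
    return [(abs(l), (1.0 + break_value(abs(l), cnf, assign)) ** -cb)
            for l in clause]
```

Variables with a lower break value receive exponentially larger weight, biasing the random flip toward moves that falsify few currently satisfied clauses.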

16:00-16:30 Coffee Break
16:30-18:30 Session 61A: Guarded and Separation Logics (CSL-LICS)
Location: FH, Hörsaal 1
Effective Interpolation and Preservation in Guarded Logics

ABSTRACT. Desirable properties of a logic include decidability, and a model theory that inherits properties of first-order logic, such as interpolation and preservation theorems. It is known that the Guarded Fragment (GF) of first-order logic is decidable and satisfies some preservation properties from first-order model theory; however, it fails to have Craig interpolation. The Guarded Negation Fragment (GNF), a recently-defined extension, is known to be decidable and to have Craig interpolation. Here we give the first results on effective interpolation for extensions of GF. We provide an interpolation procedure for GNF whose complexity matches the doubly exponential upper bound for satisfiability of GNF. We show that the same construction gives not only Craig interpolation, but Lyndon interpolation and Relativized interpolation, which can be used to provide effective proofs of some preservation theorems. We provide upper bounds on the size of GNF interpolants for both GNF and GF input, and complement this with matching lower bounds.

Expressive Completeness of Separation Logic With Two Variables and No Separating Conjunction
SPEAKER: Morgan Deters

ABSTRACT. We show that first-order separation logic with one record field restricted to two variables and the separating implication (no separating conjunction) is as expressive as weak second-order logic, substantially sharpening a previous result. Capturing weak second-order logic with such a restricted form of separation logic requires substantial updates to known proof techniques. We develop these, and as a by-product identify the smallest fragment of separation logic known to be undecidable: first-order separation logic with one record field, two variables, and no separating conjunction.

A Decision Procedure for Satisfiability in Separation Logic with Inductive Predicates
SPEAKER: Carsten Fuhs

ABSTRACT. We show that the satisfiability problem for the "symbolic heap" fragment of separation logic with general inductively defined predicates - which includes most fragments employed in program verification - is *decidable*. Our decision procedure is based on the computation of a certain fixed point from the definition of an inductive predicate, called its "base", that exactly characterises its satisfiability.

A complexity analysis of our decision procedure shows that it runs, in the worst case, in exponential time. In fact, we show that the satisfiability problem for our inductive predicates is EXPTIME-complete (by reduction from the succinct circuit value problem), and becomes NP-complete when the maximum arity over all predicates is bounded by a constant.

Finally, we provide an implementation of our decision procedure, and analyse its performance both on a synthetically generated set of test formulas, and on a second test set harvested from the separation logic literature. For the large majority of these test cases, our tool reports times in the low milliseconds.
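The flavor of this fixed-point computation can be conveyed by a much-simplified Boolean version. This is illustrative only: the actual "base" construction tracks which combinations of allocated and equal variables make a predicate satisfiable, not a mere yes/no, and the encoding of definitions below is ours.

```python
def satisfiable_preds(defs):
    """Least fixed point: a predicate is satisfiable if some defining
    rule body calls only satisfiable predicates (an empty body, i.e. a
    purely spatial/pure case, is vacuously satisfiable)."""
    sat = set()
    changed = True
    while changed:
        changed = False
        for p, bodies in defs.items():
            if p not in sat and any(all(q in sat for q in body) for body in bodies):
                sat.add(p)
                changed = True
    return sat

# Illustrative definitions: predicate -> list of rule bodies (predicates called).
defs = {
    "list": [[], ["list"]],          # nil case, or a cell plus a tail list
    "bad":  [["bad"]],               # only a self-call: never satisfiable
    "tree": [[], ["tree", "tree"]],  # leaf, or a node with two subtrees
}
```

Here `list` and `tree` are found satisfiable via their base cases, while `bad`, whose only rule calls itself, never enters the fixed point.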

16:30-18:30 Session 61B: Proof Theory (CSL-LICS)
Location: FH, Hörsaal 5
Substitution, jumps, and algebraic effects
SPEAKER: unknown

ABSTRACT. Algebraic structures abound in programming languages. The starting point for this paper is the following theorem: (first-order) algebraic signatures can themselves be described as free algebras for a (second-order) algebraic theory of substitution. Transporting this to the realm of programming languages, we investigate a computational metalanguage based on the theory of substitution, demonstrating that substituting corresponds to jumping in an abstract machine. We use the theorem to give an interpretation of a programming language with arbitrary algebraic effects into the metalanguage with substitution/jumps.

No proof nets for MLL with units: Proof equivalence in MLL is PSPACE-complete
SPEAKER: unknown

ABSTRACT. MLL proof equivalence is the problem of deciding whether two proofs in multiplicative linear logic are related by a series of inference permutations. It is also known as the word problem for star-autonomous categories. Previous work has shown the problem to be equivalent to a rewiring problem on proof nets, which are not canonical for full MLL due to the presence of the two units. Drawing on recent work on reconfiguration problems, this paper shows that MLL proof equivalence is PSPACE-complete, using a reduction from Nondeterministic Constraint Logic. An important consequence of the result is that the existence of a satisfactory notion of proof nets for MLL with units is ruled out (under current complexity assumptions).

Equality and Fixpoints in the Calculus of Structures
SPEAKER: unknown

ABSTRACT. The standard proof theory for logics with equality and fixpoints suffers from limitations of the sequent calculus, where reasoning is separated from computational tasks such as unification or rewriting. We propose in this paper an extension of the calculus of structures, a deep inference formalism, that supports incremental and contextual reasoning with equality and fixpoints in the setting of linear logic. This system allows deductive and computational steps to mix freely in a continuum which integrates smoothly into the usual versatile rules of multiplicative-additive linear logic in deep inference.

16:30-18:00 Session 61C: Automation (ITP)
Location: FH, Hörsaal 8
A heuristic prover for real inequalities

ABSTRACT. We describe a general method for verifying inequalities between real-valued expressions, especially the kinds of straightforward inferences that arise in interactive theorem proving. In contrast to approaches that aim to be complete with respect to a particular language or class of formulas, our method establishes claims that require heterogeneous forms of reasoning, relying on a Nelson-Oppen-style architecture in which special-purpose modules collaborate and share information. The framework is thus modular and extensible. A prototype implementation shows that the method is promising, complementing techniques that are used by contemporary interactive provers.

Unified Decision Procedures for Regular Expression Equivalence
SPEAKER: Tobias Nipkow

ABSTRACT. We formalize a unified framework for verified decision procedures for regular expression equivalence. Five recently published formalizations of such decision procedures (three based on derivatives, two on marked regular expressions) can be obtained as instances of the framework. We discover that the two approaches based on marked regular expressions, which were previously thought to be the same, are different, and one seems to produce uniformly smaller automata. The common framework makes it possible to compare the performance of the different decision procedures in a meaningful way.
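To give the flavor of the derivative-based approach, here is a small, unverified Python sketch of a Brzozowski-derivative equivalence check. The paper's procedures are formalized and verified in a proof assistant and are considerably more refined; the lightweight similarity-based simplification below keeps the derivative space finite only for well-behaved inputs.

```python
# Regexes as tuples: ("0",) empty language, ("1",) empty word, ("sym", a),
# ("alt", r, s), ("cat", r, s), ("star", r).

def nullable(r):
    t = r[0]
    if t == "1" or t == "star": return True
    if t == "alt": return nullable(r[1]) or nullable(r[2])
    if t == "cat": return nullable(r[1]) and nullable(r[2])
    return False  # "0" and "sym"

def alt(r, s):  # smart constructor with light ACI simplification
    if r == ("0",): return s
    if s == ("0",): return r
    if r == s: return r
    return ("alt", min(r, s), max(r, s))

def cat(r, s):
    if r == ("0",) or s == ("0",): return ("0",)
    if r == ("1",): return s
    if s == ("1",): return r
    return ("cat", r, s)

def deriv(r, a):  # Brzozowski derivative: the words w with a.w in L(r)
    t = r[0]
    if t in ("0", "1"): return ("0",)
    if t == "sym": return ("1",) if r[1] == a else ("0",)
    if t == "alt": return alt(deriv(r[1], a), deriv(r[2], a))
    if t == "star": return cat(deriv(r[1], a), r)
    d = cat(deriv(r[1], a), r[2])  # "cat" case
    return alt(d, deriv(r[2], a)) if nullable(r[1]) else d

def equivalent(r, s, alphabet):
    """Build a bisimulation on pairs of derivatives; a nullability
    mismatch witnesses a distinguishing word."""
    seen, todo = set(), [(r, s)]
    while todo:
        pair = todo.pop()
        if pair in seen:
            continue
        seen.add(pair)
        if nullable(pair[0]) != nullable(pair[1]):
            return False
        todo.extend((deriv(pair[0], a), deriv(pair[1], a)) for a in alphabet)
    return True
```

For example, `a·a*` and `a*·a` are recognized as equivalent after exploring only a handful of derivative pairs.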

Verified Abstract Interpretation Techniques for Disassembling Low-level Self-modifying Code

ABSTRACT. Static analysis of binary code is challenging for several reasons. In particular, standard static analysis techniques operate over control flow graphs, which are not available when dealing with self-modifying programs which can modify their own code at runtime. We formalize in the Coq proof assistant some key abstract interpretation techniques that automatically extract memory safety properties from binary code. Our analyzer is formally proved correct and has been run on several self-modifying challenges, provided by Cai et al. in their PLDI 2007 paper.

16:30-18:00 Session 61D (RTA-TLCA)
Location: Informatikhörsaal
Abstract datatypes for real numbers in type theory
SPEAKER: Alex Simpson

ABSTRACT. We add an abstract datatype for a closed interval of real numbers to type theory, providing a representation-independent approach to programming with real numbers. The abstract datatype requires only function types and a natural numbers type for its formulation, so can be added to any type theory that extends Goedel's System T. Our main result establishes that programming with the abstract datatype is equivalent in power to programming intensionally with representations of real numbers. We also consider representing arbitrary real numbers using a mantissa-exponent representation in which the mantissa is taken from the abstract interval.

Local stores in string diagrams

ABSTRACT. We establish that the local store monad defined by Plotkin and Power is a monad with arities, for an appropriate notion of arities in the category of presheaves over the category of injections. From this it follows that the monad is associated to a graded Lawvere theory. We explain how to present this graded theory by generators and relations, which we then depict in the graphical language of string diagrams.

Preciseness of subtyping on intersection and union types

ABSTRACT. The notion of subtyping has gained an important role both in theoretical and applicative domains: in lambda and concurrent calculi as well as in programming languages. The soundness and the completeness, together referred to as the preciseness of subtyping, can be considered from two different points of view: denotational and operational. The former preciseness is based on the denotation of a type which is a mathematical object that describes the meaning of the type in accordance with the denotations of other expressions from the language. The latter preciseness has been recently developed with respect to type safety, i.e. the safe replacement of a term of a smaller type when a term of a bigger type is expected.

We propose a technique for formalising and proving operational preciseness of the subtyping relation in the setting of a concurrent lambda calculus with intersection and union types. The key feature is the link between typings and the operational semantics. We then prove soundness and completeness, showing that the subtyping relation of this calculus enjoys both denotational and operational preciseness.

16:30-18:00 Session 61E: Simplification and Solving I / 2 (SAT)
Location: FH, Hörsaal 6
Everything You Always Wanted to Know About Blocked Sets (But Were Afraid to Ask)
SPEAKER: Tomáš Balyo

ABSTRACT. Blocked clause elimination is a powerful technique in SAT solving. In recent work, it has been shown that it is possible to decompose any propositional formula into two subsets (blocked sets) such that both can be solved by blocked clause elimination. We extend this work in several ways. First, we prove new theoretical properties of blocked sets. We then present additional and improved ways to efficiently solve blocked sets. Further, we propose novel decomposition algorithms for faster decomposition or which produce blocked sets with desirable attributes. We use decompositions to reencode CNF formulas and to obtain circuits, such as AIGs, which can then be simplified by algorithms from circuit synthesis and encoded back to CNF. Our experiments demonstrate that these techniques can increase the performance of the SAT solver Lingeling on hard to solve application benchmarks.
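For readers unfamiliar with the underlying notion: a clause is blocked if, on one of its literals, every resolvent with the rest of the formula is a tautology. A minimal, unoptimized check might look as follows (an illustration of the standard definition only, not of the paper's decomposition algorithms; clauses are represented as frozensets of integer literals).

```python
def is_tautology(clause):
    """A clause containing both l and -l is trivially satisfied."""
    return any(-l in clause for l in clause)

def blocked_on(c, lit, cnf):
    """Clause c is blocked on lit if every resolvent of c, on lit, with a
    clause containing -lit is a tautology."""
    assert lit in c
    for d in cnf:
        if d == c or -lit not in d:
            continue
        resolvent = (c - {lit}) | (d - {-lit})
        if not is_tautology(resolvent):
            return False
    return True

def is_blocked(c, cnf):
    return any(blocked_on(c, l, cnf) for l in c)
```

For instance, in the formula {x ∨ y, ¬x ∨ ¬y} each clause is blocked, since the only resolvent is the tautology y ∨ ¬y.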

Simplifying Pseudo-Boolean Constraints in Residual Number Systems
SPEAKER: unknown

ABSTRACT. We present an encoding of pseudo-Boolean constraints based on decomposition with respect to a residual number system. We illustrate that careful selection of the base for the residual number system, together with bit-blasting of the modular arithmetic, results in a powerful approach when solving hard pseudo-Boolean constraints. We demonstrate, using a range of pseudo-Boolean constraint solvers, that the obtained constraints are often substantially easier to solve.
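The soundness of such a residue decomposition rests on the Chinese Remainder Theorem: when the moduli are pairwise coprime and their product exceeds the largest attainable sum, checking the constraint modulo each base element is equivalent to checking it over the integers. A tiny brute-force check of this fact (function names and the concrete base are ours, not the paper's):

```python
def pb_holds(coeffs, k, assign):
    """Does the pseudo-Boolean equation sum(a_i * x_i) = k hold?"""
    return sum(a * x for a, x in zip(coeffs, assign)) == k

def residues_hold(coeffs, k, assign, base):
    """Check the same equation modulo each element of the RNS base."""
    return all(
        sum((a % m) * x for a, x in zip(coeffs, assign)) % m == k % m
        for m in base
    )
```

With coefficients (3, 4, 5), target 7, and base (3, 5, 7), the product 105 exceeds the maximum sum 12, so the two checks agree on every 0/1 assignment.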

Detecting cardinality constraints in CNF

ABSTRACT. We present novel approaches to detect cardinality constraints expressed in CNF. The first approach is based on a syntactic analysis of specific data structures used to represent binary and ternary clauses, whereas the second approach is based on a semantic analysis by unit propagation.

The syntactic approach computes an approximation of the AtMost-$1$ and AtMost-$2$ cardinality constraints very quickly, whereas the semantic approach has the property of being generic, i.e. it can detect cardinality constraints AtMost-$k$ for any $k$, at a higher computational cost.

Our experimental results suggest that the syntactic approach is efficient at recovering AtMost-$1$ and that the semantic one is efficient at recovering AtMost-$2$ cardinality constraints.
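To make the syntactic idea concrete, here is a toy sketch that recognizes the pairwise AtMost-1 encoding. This is illustrative only: the paper's detector works on the solver's specialized binary-clause data structures, and the greedy clique growth below is a crude approximation of finding cliques in the "no two true" graph.

```python
def pairwise_atmost1(variables):
    """Pairwise encoding of 'at most one of variables is true'."""
    vs = sorted(variables)
    return {frozenset({-x, -y}) for i, x in enumerate(vs) for y in vs[i + 1:]}

def detect_atmost1(cnf):
    """Greedily grow variable sets whose pairwise 'no two true' binary
    clauses all occur in the CNF; each set yields an AtMost-1 constraint."""
    adj = {}
    for c in cnf:
        if len(c) == 2 and all(l < 0 for l in c):
            a, b = sorted(-l for l in c)
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    found = set()
    for v in sorted(adj):
        group = {v}
        for w in sorted(adj[v]):
            if all(w in adj[u] for u in group):
                group.add(w)
        if len(group) >= 3:
            found.add(frozenset(group))
    return found
```

Running the detector on the pairwise encoding of AtMost-1 over {x1, x2, x3} recovers exactly that variable set, while unrelated long clauses are ignored.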

16:30-18:00 Session 61F: Contributed Talks (LATD)
Location: MB, Festsaal
Embedding partially ordered sets into distributive lattices

ABSTRACT. We characterize the class of partially ordered sets that are embeddable into distributive lattices.

Bitopological Duality and Three-valued Logic

ABSTRACT. Duality theory provides well-understood and parametrizable machinery for relating logics (more generally algebraic structures) of various kinds and their topological semantics. Stone and Priestley duality are of course the best known examples, but many others, including some versions of multi-valued logic have been profitably sent through this machinery. The results, however, are inherently two-valued in that the resulting topological structures are always Stone spaces. Any ``multi-valuedness'' is carried by the structure of a discrete dualizing (truth value) object. So the topological structure of the semantics is still essentially Boolean.

In this work, we develop techniques to deal directly in three-valued semantics, whereby a proposition may be affirmed, denied or neither. The semantics is bitopological. One topology provides the possibilities for affirming propositions and another topology provides the possibilities for denying them. The two topologies need not be identical (when negation is not present in the language). Even more importantly, it need not be the case that any two distinct models can be separated by a single classical proposition. So the underlying topological semantics is not based on Stone spaces.

De Morgan logics with a notion of inconsistency

ABSTRACT. In this contribution, we investigate logics which expand the four-valued Belnap-Dunn logic by a notion of inconsistency. In order of increasing expressive power, this can take the form of an expansion by an inconsistency predicate, an inconsistency constant, or a reductio ad contradictionem negation. After briefly motivating the study of inconsistency and reviewing some related work, we introduce these logics as the quasiequational logics of a certain class of algebras and then discuss their relational semantics. This leads us to a logic which is a natural conservative extension of both classical and intuitionistic logic.

16:30-18:00 Session 61G: Contributed Talks (LATD)
Location: MB, Hörsaal 15
Chaotic Fuzzy Liars, Degrees of Truth, and Fractal Images of Paradox

ABSTRACT. During the meta-mathematical period of logic flourishing in the 1930s, the paradox of the Liar gave way to proofs of classical limitative theorems—e.g., Gödel’s [1930] Incompleteness Theorems, Church’s [1936] proof of the Unsolvability of the Entscheidungsproblem, and Tarski’s [1936] proof of the Undefinability of Truth. Ways of overcoming these limitations were initially explored by Kleene [1938] using partial recursive functions. The semantic equivalent of Kleene’s approach uses truth-value gaps to overcome Tarski’s Undefinability Theorem. Formal languages with truth-representing truth predicates were constructed by van Fraassen [1968], Woodruff and Martin [1975], and Kripke [1975]. By weakening the assumption of bivalence, these formal solutions exploited meta-language reasoning to prove that paradoxical sentences were quarantined by forced assignment to truth-value gaps. Skepticism as to whether these truth-value gap theories actually “solve” the paradoxes is supported by strengthened versions of the Liar that formalize the semantic concepts used to block the paradoxes and show that fundamental semantic principles cannot be expressed without reintroducing paradox [Mar 1985]. An alternative approach to solving the paradoxes with truth-value gaps is to seek richer semantic patterns of paradox using many-valued and infinite-valued fuzzy logics with degrees of truth (Mar and Grim [1991], Grim, Mar and St. Denis [1992], and Mar [2001]). This approach can be seen as diverging from Tarski’s classic analysis of the Liar by

(1) generalizing the bivalent logical connectives to an infinite-valued logic with degrees of truth:

/~p/ = 1 − /p/
/(p ∧ q)/ = MIN {/p/, /q/}
/(p ∨ q)/ = MAX {/p/, /q/}
/(p → q)/ = MIN {1, 1 − /p/ + /q/}
/(p ↔ q)/ = 1 − ABS {/p/ − /q/}

(2) replacing Tarski’s bivalent (T) schema with Rescher’s Vvp schema for many-valued logics:

/Tp/ = 1 − ABS {t − /p/}
/Vvp/ = 1 − ABS {v − /p/}

(3) modeling self-reference as semantic feedback, thus allowing us to embed the semantics in the mathematics and geometry of dynamical systems theory. This is done by replacing the constant truth-value v in the Vvp schema with an expression S(x_n) representing the value the sentence attributes to itself in terms of a previously estimated value x_n.

• Continuous-Valued Liars (“I am as true as the truth-value v”) with S(x_n) = v, yielding the Classical Liar for v = 0, Rescher’s fixed-point “solution” to the Liar for v = ½, and Kripke’s Truth-Teller for v = 1.

• The Cautious Truth-Teller (“I am half as true as I am estimated to be true”) with S(x_n) = x_n/2.

• The Contradictory Liar (“I am as true as the conjunction of my estimated value and the estimated value of my negation”) with S(x_n) = MIN {x_n, 1 − x_n}.
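As a concrete illustration, the connectives in (1), the Vvp schema in (2), and the feedback dynamics in (3) can be sketched in a few lines of Python. This is my own sketch, not code from the talk; the function names are hypothetical.

```python
def neg(p): return 1 - p                     # /~p/ = 1 - /p/
def conj(p, q): return min(p, q)             # /(p AND q)/ = MIN{/p/, /q/}
def disj(p, q): return max(p, q)             # /(p OR q)/ = MAX{/p/, /q/}
def implies(p, q): return min(1, 1 - p + q)  # /(p -> q)/ = MIN{1, 1 - /p/ + /q/}
def iff(p, q): return 1 - abs(p - q)         # /(p <-> q)/ = 1 - ABS{/p/ - /q/}

def V(v, p):
    # Rescher's Vvp schema: /Vvp/ = 1 - ABS{v - /p/}
    return 1 - abs(v - p)

def step(S, x):
    # semantic feedback: the constant v is replaced by S(x_n),
    # giving the revision rule x_{n+1} = 1 - ABS{S(x_n) - x_n}
    return V(S(x), x)

def iterate(S, x0, n):
    # the sequence of successively revised estimated values
    xs = [x0]
    for _ in range(n):
        xs.append(step(S, xs[-1]))
    return xs

cautious = lambda x: x / 2               # the Cautious Truth-Teller
contradictory = lambda x: min(x, 1 - x)  # the Contradictory Liar

# the Cautious Truth-Teller is drawn toward the fixed point 2/3 ...
attractor = iterate(cautious, 0.1, 50)
# ... while the Contradictory Liar is repelled from 2/3 onto the
# 0/1 oscillation characteristic of the Classical Liar
repellor = iterate(contradictory, 0.66, 60)
```

Iterating `cautious` converges to 2/3 from any starting value, while `contradictory` drives the same starting values out to the alternation between 0 and 1, matching the attractor/repellor contrast described below.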

The semantic differences among these sentences can be made visually perspicuous using web diagrams. The web diagrams for the Continuous-Valued Liars appear as nested series of simple squares, ranging from the Classical Liar to the Truth-Teller, with a single fixed point at ½. The web diagram for the Cautious Truth-Teller is a fixed-point attractor: from any initial value other than precisely the fixed point 2/3, the successively revised estimated values are inevitably drawn toward that fixed point. The web diagram for the Contradictory Liar, in contrast, is a fixed-point repellor: from any value other than the fixed point 2/3, the successively revised values are driven away from 2/3 until they settle into the oscillation between 1 and 0 characteristic of the Classical Liar. In short, the Cautious Truth-Teller and the Contradictory Liar, while identical to the Classical Liar on the values 0 and 1, exhibit diametrically opposed semantic behavior on the interval (0,1). This example provides a justification for the infinite-valued, degrees-of-truth approach: bivalence masks intriguing semantic differences.

Instead of taming patterns of semantic paradox by excluding semantic cycles (Barwise and Etchemendy [1987]) or seeking semantic stability (Gupta [1982] and Herzberger [1982]), this approach seeks semantic complexity and chaotic instability. The simplest generalizations of the classical bivalent Liar in the context of fuzzy logic with degrees of truth generate semantic chaos. The Chaotic Liar (“I am as true as I am estimated to be false”) is geometrically represented by the chaotic tent function. Using the squaring function for the modifier ‘very’ (Zadeh [1975]), we obtain the Logistic Liar (“I am as true as I am estimated to be very false”), represented by another paradigmatically chaotic function. These semantic generalizations of the paradox of the Liar are chaotic in a precise mathematical sense.
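As a quick numerical check (my own sketch, not code from the talk), the Chaotic Liar’s revision rule 1 − |(1 − x) − x| coincides with the tent function on [0, 1]:

```python
def chaotic_liar(x):
    # "I am as true as I am estimated to be false": S(x) = 1 - x,
    # so the revision rule is x_{n+1} = 1 - ABS{(1 - x_n) - x_n}
    return 1 - abs((1 - x) - x)

def tent(x):
    # the paradigmatically chaotic tent map on [0, 1]
    return 2 * x if x <= 0.5 else 2 * (1 - x)

# the two functions agree (up to floating-point rounding) on a grid over [0, 1]
assert all(abs(chaotic_liar(i / 100) - tent(i / 100)) < 1e-12
           for i in range(101))
```

Algebraically, 1 − |1 − 2x| equals 2x on [0, ½] and 2(1 − x) on (½, 1], which is exactly the tent map; the Logistic Liar arises analogously by taking S(x) = (1 − x)², the abstract’s reading of ‘very false’.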
Consider a pair of paradoxical statements known as the Dualist Liar:

Aristotle: “What Epimenides says is true.”
Epimenides: “What Aristotle says is false.”

We can model the Dualist Liar as a pair of dynamical systems:

x_{n+1} = 1 − ABS {y_n − x_n}
y_{n+1} = 1 − ABS {(1 − x_n) − y_n}

Counting the number of iterations required for the ordered pairs (x_n, y_n) to escape the unit circle centered at (0,0), we obtain an escape-time diagram. Self-symmetry on descending scales, characteristic of Zeno’s paradoxes (Mar and St. Denis [1999], Stewart [1993]), yields fractal images of semantic chaos. Following the lead of the limitative theorems of Gödel [1931] and Tarski [1936], what is initially a paradox of semantic chaos can be turned into a proof: using a Strengthened Chaotic Liar, we can apply well-known methods to prove that the set of functions chaotic on the interval [0,1] is not definable (Mar and Grim [1991], Grim, Mar and St. Denis [1992], Mar [2001]). Paradox is not illogicality, but it has been a trap for logicians: the semantic paradoxes look just a little simpler and more predictable than they actually are. Our goal here is to offer glimpses into the infinitely complex, chaotic, and fractal patterns of semantic instability that have gone virtually unexplored.
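The escape-time construction can be sketched as follows. This is a minimal version of my own, assuming the Dualist Liar updates are x' = 1 − |y − x| and y' = 1 − |(1 − x) − y| and taking exit from the unit circle centered at (0,0) as the escape condition; the diagrams in the talk use a far finer grid.

```python
def dualist_step(x, y):
    # Aristotle: "What Epimenides says is true"  -> x' = 1 - |y - x|
    # Epimenides: "What Aristotle says is false" -> y' = 1 - |(1 - x) - y|
    return 1 - abs(y - x), 1 - abs((1 - x) - y)

def escape_time(x0, y0, max_iter=100):
    """Number of iterations before (x, y) leaves the unit circle at (0, 0)."""
    x, y = x0, y0
    for n in range(max_iter):
        if x * x + y * y > 1:
            return n
        x, y = dualist_step(x, y)
    return max_iter  # never escaped within the iteration budget

# coarse escape-time grid over the unit square of initial estimated values;
# coloring each cell by its escape time yields the fractal diagram
grid = [[escape_time(i / 40, j / 40) for i in range(41)] for j in range(41)]
```

Points already outside the circle escape immediately, while orbits such as the one starting at (0, 0) cycle forever inside it; the fractal structure lies in the boundary between these regimes.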


Anderson, C. Anthony and Mikhail Zeleny (eds.) [2001]. Logic, Meaning and Computation: Essays in Memory of Alonzo Church (Netherlands: Kluwer Academic Publishers).
Barwise, Jon and John Etchemendy [1987]. The Liar (New York: Oxford University Press).
Church, Alonzo [1936]. A note on the Entscheidungsproblem, J. Symb. Log. 1, 40-41; correction, ibid., 101-102.
Devaney, Robert [1989]. An Introduction to Chaotic Dynamical Systems, 2nd edition (Menlo Park, California: Addison-Wesley).
van Fraassen, Bas [1968]. Presupposition, implication, and self-reference, J. of Phil. 65, 136-152.
Gödel, Kurt [1931]. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik 38, 173-198.
Grim, Patrick, Gary Mar and Paul St. Denis [1998]. The Philosophical Computer: Exploratory Essays in Philosophical Computer Modeling (Cambridge, Mass.: MIT Press).
Herzberger, Hans [1982]. Notes on naive semantics, J. of Phil. Logic 11, 61-102.
Kleene, Stephen [1938]. On notations for ordinal numbers, J. of Sym. Logic 3, 150-155.
Kripke, Saul [1975]. Outline of a theory of truth, J. of Phil. 72, 690-716.
Mar, Gary [1985]. Liars, Truth-Gaps, and Truth, Dissertation (UCLA), director Alonzo Church (Ann Arbor, Michigan: University Microfilms International).
Mar, Gary [1992]. Chaos, fractals, and the semantics of paradox, Newsletter on Computer Use in Philosophy, American Philosophical Association Newsletters 91 (Fall), 30-34.
Mar, Gary [2001]. Church’s Theorem and Randomness, in Anderson and Zeleny [2001], 479-490.
Mar, Gary and Patrick Grim [1991]. Pattern and chaos: new images in the semantics of paradox, Noûs 25, 659-693.
Mar, Gary and Paul St. Denis [1999]. What the liar taught Achilles, J. of Phil. Logic 28, 29-46.
Martin, Robert [1970]. A category solution to the Liar, in Robert L. Martin (ed.), The Liar Paradox (New Haven, Conn.: Yale University Press).
Martin, Robert and Peter Woodruff [1975]. On representing ‘true-in-L’ in L, Philosophia 5, 217-221.
Rescher, Nicholas [1969]. Many-Valued Logic (New York: McGraw-Hill).
Stewart, Ian [1993]. A partly true story, Scientific American 268 (Feb.), 110-112 (discusses Mar and Grim [1991]).
Tarski, Alfred [1936]. Der Wahrheitsbegriff in den formalisierten Sprachen, Studia Philosophica 1, 261-405; translated by J. H. Woodger as The concept of truth in formalized languages, in J. Corcoran (ed.) [1983], Logic, Semantics, Metamathematics (Indianapolis, Indiana: Hackett).
Zadeh, Lotfi [1975]. Fuzzy logic and approximate reasoning, Synthese 30, 407-428.

Truth degrees in the interval [-1,1] for the librationist system £

ABSTRACT. We describe some central facets of the foundational system £ and give an account supporting an interpretation that assigns truth degrees to its sentences in the rational interval [-1,1].

A semantic approach to conservativity

ABSTRACT. Every set T of first-order sentences, viewed as a set of axioms, gives rise to a classical theory T_c when T is closed under the consequences of classical logic, or to an intuitionistic one, T_i, when T is closed under the consequences of intuitionistic logic. While it is clear that every intuitionistically provable sentence is also classically provable, the question of when the converse holds leads to the so-called conservativity problem. More precisely, given a class G of formulas, we say that the theory T_c is G-conservative over its intuitionistic counterpart T_i iff for all formulas A in the class G, A is provable in T_i whenever A is provable in T_c.

A typical example is that of classical Peano Arithmetic (PA) and its intuitionistic counterpart, Heyting Arithmetic (HA). The well-known result concerning these two theories states that PA is Pi_2-conservative over HA.

In our talk we consider possible generalizations of this result. However, instead of using syntactic methods, we exploit semantic methods and present some new conservativity results proven by means of Kripke models for first-order theories.