
08:45-09:15 Session 37: VSL Opening (VSL)
Location: EI, EI 7 + EI 9, EI 10 + FH, Hörsaal 1
Welcome Address by the Rector
Welcome Address by the Organizers
SPEAKER: Matthias Baaz
VSL Opening
SPEAKER: Dana Scott
09:15-10:15 Session 38: VSL Keynote Talk (VSL)
Location: EI, EI 7 + EI 9, EI 10 + FH, Hörsaal 1
VSL Keynote Talk: Computational Ideas and the Theory of Evolution

ABSTRACT. Covertly computational insights have influenced the Theory of Evolution from the very start. I shall discuss recent work on some central problems in Evolution that was inspired and informed by computational ideas. Considerations about the performance of genetic algorithms led to a novel theory of the role of sex in Evolution based on the concept of mixability, the population genetic equations describing the evolution of a species can be reinterpreted as a repeated game between genes played through the multiplicative update algorithm, while a theorem on satisfiability can shed light on the emergence of complex adaptations.

10:15-10:45 Coffee Break
10:45-13:00 Session 38A: Tutorial by Assia Mahboubi (CSL-LICS)
Location: FH, Hörsaal 1
Computer-checked mathematics: a formal proof of the Odd Order theorem

ABSTRACT. The Odd Order Theorem is a landmark result in finite group theory, famous for its crucial role in the classification of finite simple groups, for the novel methods introduced by its original proof, but also for the striking contrast between the simplicity of its statement and the unusual length and complexity of its proof. After a six-year collaborative effort, we managed to formalize and machine-check a complete proof of this theorem using the Coq proof assistant. The resulting collection of libraries of formalized mathematics covers a wide variety of topics, mostly in algebra, as this proof relies on a sophisticated combination of different flavours of group theory. In this tutorial we comment on the role played by the different features of the proof assistant, from the meta-theory of its underlying logic to the implementation of its various components. We will also discuss some issues raised by the translation of mathematical textbooks into formal libraries and the perspectives it opens on the use of a computer to do mathematics.

10:45-13:00 Session 38B: Computability (CSL-LICS)
Location: FH, Hörsaal 5
A quest for algorithmically random infinite structures

ABSTRACT. The last two decades have witnessed significant advances in the investigation of algorithmic randomness, such as Martin-Löf randomness, of infinite strings. In spite of much work, research on randomness of infinite strings has not extended to algorithmic randomness for infinite algebraic structures. The main obstacle to introducing algorithmic randomness for infinite structures is that many classes of infinite structures lack a measure. In this paper, we overcome this obstacle by proposing a limited number of finiteness conditions on various classes of infinite structures. These conditions enable us to introduce measure and, as a consequence, reason about algorithmic randomness. Our classes include finitely generated universal algebras, connected graphs and trees of bounded degree, and monoids. For all these classes one can introduce algorithmic randomness concepts and prove the existence of random structures.

On the Total Variation Distance of Labelled Markov Chains
SPEAKER: unknown

ABSTRACT. Labelled Markov chains (LMCs) are widely used in probabilistic verification, speech recognition, computational biology, and many other fields. Checking two LMCs for equivalence is a classical problem that has been studied extensively. The total variation distance provides a natural measure of the ``inequivalence'' of two LMCs: it is the maximum difference between the probabilities that the LMCs assign to the same event.

In this paper we develop a theory of the total variation distance between two LMCs, with emphasis on the algorithmic aspects: (1) we provide a polynomial-time algorithm for determining whether two LMCs have distance 1, i.e., whether they can almost always be distinguished; (2) we provide an algorithm for approximating the distance with arbitrary precision; and (3) we show that the threshold problem, i.e., whether the distance exceeds a given threshold, is NP-hard and hard for the square-root-sum problem. We also make a connection between the total variation distance and Bernoulli convolutions.
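For finite distributions on a common outcome space, the maximum over events in this definition is attained and equals half the L1 distance between the probability vectors. A self-contained Python sketch of that finite case (this is not the paper's algorithm, which must handle the infinite event space of LMC traces):

```python
# Total variation distance between two finite distributions: the
# maximum over events E of |p(E) - q(E)|, which for finite supports
# equals half the L1 distance between the probability vectors.

def tv_distance(p, q):
    """p, q: dicts mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

p = {"a": 0.5, "b": 0.5}
q = {"a": 0.8, "b": 0.1, "c": 0.1}
d = tv_distance(p, q)  # ≈ 0.4, attained e.g. by the event E = {"b"}
```

The distance is 1 exactly when some event has probability 1 under one chain and 0 under the other, which is the "almost always distinguishable" case decided in polynomial time by result (1) above.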

A domain-theoretic approach to Brownian motion and general continuous stochastic processes
SPEAKER: Paul Bilokon

ABSTRACT. We introduce a domain-theoretic framework for continuous-time, continuous-state stochastic processes. The laws of stochastic processes are embedded into the space of maximal elements of the normalised probabilistic power domain on the space of continuous interval-valued functions endowed with the relative Scott topology. We use the resulting $\omega$-continuous bounded complete dcpo to define partial stochastic processes and characterise their computability. For a given continuous stochastic process, we show how its domain-theoretic, i.e., finitary, approximations can be constructed, whose least upper bound is the law of the stochastic process. As a main result, we apply our methodology to Brownian motion. We construct a partial Wiener measure and show that the Wiener measure is computable within the domain-theoretic framework.

On the Computing Power of +, −, and ×

ABSTRACT. Modify the Blum-Shub-Smale model of computation replacing the permitted computational primitives (the real field operations) with any finite set B of real functions semialgebraic over the rationals. Consider the class of boolean decision problems that can be solved in polynomial time in the new model by machines with no machine constants. How does this class depend on B? We prove that it is always contained in the class obtained for B = {+, −, ×}. Moreover, if B is a set of continuous semialgebraic functions containing + and −, and such that arbitrarily small numbers can be computed using B, then we have the following dichotomy: either our class is P or it coincides with the class obtained for B = {+, −, ×}.

10:45-13:00 Session 38C: Verified Theorem Provers (ITP)
Location: FH, Hörsaal 8
Towards a Formally Verified Proof Assistant
SPEAKER: Vincent Rahli

ABSTRACT. This paper presents a formalization of Nuprl's metatheory in Coq. It includes a nominal-style definition of the Nuprl language, its reduction rules, a coinductive computational equivalence, and a Curry-style type system where a type is defined as a Partial Equivalence Relation (PER) à la Allen. This type system includes Martin-Löf dependent types, a hierarchy of universes, inductive types and partial types. We then prove that the typehood rules of Nuprl are valid w.r.t. this PER semantics and hence reduce Nuprl's consistency to Coq's consistency.

The reflective Milawa theorem prover is sound (down to the machine code that runs it)
SPEAKER: unknown

ABSTRACT. Milawa is a theorem prover styled after ACL2 but with a small kernel and a powerful reflection mechanism. We have used the HOL4 theorem prover to formalize the logic of Milawa, prove the logic sound, and prove that the source code for the Milawa kernel (2,000 lines of Lisp) is faithful to the logic. Going further, we have combined these results with our previous verification of an x86 machine-code implementation of a Lisp runtime. Our top-level HOL4 theorem states that when Milawa is run on top of our verified Lisp, it will only print theorem statements that are semantically true. We believe that this top-level theorem is the most comprehensive formal evidence of a theorem prover's soundness to date.

HOL with Definitions: Semantics, Soundness, and a Verified Implementation
SPEAKER: Ramana Kumar

ABSTRACT. We present a mechanised semantics and soundness proof for the HOL Light kernel including its definitional principles, extending Harrison's verification of the kernel without definitions. Soundness of the logic extends to soundness of a theorem prover, because we also show that a synthesised implementation of the kernel in CakeML refines the inference system. Our semantics is for Wiedijk's stateless HOL, and is the first for that system; our implementation, however, is stateful, and we give semantics to the stateful inference system by translation to the stateless one. Our model of HOL is parametric on the universe of sets, and thus somewhat cleaner and more general than Harrison's. Additionally, we prove soundness for Arthan's extension of the principle of constant definition, in the hope of encouraging its adoption. This paper represents the logical kernel aspect of our work on verified HOL implementations; the production of a verified machine-code implementation of the whole system with the kernel as a module will appear separately.

Proof Pearl: Proving a Simple Von Neumann Machine Turing Complete

ABSTRACT. In this paper we sketch an ACL2-checked proof that a simple but unbounded Von Neumann machine model is Turing Complete, i.e., can do anything a Turing machine can do. The project formally revisits the roots of computer science. It requires re-familiarizing oneself with the definitive model of computation from the 1930s, dealing with a simple ``modern'' machine model, thinking carefully about the formal statement of an important theorem and the specification of both total and partial programs, writing a verifying compiler, including implementing an X86-like call/return protocol and implementing computed jumps, codifying a code proof strategy, and a little ``creative'' reasoning about the non-termination of two machines.

Rough Diamond: An Extension of Equivalence-based Rewriting
SPEAKER: Matt Kaufmann

ABSTRACT. Previous work by the authors generalized conditional rewriting from the use of equalities to the use of arbitrary equivalence relations. This (classic) {\em equivalence-based rewriting} automates the replacement of one subterm by another that may not be strictly equal to it, but is equivalent to it, where this equivalence is determined {\em automatically} to be sufficient at that subterm occurrence. We extend that capability by introducing {\em patterned} congruence rules in the ACL2 theorem prover, to provide more control over the occurrences where such a replacement may be made. This extension enables additional automation of the rewriting process, which is important in industrial-scale applications. However, because this feature is so new (introduced in January 2014), we do not yet have industrial applications to verify its utility, so we present a small example that illustrates how it supports scaling to large proof efforts.

10:45-13:00 Session 38D (RTA-TLCA)
Location: Informatikhörsaal
Implicational Relevance Logic is 2-ExpTime-Complete

ABSTRACT. We show that provability in the implicational fragment of relevance logic is complete for doubly exponential time, using reductions to and from coverability in branching vector addition systems.

An Implicit Characterization of the Polynomial-Time Decidable Sets by Cons-Free Rewriting
SPEAKER: unknown

ABSTRACT. A constructor rewriting system $R$ is said to characterize a language $D \subseteq \{0,1\}^{<\infty}$ if there are designated function symbols $f_0$ and $\mathrm{true}$ such that for every constructor term $s$ that represents a string $\mathtt{s} \in \{0,1\}^{<\infty}$, we have $f_0(s) \rightarrow^* \mathrm{true}$ iff $\mathtt{s} \in D$. A class $\mathfrak{R}$ of rewriting systems is said to characterize a class $\mathfrak{D}$ of languages if every language in $\mathfrak{D}$ is characterized by a system in $\mathfrak{R}$ and every element of $\mathfrak{R}$ characterizes a language in $\mathfrak{D}$. We define the class of \emph{constrained cons-free} rewriting systems and show that this class characterizes $P$, the set of languages decidable in polynomial time on a deterministic Turing machine. The main novelty of the characterization is that it allows very liberal properties of term rewriting, in particular non-deterministic evaluation: no reduction strategy is enforced, and systems are allowed to be non-confluent.

Unification and Logarithmic Space
SPEAKER: Marc Bagnol

ABSTRACT. We present an algebraic characterization of the complexity classes Logspace and NLogspace, using an algebra with a composition law based on unification. This new bridge between unification and complexity classes is inspired from proof theory and more specifically linear logic and Geometry of Interaction.

We show how unification can be used to build a model of computation by means of specific subalgebras associated to finite permutations groups.

We then prove that whether an observation (the algebraic counterpart of a program) accepts a word can be decided within logarithmic space. We also show that the construction can naturally be related to pointer machines, an intuitive way of understanding logarithmic-space computation.
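As background for the composition law mentioned above, here is a minimal sketch of syntactic first-order unification (Robinson-style, with the occurs check omitted for brevity); the term representation below is an illustrative choice of ours, not the paper's algebra:

```python
# Variables are strings starting with "?"; compound terms are tuples
# (function_symbol, arg1, ..., argn); anything else is a constant.

def walk(t, s):
    """Follow variable bindings of t in substitution s."""
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(t1, t2):
    """Return a most general unifier as a dict, or None on clash."""
    s, stack = {}, [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, s), walk(b, s)
        if a == b:
            continue
        if isinstance(a, str) and a.startswith("?"):
            s[a] = b
        elif isinstance(b, str) and b.startswith("?"):
            s[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and len(a) == len(b) and a[0] == b[0]:
            stack.extend(zip(a[1:], b[1:]))  # unify arguments pairwise
        else:
            return None  # distinct function symbols: no unifier
    return s

def resolve(t, s):
    """Apply substitution s exhaustively to term t."""
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(resolve(arg, s) for arg in t[1:])
    return t

# f(?x, g(?x)) and f(a, ?y) unify; resolving gives ?x = a, ?y = g(a)
mgu = unify(("f", "?x", ("g", "?x")), ("f", "a", "?y"))
```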

Automated Complexity Analysis Based on Context-Sensitive Rewriting
SPEAKER: unknown

ABSTRACT. In this paper we present a simple technique for analysing the runtime complexity of rewrite systems. In complexity analysis many techniques are based on reduction orders. We show how the monotonicity condition for orders can be weakened by using the notion of context-sensitive rewriting. The presented technique is very easy to implement and has been integrated in the Tyrolean Complexity Tool. We provide ample experimental data for assessing the viability of our method.

Automatic Evaluation of Context-Free Grammars (System Description)
SPEAKER: unknown

ABSTRACT. We implement an online judge for context-free grammars. Our system contains a list of problems describing formal languages, and asking for grammars generating them. A submitted proposal grammar receives a verdict of acceptance or rejection depending on whether the judge determines that it is equivalent to the reference solution grammar provided by the problem setter. Since equivalence of context-free grammars is an undecidable problem, we consider a maximum length L and only test equivalence of the generated languages up to words of length L. This length restriction is very often sufficient for the well-meant submissions. Since this restricted problem is still NP-complete, we design and implement methods based on hashing, SAT, and automata that perform well in practice.
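The length-bounded equivalence criterion can be illustrated by naive enumeration (the judge itself uses the far more scalable hashing-, SAT-, and automata-based methods mentioned above); the grammar encoding and `bounded_language` below are our own illustrative assumptions:

```python
# A grammar maps nonterminal names to lists of productions; each
# production is a string whose characters are terminals or
# nonterminal names.  Words of length <= L are computed by an
# increasing fixpoint, so left recursion cannot loop forever.

def bounded_language(grammar, start, L):
    """Set of terminal words of length <= L derivable from `start`."""
    lang = {A: set() for A in grammar}
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                words = {""}
                for sym in prod:
                    pieces = lang[sym] if sym in grammar else {sym}
                    words = {u + v for u in words for v in pieces
                             if len(u) + len(v) <= L}
                new = words - lang[A]
                if new:
                    lang[A] |= new
                    changed = True
    return lang[start]

g1 = {"S": ["aSb", ""]}   # generates a^n b^n
g2 = {"S": ["abS", ""]}   # generates (ab)^n
same_up_to_2 = bounded_language(g1, "S", 2) == bounded_language(g2, "S", 2)
same_up_to_4 = bounded_language(g1, "S", 4) == bounded_language(g2, "S", 4)
# same_up_to_2 is True (both give {"", "ab"}); same_up_to_4 is False
```

The example also shows why the bound must be chosen with care: the two grammars agree on all words up to length 2 and only diverge at length 4.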

10:45-12:05 Session 38E: Maximum Satisfiability (SAT)
Location: FH, Hörsaal 6
Solving Sparse Instances of Max SAT via Width Reduction and Greedy Restriction
SPEAKER: unknown

ABSTRACT. We present moderately exponential time, polynomial space algorithms for sparse instances of Max SAT. Our algorithms run in time of the form $O(2^{(1-\mu(c))n})$ for instances with $n$ variables and $cn$ clauses. Our deterministic and randomized algorithms achieve $\mu(c) = \Omega(\frac{1}{c^2\log^2 c})$ and $\mu(c) = \Omega(\frac{1}{c \log^3 c})$, respectively. Previously, an exponential space deterministic algorithm with $\mu(c) = \Omega(\frac{1}{c\log c})$ was shown by Dantsin and Wolpert [SAT 2006] and a polynomial space deterministic algorithm with $\mu(c) = \Omega(\frac{1}{2^{O(c)}})$ was shown by Kulikov and Kutzkov [CSR 2007].

Our algorithms have three new features. They can handle instances with (1) weights and (2) hard constraints, and also (3) they can solve counting versions of Max SAT. Our deterministic algorithm is based on the combination of two techniques, width reduction of Schuler and greedy restriction of Santhanam. Our randomized algorithm uses random restriction instead of greedy restriction.

Solving MaxSAT and #SAT on structured CNF formulas
SPEAKER: unknown

ABSTRACT. In this paper we propose a structural parameter of CNF formulas and use it to identify instances of weighted {\sc MaxSAT} and {\sc \#SAT} that can be solved in polynomial time. Given a CNF formula, we say that a set of clauses is precisely satisfiable if there is some complete assignment satisfying these clauses only. Let the $\mathtt{ps}$-value of the formula be the number of precisely satisfiable sets of clauses. Applying the notion of branch decompositions to CNF formulas and using the $\mathtt{ps}$-value as cut function, we define the $\mathtt{ps}$-width of a formula. For a formula given with a decomposition of polynomial $\mathtt{ps}$-width we give dynamic programming algorithms solving weighted {\sc MaxSAT} and {\sc \#SAT} in polynomial time. Combining with results of Belmonte and Vatshelle [Graph classes with structured neighborhoods and algorithmic applications, {\sc Theor. Comput. Sci.} 511: 54-65 (2013)], we get polynomial-time algorithms solving weighted {\sc MaxSAT} and {\sc \#SAT} for some classes of structured CNF formulas. For example, we get $O(n^4)$ algorithms for formulas having a linear ordering of variables and clauses such that for any variable $x$ occurring in clause $C$, if $x$ appears before $C$ then any variable between them also occurs in $C$, and if $C$ appears before $x$ then $x$ also occurs in any clause between them. Note that the class of incidence graphs of such formulas does not have bounded clique-width.

Cores in Core based MaxSat Algorithms: an Analysis
SPEAKER: unknown

ABSTRACT. A number of MaxSat algorithms are based on the idea of generating unsatisfiable cores. A common approach is to use these cores to construct cardinality (or pseudo-boolean) constraints that are then added to the formula. Each iteration extracts a core from the modified formula. Hence, the cores generated are not just cores of the original formula, they are cores of more complicated formulas. The effectiveness of core based algorithms for MaxSat is strongly affected by the structure of the cores of the original formula. Hence it is natural to ask the question: how are the cores found by these algorithms related to the cores of the original formula? In this paper we provide a formal characterization of this relationship. Our characterization allows us to identify a possible inefficiency in these algorithms. Hence, finding ways to address it may lead to performance improvements in these state-of-the-art MaxSat algorithms.

11:00-13:00 Session 39: Tutorial and invited talk (LC)
Location: MB, Prechtlsaal
Tutorial on Strategic and Extensive Games 1
SPEAKER: Krzysztof Apt
Applications of admissible computability

ABSTRACT. Admissible computability is an extension of traditional computability theory to ordinals beyond the finite ones. I will discuss two manifestations of admissible computability in the study of effective randomness and in the study of effective properties of uncountable structures.

12:10-13:00 Session 40: Minimal Unsatisfiability (SAT)
Location: FH, Hörsaal 6
On Computing Preferred MUSes and MCSes
SPEAKER: unknown

ABSTRACT. Minimal Unsatisfiable Subsets (MUSes) and Minimal Correction Subsets (MCSes) are essential tools for the analysis of unsatisfiable formulas. MUSes and MCSes find a growing number of applications, that include abstraction refinement in software verification, type debugging, software package management and software configuration, among many others. In some applications, there can exist preferences over which clauses to include in computed MUSes or MCSes, but also in computed Maximal Satisfiable Subsets (MSSes). Moreover, different definitions of preferred MUSes, MCSes and MSSes can be considered. This paper revisits existing definitions of preferred MUSes, MCSes and MSSes of unsatisfiable formulas, and develops a preliminary characterization of the computational complexity of computing preferred MUSes, MCSes and MSSes. Moreover, the paper investigates which of the existing algorithms and pruning techniques can be applied for computing preferred MUSes, MCSes and MSSes. Finally, the paper shows that the computation of preferred sets can have significant impact in practical performance.

MUS Extraction using Clausal Proofs
SPEAKER: Marijn Heule

ABSTRACT. Recent work introduced an effective method to extract reduced unsatisfiable cores of CNF formulas as a by-product of validating clausal proofs emitted by conflict-driven clause learning SAT solvers. In this paper, we demonstrate that this method for trimming CNF formulas can also benefit state-of-the-art tools for the computation of a Minimal Unsatisfiable Subformula (MUS). Furthermore, we propose a number of techniques that improve the quality of trimming, and demonstrate a significant positive impact on the performance of MUS extractors from the improved trimming.

13:00-14:30 Lunch Break
14:30-16:00 Session 41A: Tutorial by Jasmin Fisher (CSL-LICS)
Location: FH, Hörsaal 1
Understanding Biology through Logic
SPEAKER: Jasmin Fisher

ABSTRACT. The complexity in biology is staggering. Biological processes involve many components performing complicated interactions. Moreover, they are organized in hierarchies spanning multiple levels going from individual genes and proteins to signalling pathways, through cells and tissues, to organisms and populations. All the levels in this hierarchy are subject to a multitude of interdisciplinary efforts to model, analyse and devise ways to make sense of all this complexity. Mathematical models (and using computers to simulate them) have been used for these purposes for many years. The abilities of modern computers and their increased availability have greatly advanced this kind of modelling. However, in the last decade (or so) computational and logical thinking have started to change the way biological models are constructed and analysed. The work of the logic-in-computer-science research community to formalize and enable analysis of computer systems inspired several pioneers to try and harness these capabilities to the design and analysis of computer models of biological systems.  This approach, which we later termed ``executable biology'', calls for the construction of a program or another formal model that represents aspects of a biological process. By analysing and reasoning about such artefacts we gain additional insights into the mechanisms of the biological processes under study. Over the years, these efforts have demonstrated successfully how this logical perspective to biology can be beneficial for gaining new biological insights and directing new experimental avenues. In this tutorial, I will give an introduction to this approach. I will survey different modelling paradigms and how they are being used for biological modelling through models of cell fate decision-making, organism development, and molecular mechanisms underlying cancer. I will also highlight verification and the usage of formal methods to gain new biological insights. 
Time permitting, I will touch upon some of the challenges involved in applying synthesis to the development of models directly from experimental data and the efforts that are required to make the computational tools that we develop widely adopted by experimentalists and clinicians in the biological and medical research community.

14:30-16:00 Session 41B: Invited Talk (ITP)
Location: FH, Hörsaal 8
From Operational Models to Information Theory - Side-Channels in pGCL with Isabelle
SPEAKER: David Cock

ABSTRACT. In this paper, we formally derive the probabilistic security predicate (expectation) for a guessing attack against a system with side-channel leakage, modelled in pGCL. Our principal theoretical contribution is to link the process-oriented view, where attacker and system execute particular model programs, and the information-theoretic view, where the attacker solves an optimal-decoding problem, viewing the system as a noisy channel. Our practical contribution is to illustrate the selection of probabilistic loop invariants to verify such security properties, and the demonstration of a mechanical proof linking traditionally distinct domains.

Invited talk: Microcode Verification - Another Piece of the Microprocessor Verification Puzzle
SPEAKER: unknown

14:30-16:00 Session 41C (RTA-TLCA)
Location: Informatikhörsaal
Process types as a descriptive tool for distributed protocols
Applicative May- and Should-Simulation in the Call-by-Value Lambda Calculus with AMB
SPEAKER: David Sabel

ABSTRACT. Motivated by the question of whether sound and expressive applicative similarities exist for program calculi with should-convergence, this paper investigates expressive applicative similarities for the untyped call-by-value lambda-calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. The call-by-value lambda-calculus with the weaker nondeterministic construct erratic choice is also analyzed, and sound applicative similarities are provided. This justifies the expectation that sound and powerful similarities for should-convergence also exist for more expressive call-by-need higher-order calculi.

14:30-15:50 Session 41D: Complexity and Reductions (SAT)
Location: FH, Hörsaal 6
Fixed-parameter tractable reductions to SAT
SPEAKER: unknown

ABSTRACT. Today's SAT solvers have an enormous importance and impact in many practical settings. They are used as efficient back-end to solve many NP-complete problems. However, many reasoning problems are located at the second level of the Polynomial Hierarchy or even higher, and hence polynomial-time transformations to SAT are not possible, unless the hierarchy collapses. In certain cases one can break through these complexity barriers by fixed-parameter tractable (fpt) reductions which exploit structural aspects of problem instances in terms of problem parameters. Recent research established a general theoretical framework that supports the classification of parameterized problems on whether they admit such an fpt-reduction to SAT or not. We use this framework to analyze some problems that are related to Boolean satisfiability. We consider several natural parameterizations of these problems, and we identify for which of these an fpt-reduction to SAT is possible. The problems that we look at are related to minimizing an implicant of a DNF formula, minimizing a DNF formula, and satisfiability of quantified Boolean formulas.

On Reducing Maximum Independent Set to Minimum Satisfiability

ABSTRACT. Maximum Independent Set (MIS) is a well-known NP-hard graph problem, tightly related to other well-known NP-hard graph problems, namely Minimum Vertex Cover (MVC) and Maximum Clique (MaxClq). This paper introduces a novel reduction of MIS into Minimum Satisfiability (MinSAT). Besides providing an alternative approach for solving MIS, the proposed reduction serves as a simpler alternative proof of the NP-hardness of MinSAT. The reduction naturally maps the vertices of a graph into clauses, without requiring the inclusion of hard clauses. Moreover, it is shown that the proposed reduction uses fewer variables and clauses than the existing alternative of mapping MIS into Maximum Satisfiability (MaxSAT). The paper develops a number of optimizations to the basic reduction, which significantly reduce the total number of variables used. The experimental evaluation considered the reductions described in the paper as well as existing state-of-the-art approaches. The results show that the proposed approaches based on MinSAT are competitive with existing approaches.
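The tight relationship with MVC mentioned above is the classic fact that the complement of a maximum independent set is a minimum vertex cover; a brute-force sanity check of this on a tiny graph (this is not the paper's MinSAT reduction):

```python
# Exhaustive MIS search on a 4-cycle, then check that the
# complementary vertex set covers every edge.  Exponential time;
# for tiny illustrative graphs only.
from itertools import combinations

def is_independent(vs, edges):
    """True iff no edge has both endpoints in vs."""
    return all(not (u in vs and v in vs) for u, v in edges)

def max_independent_set(n, edges):
    """Search vertex subsets of {0..n-1}, largest first."""
    for k in range(n, -1, -1):
        for vs in combinations(range(n), k):
            if is_independent(set(vs), edges):
                return set(vs)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle C4
mis = max_independent_set(4, edges)        # {0, 2}
cover = set(range(4)) - mis                # complement: {1, 3}
assert all(u in cover or v in cover for u, v in edges)
```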

Conditional Lower Bounds for Failed Literals and Related Techniques

ABSTRACT. Simplification techniques for conjunctive normal form (CNF) formulas, used before (i.e., in preprocessing) and during (i.e., in inprocessing) search for satisfiability, have proven integral in speeding up SAT solvers in practice. Formula simplification tends to come with a price: in practice, the stronger the technique in terms of achieved simplification, the more costly in terms of computational effort it is to apply until fixpoint. However, formal understanding of the time complexity of different preprocessing techniques is rather limited at present. In this paper, we prove time-complexity lower bounds for different probing-based CNF simplification techniques, focusing on failed literal elimination and related techniques. More specifically, we show that improved algorithms for these simplification techniques would imply that the Strong Exponential Time Hypothesis (i.e., CNF-SAT is not solvable in time $2^{\epsilon n}$ for any $\epsilon<1$) is false.
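The technique whose complexity is bounded here is simple to state: assume a literal, unit-propagate, and if a conflict follows, then the literal's negation is entailed and can be asserted. A minimal sketch with naive propagation (far from the engineered implementations the lower bounds concern; the clause encoding is an illustrative choice):

```python
# Clauses are frozensets of nonzero integers, DIMACS-style:
# literal v is variable v, literal -v its negation.

def unit_propagate(clauses, assignment):
    """Extend `assignment` by unit propagation; None on conflict."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            open_lits = [l for l in clause if -l not in assignment]
            if not open_lits:
                return None  # every literal falsified: conflict
            if len(open_lits) == 1:  # unit clause forces a literal
                assignment.add(open_lits[0])
                changed = True
    return assignment

def failed_literals(clauses, variables):
    """All literals whose assumption propagates to a conflict."""
    return {lit for v in variables for lit in (v, -v)
            if unit_propagate(clauses, {lit}) is None}

cnf = [frozenset({1}), frozenset({-1, 2})]
# Assuming -1 or -2 propagates to a conflict, so both are failed
# literals, and the formula entails both 1 and 2.
```

Run until no new failed literal is found and this probing reaches the fixpoint whose cost the paper's conditional lower bounds address.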

14:30-16:00 Session 41E: Special session: Logic of Games and Rational Choice (LC)
Location: MB, Hörsaal 15
The DNA of logic and games.

ABSTRACT. Logic and games are entangled in delicate ways. Logics {\bf of} games are used to analyze how players reason and act in a game. I will discuss dynamic-epistemic logics that analyze various phases of play in this mode. But one can also study logic {\bf as} games, casting major logical notions as game-theoretic concepts. The two perspectives create a circle, or double helix if you will, of contacts all around. I will address this entanglement, and the issues to which it gives rise (\cite{vB}).

Dependence and independence - a logical approach

ABSTRACT. I will give an overview of dependence logic, the goal of which is to establish a basic logical theory of dependence and independence underlying seemingly unrelated subjects such as game theory, causality, random variables, database theory, experimental science, the theory of social choice, Mendelian genetics, etc. There is an abundance of new results in this field demonstrating a remarkable convergence. The concepts of (in)dependence in the different fields of humanities and sciences have surprisingly much in common and a common logic is starting to emerge.

14:30-16:00 Session 41F: Special session: Model Theory (LC)
Location: MB, Aufbaulabor
Pregeometries and definable groups

ABSTRACT. We will describe a program for analyzing groups and sets definable in certain pairs (R, P). Examples include:

(1) R is an o-minimal ordered group and P is a real closed field with bounded domain (joint work with Peterzil)
(2) R is an o-minimal structure and P is a dense elementary substructure of R (work in progress with Hieronymi)

In each of these cases, a relevant notion of a pregeometry and genericity is used.

Quasiminimal structures and excellence
SPEAKER: Meeri Kesala

ABSTRACT. A structure $M$ is quasiminimal if every definable subset of $M$ is either countable or co-countable. The field of complex numbers is a strongly minimal structure and hence quasiminimal, but if we add the natural exponential function, the quasiminimality of the structure becomes an open problem. Boris Zilber defined the non-elementary framework of \textit{quasiminimal excellent classes} in 2005 in order to show that his class of \textit{pseudoexponential fields} is uncountably categorical. He conjectured that the unique pseudoexponential field of cardinality $2^{\aleph_0}$ fitting into this framework is isomorphic to the complex numbers with exponentiation. A key property for the categoricity of quasiminimal excellent classes was the technical axiom of excellence, which was adopted from Shelah’s work for excellent sentences in $L_{\omega_1\omega}$. However, the original proof of the categoricity of pseudoexponential fields turned out to have a gap, and the problem lay in showing that the excellence axiom holds.

In the paper \textit{Quasiminimal structures and excellence}\cite{cite1} we fill the gap in the proof with a surprising result: the excellence axiom is actually redundant in the framework of quasiminimal excellent classes. This result elegantly combines methods from classification theory that were generalized to different non-elementary frameworks by a group of people. These methods have a combinatorial core idea that is independent of the compactness of first order logic. We also study whether other quasiminimal structures fit into this uncountably categorical framework.

The paper strengthens the belief that non-elementary methods can provide effective tools to analyse structures that are out of reach for traditional model-theoretic methods. Different frameworks have been suggested and the methods refined and there are many interesting paths in the ongoing research. The paper is joint work of Martin Bays, Bradd Hart, Tapani Hyttinen, MK and Jonathan Kirby.

14:30-16:00 Session 41G: Special session: Recursion Theory (LC)
Location: MB, Seminarraum 212/232
Uniform and non-uniform reducibilities of algebraic structures

ABSTRACT. The talk will be devoted to various versions of the notion of algorithmic reducibility between algebraic structures. In particular, we will consider the reducibilities under Turing operators, under enumeration operators, and under $\Sigma$-formulas. Several constructions of jump inversion in which these reducibilities do not coincide will be presented. Furthermore, the $\Sigma$-reducibility between direct sums of cyclic p-groups will be studied in detail.

Randomness in the Weihrauch degrees
SPEAKER: Rupert Hölzl


14:30-16:00 Session 41H: Special session: Perspectives on Induction (CSL-LICS and LC)
Location: MB, Prechtlsaal
Inductive Proofs & the Knowledge They Represent


ABSTRACT. Proofs by mathematical induction seem to require certain interdependencies between the instances of the generalizations they prove. The character and significance of these interdependencies will be the main subject of this talk.

Decidable languages for knowledge representation and inductive definitions: From Datalog to Datalog+/-
SPEAKER: Georg Gottlob

ABSTRACT. Datalog is a language based on function-free Horn clauses used to inductively define new relations from finite relational structures. Datalog has many nice computational and logical properties. For example, Datalog captures PTIME on ordered structures, which means that evaluating fixed Datalog programs (i.e., rule sets) over finite structures is in PTIME and, moreover, every PTIME property of ordered structures can be expressed as a Datalog program (see [1] for a survey). After giving a short overview of Datalog, we argue that this formalism has certain shortcomings and is not ideal for knowledge representation, in particular for inductive ontological knowledge representation and reasoning. We consequently introduce Datalog+/-, a new framework for tractable ontology querying and for a variety of other applications. Datalog+/- extends plain Datalog with features such as existentially quantified rule heads, negative constraints, and equalities in rule heads, while at the same time restricting the rule syntax so as to achieve decidability and tractability. In particular, we discuss three paradigms ensuring decidability: chase termination, guardedness, and stickiness, which were introduced and studied in [2, 3, 4, 5].
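As a concrete illustration of the inductive definitions Datalog expresses, here is a minimal bottom-up (naive) evaluation of the classic transitive-closure program, sketched in Python; the function name and representation are ours, not taken from any particular Datalog engine:

```python
# Bottom-up (naive) evaluation of the classic Datalog program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- path(X, Z), edge(Z, Y).

def transitive_closure(edges):
    """Compute path/2 as the least fixpoint of the two rules above."""
    path = set(edges)              # rule 1: every edge is a path
    changed = True
    while changed:                 # apply rule 2 until no new facts appear
        changed = False
        new_facts = {(x, w) for (x, z) in path
                            for (z2, w) in edges if z == z2}
        if not new_facts <= path:
            path |= new_facts
            changed = True
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))   # all six reachable pairs
```

The loop terminates because the set of derivable facts over a finite structure is finite and grows monotonically, which is the reason evaluation of a fixed program is in PTIME.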

[1] Dantsin, E.; Eiter, T.; Gottlob, G.; and Voronkov, A. 2001. Complexity and expressive power of logic programming. ACM Computing Surveys 33(3):374-425.

[2] Calì, A.; Gottlob, G.; and Kifer, M. 2013. Taming the infinite chase: Query answering under expressive relational constraints. Journal of Artificial Intelligence Research 48:115-174.

[3] Calì, A.; Gottlob, G.; and Lukasiewicz, T. 2012. A general Datalog-based framework for tractable query answering over ontologies. Journal of Web Semantics 14:57-83.

[4] Calì, A.; Gottlob, G.; and Pieris, A. 2012. Towards more expressive ontology languages: The query answering problem. Artificial Intelligence 193:87-128.

[5] Gottlob, G.; Manna, M.; and Pieris, A. 2013. Combining decidability paradigms for existential rules. Theory and Practice of Logic Programming 13(4-5):877-892.

16:00-16:30Coffee Break
16:30-18:30 Session 42A: Computability (CSL-LICS)
Location: FH, Hörsaal 1
Turing Machines with Atoms, Constraint Satisfaction Problems, and Descriptive Complexity
SPEAKER: unknown

ABSTRACT. We study deterministic computability over sets with atoms. We characterize those alphabets for which Turing machines with atoms determinize. To this end, the determinization problem is expressed as a Constraint Satisfaction Problem, and a characterization is obtained from deep results in CSP theory. As an application to Descriptive Complexity Theory, within a substantial class of relational structures including Cai-Fürer-Immerman graphs, we precisely characterize those subclasses where the logic IFP+C captures order-invariant polynomial time computation.

Beta Reduction is Invariant, Indeed
SPEAKER: unknown

ABSTRACT. Slot and van Emde Boas' invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model.

Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines.

The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, dubbed useful. Useful evaluation avoids those unsharing steps that merely unfold the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
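The size explosion problem can be made tangible with a toy model, where strings stand in for terms; this is our illustration of the phenomenon, not the paper's formalism:

```python
# A toy model of size explosion: each step mimics the beta step
#   (\x. <x, x>) t  ->  <t, t>
# which duplicates its argument, so a linear number of steps yields an
# exponentially long term.

def duplicate_step(term):
    """One duplicating reduction step, on terms represented as strings."""
    return f"<{term},{term}>"

term = "a"
for _ in range(10):          # 10 reduction steps ...
    term = duplicate_step(term)

print(len(term))             # ... yield a term of length 2^12 - 3
print(term.count("a"))       # containing 2^10 copies of the atom "a"
```

Sharing sidesteps this: a graph or explicit-substitution representation of the same reduction stays linear in the number of steps, which is exactly what the LSC exploits.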

On the Pigeonhole and Related Principles in Deep Inference and Monotone Systems
SPEAKER: Anupam Das

ABSTRACT. We construct quasipolynomial-size proofs of the propositional pigeonhole principle in the deep inference system KS, addressing an open problem raised in previous works and matching the best known upper bound for the more general class of monotone proofs.

We make significant use of monotone formulae computing boolean threshold functions, an idea previously considered in works of Atserias et al. The main construction, monotone proofs witnessing the symmetry of such functions, makes use of an implementation of merge-sort in the design of proofs in order to tame the structural behaviour of atoms, and so the complexity of normalisation. Proof transformations from previous work on atomic flows are then employed to yield appropriate proofs in KS.

As further results we show that our constructions can be applied to provide quasipolynomial-size proofs of the Parity principle and the Generalised Pigeonhole principle, and n^(O(log log n))-size proofs of the weak pigeonhole principle, thereby also improving the best known bounds for monotone proofs.

Faster decision of first-order graph properties
SPEAKER: Ryan Williams

ABSTRACT. First-order logic captures a vast number of computational problems on graphs. We study the time complexity of deciding graph properties definable by first-order sentences with $k$ variables in prenex normal form. The trivial algorithm for this problem runs in $O(n^k)$ time on $n$-node graphs (the big-O hides the dependence on $k$).

Answering a question of Miklós Ajtai, we give the first algorithms running faster than the trivial algorithm, in the general case of arbitrary first-order sentences and arbitrary graphs. One algorithm runs in $O(n^{k-3+\omega}) \leq O(n^{k-0.627})$ time for all $k \geq 3$, where $\omega < 2.373$ is the $n \times n$ matrix multiplication exponent. By applying fast rectangular matrix multiplication, the algorithm can be improved further to run in $n^{k-1+o(1)}$ time, for all $k \geq 9$. Finally, we observe that the exponent $k-1$ is optimal, under the popular hypothesis that CNF satisfiability with $n$ variables and $m$ clauses cannot be solved in $(2-\epsilon)^n \cdot \mathrm{poly}(m)$ time for some $\epsilon > 0$.
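One concrete instance of using matrix multiplication to beat the trivial algorithm is the 3-variable sentence "there exist x, y, z with E(x,y), E(y,z), E(z,x)" (a directed triangle). The sketch below is plain Python, so the matrix product itself still takes cubic time, but the point is that the property reduces to one matrix product, which fast multiplication computes in $O(n^{\omega})$ rather than the trivial $O(n^3)$:

```python
# Triangle detection via a boolean matrix product: adj2 = adj * adj
# records all 2-paths; a triangle is a 2-path closed by a back-edge.

def has_triangle(adj):
    n = len(adj)
    # adj2[x][z] holds iff some y has edges x->y and y->z
    adj2 = [[any(adj[x][y] and adj[y][z] for y in range(n))
             for z in range(n)] for x in range(n)]
    # a triangle exists iff some edge z->x closes a 2-path from x to z
    return any(adj2[x][z] and adj[z][x]
               for x in range(n) for z in range(n))

triangle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # directed 3-cycle
path     = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # no cycle
print(has_triangle(triangle), has_triangle(path))   # True False
```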

16:30-18:30 Session 42B: Concurrency (CSL-LICS)
Location: FH, Hörsaal 5
Logic for Communicating Automata with Parameterized Topology

ABSTRACT. We introduce parameterized communicating automata (PCA) as a model of systems where finite-state processes communicate through FIFO channels. Unlike classical communicating automata, a given PCA can be run on any network topology of bounded degree. The topology is thus a parameter of the system. We provide various Büchi-Elgot-Trakhtenbrot theorems for PCA, which roughly read as follows: Given a logical specification \phi and a class of topologies T, there is a PCA that is equivalent to \phi on all topologies from T. We give uniform constructions which allow us to instantiate T with concrete classes such as pipelines, ranked trees, grids, rings, etc. The proofs build on a locality theorem for first-order logic due to Schwentick and Barthelmann, and they exploit concepts from the non-parameterized case, notably a result by Genest, Kuske, and Muscholl.

Equilibria of Concurrent Games on Event Structures

ABSTRACT. Event structures are a model of concurrency with a natural game-theoretic interpretation, which was initially given in terms of zero-sum games only. This paper studies an extension of such games to include a much wider class of game types and solution concepts. The extension permits modelling scenarios where, for instance, cooperation or independent goal-driven behaviour of computer agents is desired. Specifically, we define non-zero-sum games on event structures, and give full characterisations---existence and completeness results---of the kinds of games, payoff sets, and strategies for which Nash equilibria and subgame perfect Nash equilibria always exist. The game semantics of various logics and systems are outlined to illustrate the power of this framework.

Compositional Verification of Termination-Preserving Refinement of Concurrent Programs
SPEAKER: unknown

ABSTRACT. Many verification problems can be reduced to refinement verification. However, existing work on verifying refinement of concurrent programs either fails to prove the preservation of termination, allowing a diverging program to trivially refine any programs, or is difficult to apply in compositional thread-local reasoning. In this paper, we first propose a new simulation technique, which establishes termination-preserving refinement and is a congruence with respect to parallel composition. Then we give a proof theory for the simulation, which is the first Hoare-style concurrent program logic supporting termination-preserving refinement proofs. We show two key applications of our logic, i.e., verifying linearizability and lock-freedom together for fine-grained concurrent objects, and verifying full correctness of optimizations of concurrent algorithms.

Symmetry in Concurrent Games
SPEAKER: unknown

ABSTRACT. Behavioural symmetry is introduced into concurrent games. It expresses when plays are essentially the same. A characterization of strategies on games with symmetry is provided. This leads to a bicategory of strategies on games with symmetry. Symmetry helps allay the perhaps overly-concrete nature of games and strategies, and shares many mathematical features with homotopy. In the presence of symmetry we can consider monads for which the monad laws do not hold on the nose but do hold up to symmetry. This broadening of the concept of monad has a dramatic effect on the types concurrent games can support and allows us, for example, to recover the replication needed to express and extend traditional game semantics.

16:30-18:00 Session 42C: Codatatypes (ITP)
Location: FH, Hörsaal 8
Truly Modular (Co)datatypes for Isabelle/HOL

ABSTRACT. We extended Isabelle/HOL with a pair of definitional commands for datatypes and codatatypes. They support mutual and nested (co)recursion through well-behaved type constructors, including mixed recursion–corecursion, and are complemented by syntaxes for introducing primitively (co)recursive functions and by a general proof method for reasoning coinductively. As a case study, we ported Isabelle’s Coinductive library to use the new commands, eliminating the need for tedious ad hoc constructions.

Recursive Functions on Codatatypes via Domains and Topologies

ABSTRACT. The usual definition facilities in theorem provers cannot handle all recursive functions on codatatypes; the filter function on lazy lists is a prime counterexample. We present two new ways of directly defining functions like filter by exploiting their dual nature as producers and consumers. Borrowing from domain theory and topology, we define them as a least fixpoint (producer view) and as a continuous extension (consumer view). Both constructions yield proof principles that allow elegant proofs.
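The difficulty with filter can be made observable in an executable sketch, with Python generators standing in for lazy lists; the fuel bound is our device for exposing non-productivity, not part of the paper's constructions:

```python
# Why "filter" is problematic on codatatypes: producing the next element
# may require inspecting unboundedly many inputs, so the function is not
# productive in general. The fuel bound makes the failure observable.
from itertools import count, islice

def filter_with_fuel(pred, stream, fuel):
    """Yield elements of `stream` satisfying `pred`, giving up after
    scanning `fuel` consecutive non-matching elements."""
    misses = 0
    for x in stream:
        if pred(x):
            misses = 0
            yield x
        else:
            misses += 1
            if misses > fuel:
                raise RuntimeError("not productive within fuel bound")

evens = filter_with_fuel(lambda n: n % 2 == 0, count(0), 100)
print(list(islice(evens, 5)))       # [0, 2, 4, 6, 8]

negs = filter_with_fuel(lambda n: n < 0, count(0), 100)
try:
    next(negs)                      # never finds a matching element
except RuntimeError as e:
    print(e)
```

The two views in the paper correspond to the two behaviours above: the producer (least-fixpoint) view explains the successful case, while the consumer (continuous-extension) view accounts for inputs on which no output is ever produced.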

Experience Implementing a Performant Category-Theory Library in Coq
SPEAKER: Jason Gross

ABSTRACT. We describe our experience implementing a broad category-theory library in Coq. Category theory and computational performance are not usually mentioned in the same breath, but we have needed substantial engineering effort to teach Coq to cope with large categorical constructions without slowing proof script processing unacceptably. In this paper, we share the lessons we have learned about how to represent very abstract mathematical objects and arguments in Coq and how future proof assistants might be designed to better support such reasoning. One particular encoding trick to which we draw attention allows category-theoretic arguments involving duality to be internalized in Coq's logic with definitional equality. Ours may be the largest Coq development to date that uses the relatively new Coq version developed by homotopy type theorists, and we reflect on which new features were especially helpful.

16:30-18:15 Session 42D (RTA-TLCA)
Location: Informatikhörsaal
A Coinductive Confluence Proof for Infinitary Lambda-Calculus.

ABSTRACT. We give a coinductive proof of confluence, up to equivalence of root-active subterms, of infinitary lambda-calculus. We also show confluence of Böhm reduction (with respect to root-active terms) in infinitary lambda-calculus. In contrast to previous proofs, our proof makes heavy use of coinduction and does not employ the notion of descendants.

Confluence by Critical Pair Analysis
SPEAKER: unknown

ABSTRACT. Knuth and Bendix showed that confluence of a terminating first-order rewrite system can be reduced to the joinability of its finitely many critical pairs. We show that this is still true of a rewrite system RT ∪ RNT such that RT is terminating and RNT is a left-linear, rank non-increasing, possibly non-terminating rewrite system. Confluence can then be reduced to the joinability of the critical pairs of RT and to the existence of decreasing diagrams for the critical pairs of RT inside RNT as well as for the rigid parallel critical pairs of RNT.

Proving Confluence of Term Rewriting Systems via Persistency and Decreasing Diagrams
SPEAKER: unknown

ABSTRACT. The decreasing diagrams technique (van Oostrom, 1994) has been successfully used to prove confluence of rewrite systems in various ways; using rule-labelling (van Oostrom, 2008), it can also be applied directly to prove confluence of some linear term rewriting systems (TRSs) automatically. Some efforts to extend the rule-labelling are known, but non-left-linear TRSs have remained beyond its scope. We give two methods for automatically proving confluence of non-(left-)linear TRSs with the rule-labelling. The key idea of our methods is to combine the decreasing diagrams technique with persistency of confluence (Aoto & Toyama, 1997).

Conditional Confluence (System Description)

ABSTRACT. This paper describes the Conditional Confluence tool, a fully automatic confluence checker for first-order conditional term rewrite systems. The tool implements various confluence criteria that have been proposed in the literature. A simple technique is presented to test conditional critical pairs for infeasibility, which makes conditional confluence criteria more useful. Detailed experimental data is presented.

16:30-18:00 Session 42E: Tool Papers (SAT)
Location: FH, Hörsaal 6
MPIDepQBF: Towards Parallel QBF Solving without Knowledge Sharing
SPEAKER: unknown

ABSTRACT. Inspired by the recent work on parallel SAT solving, we present a lightweight approach for concurrently solving quantified Boolean formulas (QBFs). In particular, our approach abstains from globally exchanging information between working processes and keeps learnt information only locally. To this end, we equipped the state-of-the-art QBF solver DepQBF with assumption-based reasoning and integrated it as a backend solver in our novel solver MPIDepQBF. Extensive experiments on standard computers as well as on the supercomputer Tsubame show the impact of our approach.

DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs

ABSTRACT. The DRAT-trim tool is a satisfiability proof checker based on the new DRAT proof format. Unlike its predecessor, DRUP-trim, all presently known SAT solving and preprocessing techniques can be validated using DRAT-trim. Checking time of a proof is comparable to the running time of the corresponding solver. Memory usage is also similar to solving memory consumption, which overcomes a major hurdle of resolution-based proof checkers. The DRAT-trim tool can emit trimmed formulas, optimized proofs, and new TraceCheck$^+$ dependency graphs. We describe the output that is produced, what optimizations have been made to check RAT clauses, and potential applications of the tool.
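The core check behind DRUP/DRAT-style clausal proofs can be sketched compactly. The snippet below covers the RUP ("reverse unit propagation") case only; real DRAT checking additionally handles RAT clauses and deletion lines, and this code is our illustration, not DRAT-trim's implementation:

```python
# A clause C is a RUP consequence of a formula F if asserting the
# negation of every literal of C and running unit propagation on F
# yields a conflict (a fully falsified clause). Literals are nonzero
# integers, DIMACS-style: v and -v.

def rup_check(formula, clause):
    """Check that `clause` follows from `formula` by unit propagation."""
    asg = {-lit for lit in clause}        # falsify the candidate clause
    changed = True
    while changed:
        changed = False
        for cl in formula:
            unassigned = [l for l in cl if -l not in asg]
            if not unassigned:
                return True               # conflict: clause is RUP
            if len(unassigned) == 1 and unassigned[0] not in asg:
                asg.add(unassigned[0])    # unit clause: propagate
                changed = True
    return False

# (1 or 2), (1 or -2), (-1 or 2), (-1 or -2) is unsatisfiable; a proof
# first derives [1] by RUP, after which the empty clause is also RUP.
f = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
print(rup_check(f, [1]), rup_check(f + [[1]], []))   # True True
```

Checking each proof line this way is what keeps verification time comparable to solving time: every step is a bounded unit-propagation pass rather than a full resolution reconstruction.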

Automatic Evaluation of Reductions between NP-complete Problems
SPEAKER: unknown

ABSTRACT. We implement an online judge for evaluating correctness of reductions between NP-complete problems. The site has a list of exercises asking for a reduction between two given problems. Internally, the reduction is evaluated by means of a set of tests. Each test consists of an input of the first problem and gives rise to an input of the second problem through the reduction. The correctness of the reduction, that is, the preservation of the answer between both problems, is checked by applying additional reductions to SAT and using a state-of-the-art SAT solver. In order to represent the reductions, we have defined a new programming language called REDNP. On the one hand, REDNP has specific features for describing typical concepts that frequently appear in reductions, like graphs and clauses. On the other hand, we impose severe limitations on REDNP in order to prevent malicious submissions, like the ones with an embedded SAT solver.
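A miniature version of this checking pipeline can be sketched in Python rather than REDNP (all names here are ours): reduce graph 2-colourability to SAT, then verify that the reduction preserves the answer on small instances, with brute force standing in for the SAT solver:

```python
# Reduce 2-colourability to CNF-SAT and check answer preservation.
from itertools import product

def two_col_to_cnf(n, edges):
    # variable i+1 means "vertex i is coloured white"; the two clauses
    # per edge force its endpoints to receive different colours
    return [c for (u, v) in edges
              for c in ([u + 1, v + 1], [-(u + 1), -(v + 1)])]

def brute_force_sat(n_vars, cnf):
    # stand-in for the SAT solver: try every assignment
    return any(all(any((lit > 0) == asg[abs(lit) - 1] for lit in cl)
                   for cl in cnf)
               for asg in product([False, True], repeat=n_vars))

path = [(0, 1), (1, 2)]             # 2-colourable
tri  = [(0, 1), (1, 2), (2, 0)]     # odd cycle: not 2-colourable
print(brute_force_sat(3, two_col_to_cnf(3, path)),
      brute_force_sat(3, two_col_to_cnf(3, tri)))   # True False
```

The judge's check is the same idea at scale: run the submitted reduction on test inputs, reduce both sides to SAT, and compare the solver's answers.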

Open-WBO: a Modular MaxSAT Solver
SPEAKER: Ruben Martins

ABSTRACT. This paper presents Open-WBO, a new MaxSAT solver. Open-WBO has two main features. First, it is an open-source solver that can be easily modified and extended. Most MaxSAT solvers are not available as open source, making it hard to extend and improve current MaxSAT algorithms. Second, Open-WBO may use any MiniSAT-like solver as the underlying SAT solver. Like many other MaxSAT solvers, Open-WBO relies on successive calls to a SAT solver. Even though new techniques are proposed for SAT solvers every year, for many MaxSAT solvers it is hard to change the underlying SAT solver. With Open-WBO, advances in SAT technology will result in a free improvement in the performance of the solver. In addition, the paper uses Open-WBO to evaluate the impact of different SAT solvers on the performance of MaxSAT algorithms.

16:30-18:30 Session 42F: Contributed talks A1 (LC)
Location: MB, Hörsaal 14
Natural deduction for intuitionistic differential linear logic

ABSTRACT. Recent investigations in denotational semantics led Ehrhard to define a new model of lambda-calculus and linear logic called finiteness spaces. In finiteness spaces, types are interpreted as topological vector spaces, lambda-terms and (via the Curry-Howard correspondence) intuitionistic proofs are interpreted as analytic functions in topological vector spaces.

However, analytic functions are infinitely differentiable, thus the question arises whether differentiation can be defined as a meaningful syntactic operation. A positive answer from a computational perspective is given by the differential lambda-calculus, an extension of the lambda-calculus in which the operations of differentiation and Taylor expansion of a lambda-term are definable. One of the interests of the differential lambda-calculus lies in the analogy that can be drawn between the usual notion of differentiation (i.e. linearity in the mathematical sense) and its computational meaning (i.e. linearity in the computational sense).

It turns out that differentiation and the actual infinite operation involved in Taylor expansion also make sense in a purely logical setting. The right syntax is provided by differential linear logic (DiLL), analyzed in terms of differential interaction nets by T. Ehrhard and L. Regnier (2006). DiLL is an extension of LL characterized by three new rules dealing with the ! modality: cocontraction, codereliction, and coweakening. These rules are called "co-structural" and, in a one-sided sequent calculus setting, can be considered as symmetric duals of the ?-rules. Co-structural rules give a logical status to differentiation.

In this paper, we introduce a natural deduction system for intuitionistic DiLL. We show normalization and, as corollaries, subformula, separability and introduction form properties for this system. Its relationships with the natural deduction systems for intuitionistic logic are discussed.

An outline of intuitionistic epistemic logic

ABSTRACT. We outline an intuitionistic view of knowledge which maintains the Brouwer-Heyting-Kolmogorov semantics and is consistent with Williamson's suggestion that intuitionistic knowledge is the result of verification and that verifications do not necessarily yield strict proofs. On this view, A --> KA is valid and KA --> A is not. The former expresses the constructivity of truth, while the latter demands that verifications yield strict proofs. Unlike in the classical case, where

Classical Knowledge ==> Classical Truth,

intuitionistically we have

Intuitionistic Truth ==> Intuitionistic Knowledge.

Consequently, we show that KA --> A is a distinctly classical principle, too strong as the intuitionistic truth condition for knowledge, ``false is not known,'' which can be more adequately expressed by, e.g., ~(KA & ~A) or, equivalently, ~(K⊥).

We construct a system of intuitionistic epistemic logic:

IEL = intuitionistic logic IPC + K(A-->B)-->(KA-->KB) + (A-->KA) + ~(K⊥)

provide a Kripke semantics for it and prove IEL soundness, completeness and the disjunction property.

IEL can be embedded into an extension of S4, S4V, via the Gödel embedding ``box every subformula.'' S4V is a bi-modal classical logic consisting of the rules and axioms of S4 for [] and D for K, together with the connecting axiom []A --> KA. The soundness of the embedding is proved.
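For concreteness, the ``box every subformula'' translation $tr$ can be spelled out as follows; this is our reconstruction for illustration, and the talk's official definition may differ in detail:

```latex
\begin{align*}
  tr(p)                  &= \Box p && \text{$p$ atomic}\\
  tr(A \circ B)          &= \Box\,(tr(A) \circ tr(B)) && \circ \in \{\land, \lor, \rightarrow\}\\
  tr(\mathsf{K} A)       &= \Box\,\mathsf{K}\, tr(A)
\end{align*}
```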

Within the framework of IEL, the knowability paradox is resolved in a constructive manner. Namely, the standard Church-Fitch proof reduces the intuitionistic knowability principle A --> <>KA to A --> ~~KA, which is an IEL-theorem. Hence the knowability paradox disappears in the domain of IEL, since neither of these principles is intuitionistically controversial. We argue that previous attempts to analyze the paradox were insufficiently intuitionistic.

Some properties related to the existence property in intermediate predicate logics

ABSTRACT. We discuss relationships between the existence property (EP) and its weak variants in intermediate predicate logics.

In [2], we provided a negative answer to Ono's problem P52 in intermediate predicate logics (Does EP imply the disjunction property? [1]), and presented some related results.

To solve the problem, we considered two variants of EP in super-intuitionistic predicate logics and used them to construct counterexamples in intermediate logics. One variant is an extreme EP: for every \exists xA(x), L |- \exists xA(x) implies that there exists a fresh individual variable v such that L |- A(v). This property is so extreme that no intermediate predicate logic has it. However, if we restrict \exists xA(x) to sentences, EP implies this property, which we call the sentential existence property (sEP). The other is a weak variant of EP: an intermediate predicate logic L is said to have the weak existence property (wEP) if, for every \exists xA(x) that contains no free variables other than v_1, ..., v_n, L |- \exists xA(x) implies L |- A(v_1)\vee...\vee A(v_n). It is then easy to see that EP implies wEP, and wEP implies sEP.

In the present talk, we show that the converses of these implications do not hold in intermediate predicate logics.

[1] H. Ono, Some problems in intermediate predicate logics, Reports on Mathematical Logic, vol. 21 (1987), pp.55--67.

[2] N.-Y. Suzuki, A negative solution to Ono's problem P52: Existence and disjunction properties in intermediate predicate logics, submitted.

On Halldén complete modal logics determined by homogeneous frames

ABSTRACT. For several normal modal logics we construct countably many normal extensions that are Halldén complete, as well as uncountably many that are not. Our approach to this task is purely semantic.

A universal diagonal schema by fixed-points and Yablo’s paradox
SPEAKER: Ahmad Karimi

ABSTRACT. In 1906, Russell [5] showed that all the known set-theoretic paradoxes (till then) had a common form. In 1969, Lawvere [3] used the language of category theory to achieve a deeper unification, embracing not only the set-theoretic paradoxes but incompleteness phenomena as well. To be precise, Lawvere gave a common form to Cantor's theorem about power sets, Russell's paradox, Tarski's theorem on the undefinability of truth, and Gödel's first incompleteness theorem. In 2003, Yanofsky [7] extended Lawvere's ideas using straightforward set-theoretic language and proposed a universal schema for diagonalization based on Cantor's theorem. In this universal schema for diagonalization, the existence of a certain (diagonalized-out and contradictory) object implies the existence of a fixed-point for a certain function. He showed how self-referential paradoxes, incompleteness, and fixed-point theorems all emerge from the single generalized form of Cantor's theorem. Yanofsky extended Lawvere's analysis to include the Liar paradox, the paradoxes of Grelling and Richard, Turing's halting problem, an oracle version of the P=?NP problem, time travel paradoxes, Parikh sentences, Löb's paradox and Rice's theorem. In this talk, we fit more theorems into the universal schema of diagonalization, such as Euclid's theorem on the infinitude of the primes, and new proofs of Boolos [1] for Cantor's theorem on the non-equinumerosity of a set with its powerset. We also show the existence of Ackermann-like functions (which dominate a given set of functions, such as the primitive recursive functions) using the schema. Furthermore, we formalize a reading of Yablo's paradox [6], the most challenging paradox in recent years, in the framework of Linear Temporal Logic (LTL [2]) and the diagonal schema, and show how Yablo's paradox involves circularity by presenting it in the framework of LTL. All in all, we turn Yablo's paradox into a genuine mathematico-logical theorem.
This is the first time that Yablo's paradox becomes a (new) theorem in mathematics and logic. We also show that Priest's [4] inclosure schema fits into our universal diagonal/fixed-point schema. The inclosure schema was used by Priest to argue for the self-referentiality of Yablo's sequence of sentences, in which no sentence directly refers to itself but the whole sequence does.


[1] George Boolos, Constructing Cantorian Counterexamples, Journal of Philosophical Logic, vol. 26 (1997), no. 3, pp. 237-239.

[2] Fred Kroger & Stephan Merz, Temporal Logic and State Systems, EATCS Texts in Theoretical Computer Science, Springer, 2008.

[3] F. William Lawvere, Diagonal arguments and cartesian closed categories, Category theory, homology theory and their applications II (Seattle Research Center of the Battelle Memorial Institute), LNM 92, Springer, Berlin, 1969, pp. 134-145.

[4] Graham Priest, Yablo’s Paradox, Analysis, vol. 57 (1997), no. 4, pp. 236-242.

[5] Bertrand Russell, On some difficulties in the theory of transfinite numbers and order types, Proceedings of the London Mathematical Society, vol. s2-4 (1907), no. 1, pp. 29-53.

[6] Stephen Yablo, Paradox without Self-Reference, Analysis, vol. 53 (1993), no. 4, pp. 251-252.

[7] Noson S. Yanofsky, A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points, Bulletin of Symbolic Logic, vol. 9 (2003), no. 3, pp. 362-386.

Canonical extensions and prime filter completions of poset expansions.
SPEAKER: unknown

ABSTRACT. The algebraic models of substructural logics are residuated ordered algebras [2]. Embedding a residuated ordered algebra into a complete algebra of the same class has many applications in logic, e.g., the canonical extension is used to obtain relational semantics for non-classical logics [1].

The underlying sets of the algebraic structures of interest are often partially ordered. The canonical extensions of posets have been studied in [1, 2]. Upon closer inspection it can be seen that the completions in [1] and [2] are generally different. Both use a construction, first appearing in [3], based on a Galois connection between sets of filters and ideals; however, the choice of filters differs.

We investigate the construction from [3] for various choices of filters and ideals, consider the extension of operations defined on the posets and focus on some specific properties of completions obtained via this construction. Next we present a construction for completions of posets that makes use of the prime filters of the posets. We show that the completion obtained via this second construction is isomorphic to the former for a particular choice of filters.

16:30-18:30 Session 42G: Contributed talks B1 (LC)
Location: MB, Hörsaal 15
On the compactness theorem in many valued logics

ABSTRACT. Many-valued logics are nowadays entering new areas of computer science. As they are extensively used in various areas, the theoretical investigation of their properties is a challenging field of research [1]. First, it is worth mentioning that axiomatic systems for many-valued logics are not well developed. Second, many notions do not extend naturally to many-valued logics from their analogues in classical or other well-developed non-classical logics. One of the key properties characterizing first-order logic is compactness. We formulate an analogue of the classical compactness theorem for an arbitrary N-valued logic. To prove it, overloading operators are constructed.

Formalizing vagueness as a doxastic, relational concept
SPEAKER: Thomas Benda

ABSTRACT. Descriptions and statements about the physical world often involve vague predicates, e.g., "x is red". It has become a common procedure to assign vague predicates intermediate truth values that are real numbers between 0 and 1. However, there is no satisfactory account of what it means to be true to a given degree, which leaves doxastic degrees as the only option. Furthermore, real numbers provide an almost absurd accuracy as well as a natural metric, where in fact we want to state no more than, say, that x is rather red than not, perhaps redder or less red than some y. That suggests considering vagueness as a relational notion.

The vagueness relation thus established is a partial order. Advantages of a relational account of vagueness are that vague predicates form a comparatively weak structure without a metric and that the well-known problem of higher-order vagueness vanishes.

There is no reason not to implement doxastic degrees on the object language level. Furthermore, with the practice of evaluating vague predicates, relational vagueness should be allowed to depend on perception and epistemic as well as pragmatic context and hence be non-extensional. To set up a requisite formal language, we enclose vagueness predicates in quotation marks and perform their assessment with a background in mind which provides epistemic and pragmatic context. Thus a ternary predicate is introduced, B'Ax''Ay'b, read "I believe, with background b, Ax to at least as high a degree as Ay". Given background b, believing Ax with absolute confidence is formalized as B'Ax''0=0'b.

Such a formalization may be applied to the assignment of values to physical magnitudes, which uses approximations and error bars. "The value of a is v" would then be vague just as "x is red" is, acknowledging the fuzzy nature of experimental, particularly macroscopic, physics.

Two Formalisms for a Logic of Generalized Truth Values

ABSTRACT. This report concerns the problem of constructing a tableau-based proof procedure for a logic of generalized truth values. We propose two different tableau-style formalisations for a logic which captures a syntactical analogue of the semantic logical consequence relation. One of them is more or less 'traditional' and resembles tableau systems for the relevant logic FDE. The other is appropriate for designing a proof-search procedure and is based on the well-known KE formalism.

Characterising Logics through their Admissible Rules

ABSTRACT. The admissible rules of a logic are those rules under which the set of its theorems is closed. Many non-classical logics have interesting admissible rules, and these admissible rules carry quite a bit of information. In fact, some logics can even be completely characterised by way of their admissible rules. Consider the lattice of intermediate logics, that is, the lattice of consistent extensions of propositional intuitionistic logic. The disjunction property is an example of an admissible rule. It is well-known that there are continuum many logics in that lattice that enjoy the disjunction property, Medvedev's logic and Kreisel–Putnam logic to name but two. It was shown by Maksimova that Medvedev's logic can be characterised as the maximal intermediate logic above Kreisel–Putnam logic with the disjunction property. Intuitionistic propositional logic itself can also be characterised as the maximal intermediate logic that admits a particular set of rules, as has been independently confirmed by Skura and Iemhoff. The rules they employed arose independently in the work of Citkin and Visser. These rules can be stratified in a natural way with respect to admissibility in the logics of bounded branching, also known as the Gabbay–de Jongh logics. We will present a characterisation of each of these logics as the maximal intermediate logic admitting a particular stratum of the aforementioned rules.

Models for the probabilistic sequent calculus

ABSTRACT. The usual approach to treating the probability of a sentence leads to a kind of polymodal logic with iterated (or non-iterated) probability operators over formulae (see [3]). On the other hand, there have been some papers dealing with probabilistic forms of inference rules (see [1], [2] and [4]). The sequent calculi present a particular mode of deduction-relation analysis. The combination of these concepts, the sentence probability and the deduction relation formalized in a sequent calculus, makes it possible to build a probabilized sequent calculus --- the system {\bf LKprob}. Sequents in {\bf LKprob} are of the form $\Gamma\vdash_a^b\Delta$, meaning that 'the probability of provability of $\Gamma\vdash\Delta$ lies in the interval $[a,b] \cap I$', where $I$ is a finite subset of the real interval $[0,1]$.

Let $\text{\rm Seq}$ be the set of all sequents of the form $\Gamma\vdash\Delta$. A model for {\bf LKprob} is any mapping $p: \text{\rm Seq} \to [0,1]$ satisfying: (i) $p(A\vdash A)=1$, for any formula $A$; (ii) if $p(AB\vdash )=1$, then $p( \vdash AB)=p( \vdash A)+p( \vdash B)$, for any formulas $A$ and $B$; (iii) if sequents $\Gamma\vdash\Delta$ and $\Pi\vdash\Lambda$ are equivalent in $\text{\bf LK}$, in the sense that there are proofs of both sequents $\bigwedge \Gamma \rightarrow \bigvee \Delta \vdash \bigwedge \Pi \rightarrow \bigvee \Lambda$ and $\bigwedge \Pi \rightarrow \bigvee \Lambda\vdash \bigwedge \Gamma \rightarrow \bigvee \Delta$ in $\mathbf{LK}$, then $p(\Gamma\vdash\Delta)=p(\Pi\vdash\Lambda)$.

We prove that our probabilistic sequent calculus {\bf LKprob} is sound and complete with respect to the models just described.
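A natural family of such models, offered here only as an illustration and not claimed in the abstract (we read the sequents $AB\vdash$ and $\vdash AB$ as $A,B\vdash$ and $\vdash A,B$), arises from a probability measure on classical valuations:

```latex
% Fix a probability measure \mu on the space of classical valuations
% (finite, or equipped with a suitable \sigma-algebra), and define
p(\Gamma \vdash \Delta) \;=\;
  \mu\bigl(\{\, v \mid v \models \textstyle\bigwedge\Gamma
      \rightarrow \bigvee\Delta \,\}\bigr).
% (i)   p(A \vdash A) = \mu(\text{all valuations}) = 1.
% (ii)  if p(A, B \vdash) = 1 then \mu(A \wedge B) = 0, so by
%       inclusion--exclusion \mu(A \vee B) = \mu(A) + \mu(B),
%       i.e. p(\vdash A, B) = p(\vdash A) + p(\vdash B).
% (iii) LK-equivalent sequents are classically equivalent, hence are
%       satisfied by the same valuations and receive the same \mu-value.
```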


\bibitem{cite1} {\scshape A. M. Frisch, P. Haddawy}, {\itshape Anytime deduction for probabilistic logic}, {\bfseries\itshape Artificial Intelligence}, vol.~69 (1993), pp.~93--122.

\bibitem{cite2} {\scshape T. Hailperin}, {\itshape Probability logic}, {\bfseries\itshape Notre Dame Journal of Formal Logic}, vol.~25 (1984), pp.~198--212.

\bibitem{cite3} {\scshape Z. Ognjanovi\'c, M. Ra\v skovi\'c, Z. Markovi\'c}, {\itshape Probability logics}, {\bfseries\itshape Logic in Computer Science}, Zbornik radova 12 (20), Z. Ognjanovi\'c (ed.), Mathematical Institute SANU, Belgrade, 2009, pp.~35--111.

\bibitem{cite4} {\scshape C. G. Wagner}, {\itshape Modus tollens probabilized}, {\bfseries\itshape British Journal for the Philosophy of Science}, vol.~54(4) (2004), pp.~747--753.


Display-type calculi for non-classical logics

ABSTRACT. The proposed talk gives a general outline of a line of research aimed at providing a wide array of logics (spanning from dynamic logics to monotone modal logic through substructural logics) with display-type proof systems which in particular enjoy a uniform strategy for cut elimination. We generalize display calculi in two directions: by explicitly allowing different types, and by dropping the full display property. The generalisation to a multi-type environment makes it possible to introduce specific tools enhancing expressivity, which have proved useful e.g. for a smooth proof-theoretic treatment of multi-modal and dynamic logics. The generalisation to a setting in which the full display property is not required makes it possible to account for logics, such as monotone modal logic, which have connectives that are neither adjoints nor residuals.

16:30-18:30 Session 42H: Contributed talks C1 (LC)
Location: MB, Zeichensaal 15
Complexity bounds for Multiagent Justification Logic

ABSTRACT. We investigate the complexity of systems of Multi-agent Justification Logic with interacting justifications (see [1]). The system we study has n agents, each based on some (single-agent) justification logic (we consider J, J4, JD, JD4, JT, LP), and a transitive, irreflexive binary relation, C. Each agent i has its own set of axioms, depending on the logic it is based on. If iCj, then we include the axiom t:i φ -> t:j φ (we do not include V-Verification as in [1]). Finally, the system has sufficiently many propositional axioms and an axiomatically appropriate constant specification, which is in P. Traditionally, to establish upper complexity bounds for satisfiability in Justification Logic, we use a set of tableau rules to generate a branch and then run the ∗-calculus on it. A similar system for (diamond-free) modal logic was studied in [2]. We adjust the tableau for the corresponding system in [2] appropriately; the ∗-calculus can be run locally for every prefix, so we can use the same methods as in [2] to establish upper bounds. On the other hand, we can see that if we replace [i] by x:i in a diamond-free modal formula (for all i), then the new formula is satisfiable iff the old one was. Thus, we can prove the same complexity bounds as in [2], with the exception that where satisfiability for a modal logic is in NP, satisfiability for the corresponding justification logic is in the second level of the polynomial hierarchy. [1] Antonis Achilleos, Complexity jumps in Multi-agent Justification Logic with interacting justifications, Under submission [2] ____ Modal logics with hard diamond-free fragments, Under submission

The relation between graph structures and the proof complexity of the corresponding Tseitin graph tautologies

ABSTRACT. We describe two sufficient properties of graphs $G_n$ on $n$ vertices such that the formulas based on them require exponentially many Resolution proof steps. The network-style graphs of Tseitin's formulas and the graphs of Urquhart are examples of graphs with the mentioned properties. If at least one of these properties fails for a graph, then the corresponding formula has a polynomially bounded resolution refutation.

The algorithmic complexity of decomposability in fragments of first-order logic

ABSTRACT. We consider the decomposability problem for finite theories, i.e. the problem of deciding whether a theory can be represented as a union of two (or several) theories sharing a given set of signature symbols. The algorithmic complexity of this problem has been studied in various calculi, ranging from first-order logic to classical propositional and Description Logics. The results suggested that the complexity of decomposability coincides with the complexity of entailment in the underlying logic. Although this observation was not too surprising, a general method for proving this claim was missing. We describe a method of proving that the complexity of deciding decomposability coincides with the complexity of entailment in fragments of first-order logic. We illustrate this method by showing the complexity of decomposability in signature fragments of first-order logic, i.e. those obtained by putting restrictions on the signature.

Type equations and second order logic
SPEAKER: Paolo Pistone

ABSTRACT. The aim of this talk is to propose a constructive understanding of second order logic: it is argued that a better grasp of the functional content of the comprehension rule comes from the consideration of inference rules independently of logical correctness; the situation is analogous to that of computation, whose proper functional description requires considering non-terminating (i.e. “wrong”) algorithms. The Curry-Howard correspondence indeed allows a shift from the question of provability (within a formal system) to that of typability for pure lambda terms, representing for instance recursive functions. By relying on well-known results on type inference, an equational description, independent of type systems, of the predicates required to build proofs of totality is presented: one no longer focuses on what one can prove by means of a certain package of rules, but rather on what the rules needed to prove a certain formula must be like, at the level of their functional description. This might look a bit weird at first glance: by applying this technique it is possible, in principle, to construct second order proofs of totality for all partial recursive functions! The assumption that every system of type equations defines a set of types is indeed equivalent to a naive comprehension axiom. The focus on typability conditions exposes a different point of view on the phenomenon of incompleteness: the lack of the relevant “diagonal” or “limit” proof is explained by the lack of the relevant “diagonal” or “limit” predicates. On the other hand, on the basis of a characterization of the solvability of type equations by means of recursive techniques, it is conjectured that such a “naïve” approach to second order proofs is “complete” in the following sense: all total recursive functions are provably total in some consistent subsystem of the whole (violently inconsistent) system of equational types.

An intuitive semantics for {Full Intuitionistic Linear Logic}
SPEAKER: unknown

ABSTRACT. This work describes an intuitive semantics, in the style of Girard's well-known cigarette vending machine, for Full Intuitionistic Linear Logic. Full Intuitionistic Linear Logic (FILL) was introduced by Hyland and de Paiva [1] as arising from its categorical semantics, while hinting at its independent interest as a framework for forms of parallelism in Functional Programming. The systems FILL and its intuitionistic counterpart FIL [2] show that the constructive character of logical systems is not given by syntactic size restrictions on the sequent calculus, but comes about by explaining connectives in terms of intensional constructions/operations/transformations on derivations of the system. This seems to us the central message of the Brouwer-Heyting-Kolmogorov (BHK) interpretation and also of the Curry-Howard isomorphism, which we take as guiding criteria for our mathematical logic investigations. This work also aims to explain to the mythical man on the street what Full Intuitionistic Linear Logic is about. We were pressed on the point that, elegance of categorical constructions and aesthetic criteria on proof systems notwithstanding, one should always be able to say what our logical operations mean in common words when describing a new logical system like FILL. Initially we had no intuitive explanation for the multiplicative disjunction 'par', which now seems more understandable in terms of interactions with a 'stock-keeping' system.

Argumentation Logic
SPEAKER: unknown

ABSTRACT. Born out of the need to formalize common sense in Artificial Intelligence (AI), Argumentation Logic (AL) brings together the syllogistic roots of logic with recent argumentation theory from AI to propose a new logic based on argumentation.

Argumentation Logic is purely proof-theoretic, defined via a criterion of acceptability of arguments. Arguments in AL are sets of propositional formulae, with the acceptability of an argument ensuring that the argument can defend against any other argument that is inconsistent with it, under a given propositional theory. AL can be linked to Natural Deduction, allowing us to reformulate Propositional Logic (PL) in terms of argumentation and to show that, under certain conditions, AL and PL are equivalent. AL separates proofs into direct and indirect ones, the latter being through the use of a restricted form of Reductio ad Absurdum (RAA) in which the (direct) derivation of the inconsistency must depend on the hypothesis posed when we apply the RAA rule.

As such, AL is able to isolate inconsistencies in the given theory and to behave agnostically towards them. This makes AL a conservative paraconsistent extension of PL that does not trivialize in the presence of inconsistency. The logic then captures in a single framework defeasible reasoning and its synthesis with the strict form of reasoning in classical logic. The interpretation of implication in AL is different from that of material implication, closer to that of default rules, but one with which proof by contradiction can be applied. AL has recently formed the basis for formalizing psychological theories of story comprehension.

16:30-18:30 Session 42I: Contributed talks D1 (LC)
Location: MB, Seminarraum 5
On the existential interpretability of structures
SPEAKER: unknown

ABSTRACT. We study the $\exists$--interpretability of constructive structures of finite predicate signatures. This definition is motivated by a kind of effective interpretability of abstract databases and leads to a good natural translation of $\exists$--queries.

The following definition is a restricted variant of the standard well--known definition of interpretability of structures:

\smallskip \noindent{\bf Definition. } {\it Let $\mathfrak{A}_0$ and $\mathfrak{A}_1$ be two structures of finite predicate signatures and let $\langle P_1,\ldots, P_k\rangle$ be the signature of $\mathfrak{A}_0$. We say that $\mathfrak{A}_0$ has an $\exists$--interpretation in $\mathfrak{A}_1$ if there exist \begin{itemize} \item $n\in\omega$ and a finite tuple of parameters $\bar p\in\mathfrak{A}_1$, \item an $\exists$--formula $U(\bar x,\bar y)$, $|\bar x|=n$, \item $\exists$--formulas $E^+(\bar x_0,\bar x_1,\bar y)$ and $E^-(\bar x_0,\bar x_1,\bar y)$ such that $|\bar x_0|=|\bar x_1|=n$, \item $\exists$--formulas $P^+(\bar x_1,\ldots,\bar x_m,\bar y)$ and $P^-(\bar x_1,\ldots,\bar x_m,\bar y)$, for each predicate symbol $P$ of the signature of $\mathfrak{A}_0$, where $m$ is the arity of $P$, with $|\bar x_1|=\cdots=|\bar x_m|=n$, \end{itemize} such that \begin{enumerate} \item The set $(U^{\mathfrak{A}_1}(\bar x))^2$ is a disjoint union of the sets $\{\langle\bar x_0,\bar x_1\rangle\mid \mathfrak{A}_1\models E^\varepsilon(\bar x_0,\bar x_1,\bar p)\}$, $\varepsilon\in\{+,-\}$. \item For any $m$--ary predicate symbol $P$ of the signature of $\mathfrak{A}_0$, the set $(U^{\mathfrak{A}_1}(\bar x))^m$ is a disjoint union of the sets $\{\langle\bar x_0,\ldots,\bar x_m\rangle\mid \mathfrak{A}_1\models P^\varepsilon(\bar x_0,\ldots,\bar x_m,\bar p)\}, \ \varepsilon\in\{+,-\}$. \item Let $\widehat{P_i}=\{\langle\bar x_1,\ldots,\bar x_m\rangle\mid\mathfrak{A}_1\models P^+(\bar x_1,\ldots,\bar x_m,\bar p)\}$, for $i=1,\ldots,k$. Then the relation $E=\{\langle \bar x_0,\bar x_1\rangle\mid\mathfrak{A}_1\models E^+(\bar x_0,\bar x_1,\bar p)\}$ is a congruence on $\mathfrak{B}=\langle U^{\mathfrak{A}_1}(\bar x),\widehat{P}_1,\ldots,\widehat{P}_k\rangle$ and the quotient algebra $\mathfrak{B}/\!E$ is isomorphic to $\mathfrak{A}_0$. \end{enumerate} }

\noindent{\bf Theorem. } {\it \begin{enumerate} \item The $\exists$--interpretability generates an upper semilattice ${\mathcal L}_\exists$ in which computable structures form a principal ideal ${\mathcal L}^0_\exists$; in particular, there exists a universal computable structure, i.e., a computable structure that $\exists$-interprets any computable structure. \item Any finite partial order is embeddable into ${\mathcal L}^0_\exists$. \end{enumerate} }

The isomorphism problem for computable projective planes

ABSTRACT. Computable presentations for projective planes are studied. We prove that the isomorphism problem is $\Sigma^1_1$-complete for the following classes of projective planes: pappian projective planes, desarguesian projective planes, arbitrary projective planes.

Coding graphs into fields

ABSTRACT. It is well established that the class of countable symmetric irreflexive graphs is complete in computable model theory: every countable structure in a finite language can be coded into a graph in such a way that the graph has the same spectrum, the same computable dimension, and the same categoricity spectrum as the original structure, and shares most other known computable-model-theoretic properties of the original structure as well. In 2002, Hirschfeldt, Khoussainov, Shore, and Slinko collected related results and proved more, showing that many other classes of countable structures are complete in the same sense. On the other hand, classes such as linear orders, Boolean algebras, trees, and abelian groups are all known not to be complete in this way. We address the most obvious class for which this question was still open, by giving a coding of graphs into countable fields in such a way as to preserve all of these properties.

Uniformization in the hereditarily finite list superstructure over the real exponential field

ABSTRACT. This work is concerned with generalized computability theory, as well as with properties of the real exponential field. To describe computability we use an approach via definability by Sigma-formulas in hereditarily finite superstructures, which was introduced in [1]. In particular, we establish the uniformization property for Sigma-predicates in the hereditarily finite list superstructure over the real exponential field (see [2] for the structure's definition). We shall outline the proof of the following theorem.

Theorem 1. For any Sigma-predicate P in the hereditarily finite list superstructure over the real exponential field there exists a Sigma-function f such that its domain is dom(f)={x : exists y P(x,y)} and its graph G(f) is a subset of P.

As a corollary we obtain the existence of a universal Sigma-function in the same structure.

1. Ershov, Yu. L. (1996). Definability and Computability, Consultants Bureau, New York-London-Moscow. 2. Goncharov, S. S. and Sviridenko, D. I. (1985). Sigma-programming (Russian), Vychisl. Sist. 107, pp. 3-29.

Dynamic logic on approximation spaces

ABSTRACT. We present recent results on a version of dynamic logic suitable to describe properties of approximation spaces, with the set of finite elements considered as a structure (typical example is the set of rational numbers within the set of real numbers). We consider the case when this structure generates on a whole approximation space an induced structure in a way definable in dynamic logic. We apply this general technique to approximation spaces from effective model theory and HF-computability.

On limitwise monotonic reducibility of Σ^0_2-sets.

ABSTRACT. One direction of research in modern computability theory focuses on studying properties of limitwise monotonic functions and limitwise monotonic sets. We prove the existence of incomparable Σ^0_2-sets relative to limitwise monotonic reducibility and show that there is no maximal element in the class of all lm-degrees of Σ^0_2-sets. Moreover, we prove that every countable partial order can be embedded into the class of all lm-degrees of Σ^0_2-sets.

16:30-18:30 Session 42J: Contributed talks E1 (LC)
Location: MB, Seminarraum 2
Relativized universal numberings
SPEAKER: unknown

ABSTRACT. We study the computable numberings, greatest under reducibility, of families of sets which are uniformly computably enumerable relative to an arbitrary oracle.

Ideals without minimal numberings in the Rogers semilattice

ABSTRACT. It is well known that there are many infinite families of c.e. sets whose Rogers semilattice contains an ideal without minimal elements, for instance, the family of all c.e. sets \cite{Ers}. Moreover, there exists a computable family of c.e. sets whose Rogers semilattice has no minimal elements at all \cite{Bad2}. In contrast to the case of families of c.e. sets, for every computable numbering $\alpha$ of an infinite family $\mathfrak F$ of computable functions, there is a Friedberg numbering of $\mathfrak F$ which is reducible to $\alpha$ \cite{Ers}. This means that the Rogers semilattice of any computable family of total functions from level 1 of the arithmetical hierarchy contains no ideal without minimal elements.

We study computable families of total functions of any level of the Kleene-Mostowski hierarchy above level 1 and try to find elementary properties of Rogers semilattices that are different from the properties of Rogers semilattices for the families of computable functions.

\begin{theorem}\label{th1} For every n, there exists a $\Sigma^0_{n+2}$-computable family of total functions whose Rogers semilattice contains an ideal without minimal elements. \end{theorem}

Note that every Rogers semilattice of a $\Sigma^0_{n+2}$-computable family $\mathfrak F$ contains the least element if $\mathfrak F$ is finite, \cite{Ers}, and infinitely many minimal elements, otherwise, \cite{BG}.

Theorem \ref{th1} is based on the following criterion that extends the criterion for minimal numbering from \cite{Bad2}.

\begin{theorem} Let $\alpha$ be a numbering of an arbitrary set $\mathcal S$. Then there is no minimal numbering of $\mathcal S$ that is reducible to $\alpha$ if and only if, for every c.e. set $W$, if $\alpha (W)=\mathcal S$ then there exists a c.e. set $V$ such that $\alpha (V)=\mathcal S$ and, for every positive equivalence $\varepsilon$, either $\varepsilon\upharpoonright W\nsubseteq \theta_{\alpha}$ or $W\nsubseteq [V]_{\varepsilon}$. \end{theorem}


\bibitem{Ers} {\scshape Yu. L. Ershov}, {\bfseries\itshape Theory of numberings}, Nauka, Moscow, 1977 (in Russian).

\bibitem{Bad2} {\scshape S. A. Badaev}, {\itshape On minimal enumerations}, {\bfseries\itshape Siberian Adv. Math.}, vol. 2 (1992), no. 1, pp. 1--30.

\bibitem{BG} {\scshape S. A. Badaev and S. S. Goncharov}, {\itshape Rogers semilattices of families of arithmetic sets}, {\bfseries\itshape Algebra and Logic}, vol. 40 (2001), no. 5, pp. 283--291.


Computable numberings in Analytical Hierarchy

ABSTRACT. We investigate minimal numberings in the analytical hierarchy. One of the most important minimal numberings is Friedberg's numbering. Owings showed, using metarecursion theory, that there is no Pi^1_1-computable Friedberg enumeration of all Pi^1_1-sets. We obtain this result within classical computability theory for higher levels of the analytical hierarchy.

Definability of 0' in the structure of $\omega$-enumeration degrees
SPEAKER: Andrey Sariev

ABSTRACT. In this paper we find a first order formula which defines the first jump of the least element in the structure of the $\omega$-enumeration degrees.

Embedding the omega-enumeration degrees into the Muchnik degrees generated by spectra of structures
SPEAKER: Stefan Vatev

ABSTRACT. For an infinite sequence of sets $\mathscr{R} = \{R_n\}_{n\in\omega}$ and a set $X$, we write $\mathscr{R}\leq_{c.e.} X$ if for every $n$, $R_n$ is computably enumerable in $X^{(n)}$, uniformly in $n$. Soskov \cite{omega-degrees} considered the following reducibility between sequences of sets \[\mathscr{R} \leq_\omega \mathscr{P}\mbox{ iff }(\forall X \subseteq \mathbb{N})[\mathscr{P} \leq_{c.e.} X\ \Rightarrow\ \mathscr{R}\leq_{c.e.} X].\] This reducibility naturally induces an equivalence relation, whose equivalence classes are called $\omega$-enumeration degrees. They form an upper semi-lattice, which has been extensively studied by a number of researchers at Sofia University over the past decade.

In this talk we discuss how to encode an infinite sequence of sets $\mathscr{R}$ into a single countable structure $\mathcal{N}_{\mathscr{R}}$, preferably in a finite language, such that the Turing degree spectrum of $\mathcal{N}_{\mathscr{R}}$ is the set \[Sp(\mathcal{N}_{\mathscr{R}}) = \{d_T(X) \mid \mathscr{R} \mbox{ is c.e. in }X\}.\] We present two such methods. The first one was studied by Soskov \cite{soskov} and is based on the so-called Marker's extensions \cite{marker}. The other approach is based on the idea of coding each set $R_n$ by a sequence of pairs of computable structures \cite{pairs-rec-str}. We conclude that for any two infinite sequences of sets $\mathscr{R}$ and $\mathscr{P}$ we can build countable structures $\mathcal{N}_{\mathscr{R}}$ and $\mathcal{N}_{\mathscr{P}}$ such that \[\mathscr{R} \leq_\omega \mathscr{P}\ \iff\ Sp(\mathcal{N}_{\mathscr{P}}) \subseteq Sp(\mathcal{N}_{\mathscr{R}}).\] In other words, the $\omega$-enumeration degrees are embeddable into the Muchnik degrees generated by spectra of structures.

Limitwise monotonic sets of reals
SPEAKER: unknown

ABSTRACT. We extend the limitwise monotonicity notion to the case of an arbitrary computable linear ordering to get a set which is limitwise monotonic precisely in the non-computable degrees. We also obtain a series of connected non-uniformity results, which yield new examples of non-uniformly equivalent families of computable sets with the same enumeration degree spectrum.

16:30-18:30 Session 42K: Contributed talks F1 (LC)
Location: MB, Seminarraum 212/232
An Intuitionistic Interpretation of Classical Implication
SPEAKER: unknown

ABSTRACT. A connection between classical and Heyting's logic is given by Glivenko's Theorem: for every propositional formula $A$, $A$ is classically provable iff $\neg\neg A$ is provable intuitionistically. This theorem can be understood as one possible intuitionistic interpretation of classical reasoning. The embedding of the implicative fragment of classical logic into the implicative fragment of Heyting's logic was considered by J. P. Seldin [3] and by L. C. Pereira, E. H. Haeusler, V. G. Costa and W. Sanz [2]. Seldin's interpretation essentially depends on the presence of conjunction, but the second one is obtained in the pure language of implication. Here we define, in the spirit of Kolmogorov's interpretation, a mapping of the pure implicational propositional language that enables us to prove the corresponding result. Let $p_1,\dots,p_n$ be a list of all propositional letters occurring in a formula $A\to B$ and $q$ any propositional letter not occurring in $A\to B$. Then the image $b(A\to B)$ of $A\to B$ is defined inductively as follows: $b(p)=(p\to q)\to q$, for each $p\in\{p_1,\dots,p_n\}$, and $b(A\to B)=b(A)\to b(B)$. That is, $b(A\to B)$ is obtained by replacing each occurrence of a propositional letter $p$ in $A\to B$ by $(p\to q)\to q$, where $q$ is a new letter.

{\bf Embedding Lemma.} \emph{For every propositional implicational formula $A$, $A$ is provable in classical logic iff $b(A)$ is provable in Heyting logic.}
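As a worked illustration of the Embedding Lemma (ours, not taken from the abstract), consider Peirce's law $((p\to r)\to p)\to p$, which is classically but not intuitionistically provable. Writing $P=(p\to q)\to q$ and $R=(r\to q)\to q$, its image under $b$ is $((P\to R)\to P)\to P$, and this is intuitionistically provable, as the following lambda term (proof under the Curry-Howard reading) witnesses:

```latex
% Goal: ((P \to R) \to P) \to P, where P = (p \to q) \to q
% and R = (r \to q) \to q.
%   assume G : (P \to R) \to P; we must produce P, i.e. a map (p \to q) \to q;
%   assume f : p \to q;
%   then P \to R holds: given P' : P, ignore the antecedent r \to q of R
%   and return P' f : q;
%   hence G (\lambda P'.\ \lambda g.\ P'\,f) : P, and applying it to f gives q.
\lambda G.\ \lambda f.\ G\,(\lambda P'.\ \lambda g.\ P'\,f)\,f
  \;:\; ((P\to R)\to P)\to P
```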

This is a part of our paper [1] dealing with an alternative approach to normalization of the implicative fragment of classical logic.


\bibitem{cite1} {\scshape B. Bori\v ci\'c, M. Ili\'c}, {\itshape An Alternative Normalization of the Implicative Fragment of Classical Logic}, \rm (to appear).

\bibitem{cite2} {\scshape L. C. Pereira, E. H. Haeusler, V. G. Costa, W. Sanz}, {\itshape A new normalization strategy for the implicational fragment of classical propositional logic}, {\bfseries\itshape Studia Logica}, vol. 96 (2010), no. 1, pp. 95--108.

\bibitem{cite3} {\scshape J. P. Seldin}, {\itshape Normalization and excluded middle I}, {\bfseries\itshape Studia Logica}, vol. 48 (1989), no. 2, pp. 193--217.


Predicate Glivenko theorems and substructural aspects of negative translations
SPEAKER: unknown

ABSTRACT. In [1], the second author developed a proof-theoretic approach to Glivenko theorems for substructural propositional logics. In the present talk, using the same techniques, we extend them to substructural predicate logics. It will be pointed out that in this extension, the following double negation shift scheme (DNS) plays an essential role: (DNS): $\forall x\, \neg \neg \varphi(x) \rightarrow \neg \neg \forall x\, \varphi(x)$. Among others, the following is shown, where QFLe and QFLe$^{\dagger}$ are predicate extensions of FLe and FLe$^{\dagger}$, respectively (see [1]). The Glivenko theorem holds for QFLe$^{\dagger}$+(DNS) relative to classical predicate logic. Moreover, this logic is the weakest among predicate logics over QFLe for which the Glivenko theorem holds relative to classical predicate logic. We then study negative translations of substructural predicate logics using the same approach. We introduce a negative translation, called the extended Kuroda translation, and show that over QFLe the standard negative translations, such as the Kolmogorov and G\"odel-Gentzen translations, are equivalent to our extended Kuroda translation. Thus we give a clearer, unified understanding of these negative translations from the substructural point of view.
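For orientation (this reference sketch is our addition; the authors' extended Kuroda translation may differ in detail), the standard Kuroda translation over intuitionistic predicate logic is:

```latex
% Standard Kuroda translation: A^{Ku} = \neg\neg A^{*}, where A^{*}
% leaves the formula unchanged except for inserting \neg\neg after
% each universal quantifier:
(P)^{*} = P \quad \text{for atomic } P, \qquad
(A \circ B)^{*} = A^{*} \circ B^{*} \quad (\circ \in \{\wedge,\vee,\rightarrow\}),
(\exists x\, A)^{*} = \exists x\, A^{*}, \qquad
(\forall x\, A)^{*} = \forall x\, \neg\neg A^{*}.
% DNS is precisely the scheme that lets \neg\neg be pushed past \forall,
% which is why it arises in comparing this translation with the
% Kolmogorov and Goedel--Gentzen translations.
```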

Adding a Conditional to Kripke's Theory of Truth
SPEAKER: Lorenzo Rossi

ABSTRACT. I address the lack of expressiveness of Kripke's theory of truth (over arithmetic) along the lines suggested by H. Field, namely by equipping it with a primitive, non-trivial conditional. However, I focus my attention on computationally simple methods to interpret a primitive conditional. I provide an inductive construction, define a corresponding monotone operator that improves on Kripke's construction, and establish some of its desirable features.

Constructive completeness and Joyal's theorem

ABSTRACT. We present a unifying, categorical approach to several constructive completeness theorems for intuitionistic (and classical) first-order theories, as well as for theories in certain fragments of first-order logic, based on a theorem by A. Joyal [Thm 6.3.5, Makkai and Reyes, 1977]. We show that the notion of exploding (Tarski-) model introduced by W. Veldman [Veldman 1976] is adequate for certain fragments of first-order logic (as well as for classical first-order logic) and that Veldman's modified Kripke semantics arises, as a consequence, as semantics in a suitable sheaf topos. In the process we give an alternative proof of Veldman's completeness theorem and note the equivalence of this theorem with the Fan Theorem. Finally, we show that the disjunction-free fragment is constructively complete with respect to modified Kripke semantics without appeal to the Fan Theorem, as well as without appeal to decidable axiomatizability and size restrictions on the language. This is joint work with Christian Espindola.

Algorithmic-algebraic canonicity for mu-calculi
SPEAKER: Andrew Craig

ABSTRACT. The correspondence and completeness of logics with fixed point operators has been the subject of recent research (see [BezHod:TCS], [muALBA]). These works aim to develop a Sahlqvist-like theory for their respective fixed point settings; that is, they identify classes of formulas which are preserved under canonical extensions and have first-order frame correspondents.

We prove that the members of a certain class of intuitionistic mu-formulas are canonical, in the sense of [BezHod:TCS]. When projected onto the classical case, our class of canonical mu-formulas subsumes the class described in [BezHod:TCS]. Our methods use a variation of the algorithm ALBA (Ackermann Lemma Based Algorithm) developed in [DistMLALBA]. We show that all mu-inequalities that can be successfully processed by our algorithm, $\mu^*$-ALBA, are canonical.

Formulas are interpreted on a bounded distributive lattice $A$ with additional operations. The canonical extension of $A$, denoted $A^\delta$, is a complete lattice in which the completely join-irreducible elements, $J^\infty(A^\delta)$, are join-dense, and the completely meet-irreducible elements, $M^\infty(A^\delta)$, are meet-dense. An admissible valuation takes all propositional variables to elements of $A$. The algorithm aims to ``purify'' an inequality $\alpha \leq \beta$ by rewriting it as a set of pure (quasi-)inequalities. A pure quasi-inequality has no occurrences of propositional variables; only special variables whose interpretations range over $J^\infty(A^\delta) \cup M^\infty(A^\delta)$ are present. The fact that admissible and ordinary validity coincide for pure inequalities is the linchpin of the canonicity proof.

The proof of the soundness of the rules of the algorithm $\mu^*$-ALBA rests on the order-topological properties of formulas (term functions) of the $\mu$-calculus.

Proof-Theoretic Analysis of Brouwer's Argument of the Bar Induction

SPEAKER: Ryota Akiyoshi

ABSTRACT. In a series of papers, Brouwer developed intuitionistic analysis, in particular the theory of choice sequences. An important theorem, the ``fan theorem'', plays an essential role in this development. The fan theorem was derived from a stronger principle called the ``bar induction'', an induction principle on well-founded trees. We refer to [4,5] as standard references for Brouwer's intuitionistic analysis.

Brouwer's argument in [1] contains a controversial assumption on canonical proofs of certain formulas. In many cases, constructive mathematicians have assumed the bar induction as an axiom, so the assumption has gone largely unexamined.

In this talk, we sketch an analysis of Brouwer's argument via infinitary proof theory. We point out a close similarity between Brouwer's argument and Buchholz' method of the $\Omega$-rule ([2,3]). In particular, Brouwer's argument in [1] seems very close to Buchholz' embedding theorem for the (transfinite) induction axiom of $ID_{1}$, a theory of non-iterated inductive definitions, in [2]. By comparing these two arguments, we give a natural explanation of why Brouwer needed the assumption. Our conclusion is that Brouwer adopted the assumption in order to avoid an impredicativity, or vicious circle, which is essentially the same as the one in the $\Omega$-rule for $ID_{1}$. In other words, the impredicativity can be explained very clearly from the viewpoint of the $\Omega$-rule. Moreover, Brouwer's argument can be formulated in a mathematically precise way by the $\Omega$-rule. Therefore, we conclude that his introduction of the assumption is mathematically well motivated. If time permits, we suggest how to carry out this idea in a mathematical way.


[1] Luitzen Egbertus Jan Brouwer, Über Definitionsbereiche von Funktionen, Mathematische Annalen, vol. 97 (1927), pp. 60–75.

[2] Wilfried Buchholz, The $\Omega_{\mu+1}$-rule, Iterated Inductive Definitions and Subsystems of Analysis: Recent Proof-Theoretical Studies, Lecture Notes in Mathematics 897, Springer, 1981, pp. 188–233.

[3] Wilfried Buchholz, Explaining the Gentzen–Takeuti reduction steps, Archive for Mathematical Logic, vol. 40, pp. 255–272.

[4] Michael A. E. Dummett, Elements of Intuitionism, 2nd edition, Oxford University Press, 2000.

[5] A. S. Troelstra and D. van Dalen, Constructivism in Mathematics, vol. 1, North-Holland, 1988.





16:30-18:30 Session 42L: Contributed talks G1 (LC)
Location: MB, Seminarraum 3
On Explicit-Implicit Reflection Principles.
SPEAKER: Elena Nogina

ABSTRACT. Reflection principles based on provability predicates were introduced in the 1930s by Rosser and Turing, and were later explored by Feferman, Kreisel and Lévy, Schmerl, Artemov, Beklemishev, and others. We study reflection principles of Peano Arithmetic based on both proof and provability predicates. We give a classification of these principles and establish their linear ordering with respect to their deductive strength in Peano Arithmetic. The proof relies essentially on the Gödel–Löb–Artemov logic GLA.

Ordinal Notations and Fundamental Sequences in the Caucal Hierarchy

ABSTRACT. The Caucal hierarchy of infinite graphs with colored edges is a wide class of graphs with decidable monadic theories [1]. Graphs from this hierarchy can be considered as structures with a finite number of binary relations. It is known that the exact upper bound for the order types of the well-orderings that lie in this class is ε0 [2]. In fact, any well-ordering from the Caucal hierarchy can be used as a constructive ordinal notation system. We investigate systems of fundamental sequences for these well-orderings and the corresponding fast-growing hierarchies of computable functions. For a well-ordering (A,<) we can specify a system of fundamental sequences λ[n] by a relation Cs(x,y) such that Cs(α, β) holds iff α is a limit point of (A,<) and β = α[n] for some n. Our principal result is that for a well-ordering with a pair of Schmidt-coherent systems of fundamental sequences (A,<,Cs1,Cs2) from the Caucal hierarchy, the corresponding fast-growing hierarchies f^1_α(x) and f^2_α(x) are equivalent in the following sense: for all α < β we have f^1_β(n) > f^2_α(n) and f^2_β(n) > f^1_α(n) for all large enough n (Schmidt-coherence is a classical condition that implies that the functions of the fast-growing hierarchy are strictly increasing [3]). We show that any two well-orderings from the Caucal hierarchy of the same order type < ω^ω, equipped with Schmidt-coherent systems of fundamental sequences, give rise to equivalent fast-growing hierarchies. We also prove that a graph with a well-ordering from the Caucal hierarchy can be extended by a Schmidt-coherent system of fundamental sequences for that well-ordering in such a way that the resulting graph still lies in the Caucal hierarchy.

[1] Didier Caucal, On Infinite Terms Having a Decidable Monadic Theory, Mathematical Foundations of Computer Science 2002 (Krzysztof Diks and Wojciech Rytter, editors), vol. 2420, Springer Berlin Heidelberg, 2002, pp. 165–176.
[2] Laurent Braud, Arnaud Carayol, Linear Orders in the Pushdown Hierarchy, Lecture Notes in Computer Science (Samson Abramsky et al., editors), vol. 6199, Springer Berlin Heidelberg, 2010, pp. 88–99.
[3] Diana Schmidt, Built-up Systems of Fundamental Sequences and Hierarchies of Number-Theoretic Functions, Archive for Mathematical Logic, vol. 18 (1976), pp. 47–53.
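For background, the fast-growing hierarchies referred to in the abstract above are standardly defined from a system of fundamental sequences as follows; this is the usual textbook definition (as in Schmidt's setting), given here only as orientation, not a restatement of the authors' construction:

```latex
% Fast-growing hierarchy over a notation system with fundamental
% sequences \lambda \mapsto \lambda[n] for limit points \lambda:
\begin{align*}
  f_0(n)          &= n + 1,\\
  f_{\alpha+1}(n) &= f_\alpha^{(n)}(n) \quad\text{(the $n$-fold iterate of $f_\alpha$)},\\
  f_\lambda(n)    &= f_{\lambda[n]}(n) \quad\text{for $\lambda$ a limit.}
\end{align*}
```

Different choices of fundamental sequences λ[n] for the same order type can a priori yield different hierarchies; Schmidt-coherence is the condition under which the abstract's equivalence result applies.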

The maximal order type of the trees with the gap-embeddability relation

ABSTRACT. In 1985, Harvey Friedman introduced a new kind of embeddability relation between finite labeled rooted trees, namely the gap-embeddability relation. Under this relation, the set of finite rooted trees with labels bounded by a fixed natural number n is a well-partial-ordering. The well-partial-orderedness of these trees (with a universal quantifier over n in front) gives rise to a statement not provable in Π^1_1-CA_0. Some open questions about these famous well-partial-orderings remain. For example, what is the maximal order type of these sets of trees with the gap-embeddability relation? The maximal order type of a well-partial-ordering is an important characteristic that captures, in some sense, its strength. In this talk, I will discuss some recent developments concerning this topic.

Computational reverse mathematics and foundational analysis

ABSTRACT. Reverse mathematics studies which natural subsystems of second order arithmetic are equivalent to key theorems of ordinary or non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of various weak foundations for mathematics in a formally precise manner. Richard Shore [1, 2] proposes an alternative framework in which to conduct reverse mathematics, called computational reverse mathematics. The formal content of his proposal amounts to restricting our attention to omega-models of RCA0 when we prove implications and equivalences in reverse mathematics.

Despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for all of the relevant foundational theories, particularly a partial realisation of Hilbert’s programme due to Simpson [3].

Secondly, computable entailment is a Pi^1_1-complete relation, and hence employing it commits one to theoretical resources which outstrip those acceptable to the stronger foundational programmes such as predicativism and predicative reductionism. This argument can be formalised within second order arithmetic, making it accessible to partisans of foundational frameworks such as predicativism. In doing so we show that the existence of the set of sentences which are computably entailed is equivalent over ACA0 to Pi^1_1 comprehension.

[1] Shore, R. A., Reverse Mathematics: The Playground of Logic, The Bulletin of Symbolic Logic, vol. 16 (2010), no. 3, pp. 378–402.
[2] Shore, R. A., Reverse mathematics, countable and uncountable: a computational approach, Effective Mathematics of the Uncountable (D. Hirschfeldt, N. Greenberg, J. D. Hamkins, and R. Miller, editors), Lecture Notes in Logic, ASL and Cambridge University Press, Cambridge, 2013, pp. 150–163.
[3] Simpson, S. G., Partial realizations of Hilbert's program, The Journal of Symbolic Logic, vol. 53 (1988), pp. 349–363.

Semantic completeness of first order theories in constructive reverse mathematics

ABSTRACT. We introduce a general notion of semantic structure for first-order theories, covering a variety of constructions such as Tarski and Kripke semantics, and prove that, over Zermelo-Fraenkel set theory (ZF), the completeness of such semantics is equivalent to the Boolean Prime Ideal theorem (BPI). In particular, we deduce that the completeness of this type of semantics for non-classical theories is unprovable in intuitionistic Zermelo-Fraenkel set theory IZF. Using results of Joyal and McCarty, we also conclude that the completeness of Kripke semantics is equivalent, over IZF, to the Law of Excluded Middle plus BPI. By results of Banaschewski and Bhutani, neither of these two principles implies the other, so this gives the exact strength of the Kripke completeness theorem in the sense of constructive reverse mathematics.

Reverse Mathematics, more explicitly
SPEAKER: Sam Sanders


ABSTRACT. The program of Reverse Mathematics can be viewed as a classification of theorems of ordinary mathematics from the point of view of computability. Working in Kohlenbach's higher-order Reverse Mathematics, we study an alternative classification of theorems of ordinary mathematics, based on the central tenet of Feferman's Explicit Mathematics that a proof of the existence of an object can be converted into a procedure to compute that object. Nonstandard Analysis is used in an essential way.

Our preliminary classification gives rise to the Explicit Mathematics theme (EMT). Intuitively speaking, the EMT states that a standard object with certain properties can be computed by a functional if and only if this object merely exists classically with the same nonstandard properties. Besides theorems of classical mathematics, we also consider intuitionistic objects, such as the fan functional.



16:30-18:30 Session 42M: Contributed talks H1 (LC)
Location: MB, Seminarraum 4
A computability approach to three hierarchies

ABSTRACT. We analyze the computable part of three hierarchies from analysis and set theory: those induced by the Cantor-Bendixson rank, the differentiability rank of Kechris and Woodin, and the Denjoy rank. Our goal is to identify the descriptive complexity of the initial segments of these hierarchies. For example, we show that for each recursive ordinal α > 0, the set of Turing indices of computable C[0,1] functions that are differentiable with rank at most α is Π_{2α+1}-complete. Similar results hold for the other hierarchies. Underlying all of these results is a combinatorial theorem about trees. We will present the theorem and its connection to the results.

A generalization of Solovay's Sigma-construction with application to intermediate models

ABSTRACT. A Sigma-construction of Solovay is extended to the case of intermediate sets which are not necessarily subsets of the ground model, with a more transparent description of the resulting forcing notion than in a classical paper of Grigorieff. As an application, we prove that, for a given name t (not necessarily a name of a subset of the ground model), the set of all sets of the form t[G] (the G-interpretation of t), G being generic over the ground model, is Borel. This result was first established by Zapletal by a descriptive set theoretic argument.

A new lower bound for the length of the hierarchy of norms

ABSTRACT. A norm is a surjective function from the Baire space R onto an ordinal. Given two norms φ, ψ we write φ ≤_N ψ if φ continuously reduces to ψ. Then ≤_N is a preordering and so passing to the set of corresponding equivalence classes yields a partial order, the hierarchy of norms.

Assuming the axiom of determinacy (AD), the hierarchy of norms is a wellorder. Its length Σ was investigated by Löwe in [1]; he determined that Σ ≥ Θ^2 (where Θ := sup{α | there is a surjection from R onto α}). In his talk "Multiplication in the hierarchy of norms", given at the ASL 2011 North American Meeting in Berkeley, Löwe presented a binary operation × on the hierarchy of norms such that for well-chosen norms φ, ψ the ordinal rank of φ×ψ in the hierarchy of norms is at least the product of the ordinal ranks of φ and ψ; this implies that Σ is closed under ordinal multiplication and hence Σ ≥ Θ^ω.

In this talk I will note that in fact, for well-chosen norms φ, ψ, the ordinal rank of φ×ψ is exactly the product of the ranks of φ and ψ with an intermediate factor of ω_1. Furthermore, using a stratification of the hierarchy of norms into initial segments closed under the ×-operation, I will show that Σ ≥ Θ^(Θ^Θ).

[1] BENEDIKT LÖWE, The length of the full hierarchy of norms, Rendiconti del Seminario Matematico dell'Università e del Politecnico di Torino, vol. 63 (2005), no. 2, pp. 161-168.

Unraveling $\Sigma^0_\alpha(\Pi^1_1)$-Determinacy

ABSTRACT. In parallel with the Borel hierarchy, one can define the levels $\Sigma^0_{\alpha}(\Pi^1_1)$ ($\alpha <\omega_1$) of the Borel-on-coanalytic hierarchy by starting with $\Pi^1_1$ in place of the class $\Delta^0_1$ of clopen sets. In this talk, we consider the consistency strength of determinacy for infinite perfect-information games with payoff in $\Sigma^0_\alpha(\Pi^1_1)$. This has been computed exactly for $\alpha = 0, 1$, by Martin, Harrington, and J. Simms. For $\alpha > 1$, dual results of Steel~\cite{Steel} and Neeman~\cite{Neeman} have shown the strength to reside within a very narrow range in the region of a measurable cardinal $\kappa$ of largest possible Mitchell order $o(\kappa)$. However, an exact equiconsistency had yet to be isolated.

We have recently completed work pinpointing the determinacy strength of levels of the Borel hierarchy of the form $\Sigma^0_{1+\alpha+3}$, showing a level-by-level correspondence between these and a family of natural $\Pi_1$ reflection principles. Combining our techniques with those of \cite{Neeman} and \cite{Steel}, we can characterize the strength of $\Sigma^0_{1+\alpha+3}(\Pi^1_1)$-DET in terms of inner models with measurable cardinals. In particular, $\Sigma^0_4(\Pi^1_1)$-DET is equivalent to the existence of a mouse satisfying $(\exists \kappa) o(\kappa)=\kappa^{++}$ plus the schema that each true $\Pi_1$ statement with parameters in $P^2(\kappa)$ reflects to an admissible set containing $P(\kappa)$.

We will also discuss progress on calculating the strength of $\Sigma^0_2(\Pi^1_1)$-DET, relating this to Mitchell's hierarchy of weak repeat point measures.

Determinacy of Refinements to the Difference Hierarchy of Co-analytic sets

ABSTRACT. It is a well-known result of Martin that the existence of a measurable cardinal suffices to prove the determinacy of all boldface Pi^1_1 sets. The argument modifies nicely to obtain the determinacy of all lightface Pi^1_1 sets from the existence of 0^sharp. With this argument in mind, I will discuss how the technique has since been pushed to obtain more determinacy in the difference hierarchy of Pi^1_1 sets, including a family of new determinacy results following from sharp-like hypotheses. To this end I will also present a generalised notion of computability suitable for defining the lightface Borel hierarchy in uncountable spaces.

On proof complexities of strongly equal modal tautologies.

ABSTRACT. By analogy with the notion of determinative conjuncts in classical propositional logic, we introduce the same notion for modal tautologies. On the basis of these modal determinative conjuncts, we define the notion of strong equality for modal tautologies and compare different measures of proof complexity (size, steps, space, and width) for strongly equal tautologies in some proof systems of modal propositional logic. We prove that 1) in some proof systems, strongly equal modal tautologies have the same proof complexities, and 2) there are proof systems in which some proof complexity measures for strongly equal modal tautologies coincide, while the other measures differ only by the sizes of the tautologies.

16:30-18:30 Session 42N: Contributed talks I1 (LC)
Location: MB, Seminarraum 6
Reducts of simple (non-collapsed) Fraïssé-Hrushovski constructions

ABSTRACT. Fraïssé-Hrushovski constructions were first introduced by Hrushovski as a method for constructing strongly minimal sets that do not fit within Zilber's trichotomy conjecture. The construction can be seen as a two-step process: first a rank $\omega$ structure is constructed from a countable amalgamation class, using a variation of the Fraïssé limit construction, and then the structure is ``collapsed'' to a strongly minimal substructure.

In this talk we acquaint ourselves with the rank $\omega$, non-collapsed version of the construction and its associated combinatorial geometry, and provide a general method for showing that one simple Fraïssé-Hrushovski construction is a (proper) reduct of another.

On Jonsson sets and some of their properties.

ABSTRACT. Our aim is to obtain new information on the model-theoretic properties of Jonsson theories and the structure of their classes of models. To date, the best studied are perfect Jonsson theories and their classes of existentially closed models. In this talk, we consider a new approach to the study of Jonsson theories. We work with special subsets of the semantic model of a fixed Jonsson theory, or of some Jonsson fragment of an arbitrary complete theory. We introduce the notion of a Jonsson set as the corresponding closure of some existential formula in the semantic model. Using Jonsson sets, we consider properties such as stability, categoricity, and similarity of the models and theories related to these sets.

Ultraproduct construction of representative utility functions with infinite-dimensional domain

ABSTRACT. This paper applies model theory to macroeconomic theory. In mathematical models of macroeconomic theory, the hypothesis of a "representative agent" is ubiquitous, but the search for a rigorous justification has so far been unsuccessful and had largely been abandoned until very recently. Herzberg (2010) constructed a representative utility function for finite-dimensional social decision problems, based on a bounded ultrapower construction over the reals with respect to the ultrafilter induced by the underlying social choice function (via the Kirman-Sondermann (1972) correspondence). However, since the decision problems of macroeconomic theory are typically infinite-dimensional, Herzberg's original result is insufficient for many applications. We therefore generalise his result by allowing the social alternatives to belong to a general reflexive Banach space W; in addition to known results from convex analysis, our proof uses a nonstandard enlargement of the superstructure over the union of W with the reals, obtained by a bounded ultrapower construction with respect to the Kirman-Sondermann ultrafilter.

An axiomatic approach to modelling of orders of magnitude
SPEAKER: Bruno Dinis

ABSTRACT. Many arguments deal informally with orders of magnitude of numbers. If one tries to maintain the intrinsic vagueness of orders of magnitude (they should be bounded, yet stable under at least some additions), they cannot be formalized with ordinary real numbers, due to the Archimedean property and Dedekind completeness. There is the functional approach through O's and o's and, more generally, van der Corput's neutrices [1], but both have some operational shortcomings.

Nonstandard Analysis provides a natural example of an order of magnitude: the (external) set of infinitesimals is bounded and closed under addition [5, 6]. Adopting the terminology of van der Corput, we call a neutrix an additive convex subgroup of the nonstandard reals. An external number is the set-theoretic sum of a nonstandard real and a neutrix. The external numbers capture the imprecise boundaries of informal orders of magnitude and permit algebraic operations which go beyond the calculus of the O's and o's [2]. This external calculus is based more on semigroup than on group operations, but turns out to be fairly operational in concrete cases and allows for a total order with a generalized form of Dedekind completion [3].

Based on joint work with Imme van den Berg, we discuss an axiomatics for the calculus of neutrices and external numbers, trying to do justice to the vagueness of orders of magnitude. In particular we consider foundational problems which appear because some axioms are necessarily of second order and because the external calculus exceeds existing foundations for external sets [4].

[1] J. G. van der Corput, Neutrix calculus, neutrices and distributions, MRC Technical Summary Report, University of Wisconsin, 1960.

[2] B. Dinis, I. P. van den Berg, Algebraic properties of external numbers, Journal of Logic and Analysis, vol. 3:9 (2011), pp. 1–30.

[3] B. Dinis, I. P. van den Berg, On structures with two semigroup operations (to appear).

[4] V. Kanovei, M. Reeken, Nonstandard Analysis, Axiomatically, Springer Monographs in Mathematics, 2004.

[5] F. Koudjeti, Elements of External Calculus with an Application to Mathematical Finance, PhD thesis, Labyrinth Publications, Capelle a/d IJssel, The Netherlands, 1995.

[6] F. Koudjeti, I. P. van den Berg, Neutrices, external numbers and external calculus, Nonstandard Analysis in Practice (F. Diener and M. Diener, editors), Springer Universitext, 1995, pp. 145–170.



Elementary numerosities and measures.

ABSTRACT. Generalizing the notion of numerosity introduced in [1], we say that a function n defined on the powerset of a given set Ω is an elementary numerosity if it satisfies the following properties: 1. the range of n is the non-negative part of a non-archimedean field F that extends R; 2. n({x}) = 1 for every x ∈ Ω; 3. n(A ∪ B) = n(A) + n(B) whenever A and B are disjoint. We have shown that every non-atomic finitely additive or σ-additive measure is obtained from an elementary numerosity by taking its ratio to a unit. The proof of this theorem relies on showing that, given a non-atomic finitely additive or σ-additive measure over Ω, there exists a suitable ultrafilter on Ω such that the elementary numerosity of a set can be defined as the equivalence class of a particular real Ω-sequence. A proof can also be obtained from Theorem 1 of [2] by a saturation argument. This result has allowed us to improve the nonstandard models of probability first studied in [3], which overcome some limitations of conditional probability; further research aims at models that avoid the Borel-Kolmogorov paradox. For these reasons, we believe that this topic could be of particular interest not only to mathematicians but also to philosophers.

[1] V. Benci, M. Di Nasso, Numerosities of labelled sets: a new way of counting, Advances in Mathematics, vol. 173 (2003), pp. 50–67.
[2] C. W. Henson, On the nonstandard representation of measures, Transactions of the American Mathematical Society, vol. 172 (1972), pp. 437–446.
[3] V. Benci, L. Horsten, S. Wenmackers, Non-Archimedean probability, Milan Journal of Mathematics (to appear), arXiv:1106.1524.
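A minimal sketch of the passage from an elementary numerosity to a measure described in the abstract above, under the stated properties 1–3; the unit β is any fixed positive value of n, and st denotes the standard part map (the precise choices are assumptions of this sketch, not the authors' exact formulation):

```latex
% Given an elementary numerosity n : P(\Omega) \to F^{\geq 0} and a unit
% \beta \in F^{>0}, one obtains a finitely additive measure \mu by taking
% the ratio to \beta and then the standard part (with \mu(A) = +\infty
% when n(A)/\beta exceeds every real). Additivity follows from property 3:
\mu(A) \;=\; \operatorname{st}\!\Bigl(\frac{n(A)}{\beta}\Bigr),
\qquad
\mu(A \cup B) \;=\; \mu(A) + \mu(B) \quad \text{whenever } A \cap B = \emptyset.
```

Since F is non-archimedean, different choices of the unit β rescale which sets receive finite, zero, or infinite measure; the abstract's theorem says every non-atomic finitely additive or σ-additive measure arises this way for a suitable n and β.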

16:30-18:30 Session 42P: Contributed talks J1 (LC)
Location: MB, Seminarraum 7
Differential Galois theory in the class of formally real fields.

ABSTRACT. Inside the class of formally real fields, we study strongly normal extensions as defined in chap. VI of [Kol]. Fix a strongly normal extension L/K of formally real differential fields such that the subfield C_K of constant elements of K is real closed.

Let U be a saturated model of the theory of closed ordered differential fields containing L (see [Sin]); U is real closed and, for i^2 = -1, U(i) is a model of DCF_0.

We denote by gal(L/K) the group of differential K-automorphisms of L, and set Gal(L/K) := gal(<L, C_U> / <K, C_U>).

Theorem 1. The group Gal(L/K), respectively gal(L/K), is isomorphic to a definable group G in the real closed field C_U, respectively C_K.

Under the hypothesis that K is relatively algebraically closed in L, we prove that given any u\in L\setminus K, there exists \sigma\in Gal(L/K) such that \sigma(u)\neq u.

Let K\subseteq E\subseteq L be an intermediate differential field extension. As the elements of Gal(E/K) are not required to respect the order induced on <E,C_U> by that of U, they need not extend to elements of Gal(L/K). Therefore, we do not get a one-to-one Galois correspondence as in the classical case where C_K is algebraically closed (see [Pil]).

Let Aut(L/K) denote the subgroup of elements of Gal(L/K) that are increasing, let \eta:G\rightarrow Gal(L/K) denote a group isomorphism given by Theorem 1 and < L,C_U>^r be the real closure of <L,C_U> in U.

Proposition 2. Let G_0 be a definable subgroup of G. There is a finite tuple \bar d\in <L,C_U>^r such that \eta(G_0)\cap Aut(L/K) is isomorphic (as a group) to Aut(L(\bar d)/K(\bar d)).



[Kol] E. R. Kolchin, Differential algebra and algebraic groups, Pure and Applied Mathematics, Vol. 54. Academic Press, New York-London, 1973. xviii+446 pp.

[Pil] A. Pillay, Differential Galois Theory I, Illinois Journal of Mathematics, vol. 42 (1998), no. 4, pp. 678--699.

[Sin] M. Singer, The model theory of ordered differential fields, Journal of Symbolic Logic, 43 (1978), no.1, pp.82--91.




Around n-dependent fields
SPEAKER: Nadja Hempel

ABSTRACT. The notion of n-dependent theories, introduced by Shelah, is a natural generalization of dependent (more frequently called NIP) theories. These form a proper hierarchy of first-order theories in which the case n = 1 coincides with NIP theories. In my talk, I give an overview of algebraic extensions of fields defined in structures with certain properties (superstable, stable, NIP, etc.). For instance, infinite NIP fields of positive characteristic are known to be Artin-Schreier closed. I extend this result to the wider class of infinite n-dependent fields for any natural number n and present some applications to valued fields defined in this setting. Secondly, I show that pseudo-algebraically closed (PAC) fields which are not separably closed have the n-independence property for all natural numbers n; this was already known for the independence property (n = 1) due to Duret. Hence, such PAC fields lie outside the hierarchy of n-dependent fields.

Some remarks on $\aleph_0$-categorical weakly circularly minimal structures

SPEAKER: Beibut Kulpeshov (Department of Information Systems and Mathematical Modelling, International Information Technology University, Almaty, Kazakhstan; b.kulpeshov@iitu.kz)

ABSTRACT. The notion of {\it circular minimality} was introduced and originally studied by D.~Macpherson and C.~Steinhorn in \cite{mac}. Here we continue studying its generalisation, the notion of {\it weak circular minimality}.

A {\it circular} order relation is described by a ternary relation $K$ satisfying the following conditions:
(co1) $\forall x\forall y\forall z\,(K(x,y,z)\to K(y,z,x))$;
(co2) $\forall x\forall y\forall z\,(K(x,y,z)\land K(y,x,z) \Leftrightarrow x=y \lor y=z \lor z=x)$;
(co3) $\forall x\forall y\forall z\,(K(x,y,z)\to \forall t\,[K(x,y,t) \lor K(t,y,z)])$;
(co4) $\forall x\forall y\forall z\,(K(x,y,z)\lor K(y,x,z))$.
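As a concrete illustration (a standard construction, not part of the original abstract), every linear order induces a circular order by "wrapping around":

```latex
% Given a linearly ordered set (M, \leq), define a ternary relation K by
%
%   K(x,y,z) \;:\Longleftrightarrow\;
%       (x \leq y \leq z) \;\lor\; (y \leq z \leq x) \;\lor\; (z \leq x \leq y).
%
% Intuitively, K(x,y,z) says that y lies on the (clockwise) arc from x to z.
% One can verify (co1)-(co4); e.g. (co1) holds because the three disjuncts
% are permuted by the cyclic shift (x,y,z) \mapsto (y,z,x).
```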

A subset $A$ of a circularly ordered structure $M$ is said to be {\it convex} if for any $a, b\in A$ one of the following holds: for every $c\in M$ with $K(a,c,b)$ we have $c\in A$, or for every $c\in M$ with $K(b,c,a)$ we have $c\in A$. A circularly ordered structure $M=\langle M, K, \ldots \rangle$ is {\it weakly circularly minimal} if every definable (with parameters) subset of $M$ is a finite union of convex sets \cite{km}. Every weakly o-minimal structure is weakly circularly minimal, but the converse is not true in general. Some interesting examples of weakly circularly minimal structures that are not weakly o-minimal were studied in \cite{km, kb30, ksm}.

In \cite{km}--\cite{ksm}, $\aleph_0$-categorical 1-transitive weakly circularly minimal structures were studied and a description of them up to binarity was obtained. Here we discuss some properties of $\aleph_0$-categorical weakly circularly minimal structures that are not 1-transitive. In particular, we study the behaviour of 2-formulas in such structures.

\begin{thebibliography}{10}
\bibitem{mac} H.D.~Macpherson, Ch.~Steinhorn, {\it On variants of o-minimality}, {\bf Annals of Pure and Applied Logic}, 79 (1996), pp. 165--209.
\bibitem{km} B.Sh.~Kulpeshov, H.D.~Macpherson, {\it Minimality conditions on circularly ordered structures}, {\bf Mathematical Logic Quarterly}, 51 (2005), pp. 377--399.
\bibitem{kb30} B.Sh.~Kulpeshov, {\it On $\aleph_0$-categorical weakly circularly minimal structures}, {\bf Mathematical Logic Quarterly}, 52 (2006), pp. 555--574.
\bibitem{ksm} B.Sh.~Kulpeshov, {\it Definable functions in the $\aleph_0$-categorical weakly circularly minimal structures}, {\bf Siberian Mathematical Journal}, 50 (2009), pp. 282--301.
\end{thebibliography}

Uniqueness of limit models in metric abstract elementary classes under categoricity and some consequences in domination and orthogonality of Galois types
SPEAKER: unknown

ABSTRACT. Abstract Elementary Classes (AECs) provide an abstract framework for studying classes of structures that are not first-order axiomatizable. Grossberg, VanDieren and Villaveces studied uniqueness of limit models as a weak notion of superstability in AECs.

Hirvonen and Hyttinen gave an abstract setting, similar to AECs, for studying classes of metric structures that are not axiomatizable in continuous logic, called Metric Abstract Elementary Classes (MAECs).

In this work, we discuss a metric version of limit models as a weak form of superstability in categorical MAECs, together with some consequences of the uniqueness of limit models for domination, orthogonality and parallelism of Galois types.

Amalgamation, characterizing cardinals and locally finite Abstract Elementary Classes
SPEAKER: unknown

ABSTRACT. We introduce the concept of a locally finite Abstract Elementary Class and develop the theory of excellence for such classes. From this we find a family of complete $L_{\omega_1,\omega}$ sentences $\phi^r$ such that $\phi^r$ is $r$-excellent and $\phi^r$ homogeneously characterizes $\aleph_r$, improving results of Hjorth and Laskowski-Shelah and answering a question of Souldatos. This provides the first example of an Abstract Elementary Class where the spectrum of cardinals on which amalgamation holds contains more than one interval.

Computing the number of types of infinite tuples
SPEAKER: Will Boney

ABSTRACT. We show that the number of types of tuples of a fixed length can be calculated from the number of 1-types and the length of the tuples. Specifically, if $\kappa \leq \lambda$, then

$$\sup_{\|M\| = \lambda} |S^\kappa(M)| = \left( \sup_{\|M\| = \lambda} |S^1(M)| \right)^\kappa$$

We show that this holds for any Abstract Elementary Class with $\lambda$-amalgamation; the result is new even for first-order theories when $\kappa$ is infinite. No such calculation is possible for nonalgebraic types. We introduce a generalization of nonalgebraic types for which the same upper bound holds, and use this to answer a question of Shelah from [Sh:c].