
08:45-10:15 Session 13A: LCC Opening and Invited Talk (Moser) (LCC)
Location: FH, Zeichensaal 3
LCC'14 Opening Remarks
Weighted Automata Theory for Complexity Analysis of Rewrite Systems
SPEAKER: Georg Moser

ABSTRACT. Matrix interpretations can be used to bound the derivational and runtime complexity of term rewrite systems. In particular, triangular matrix interpretations over the natural numbers are known to induce polynomial upper bounds on the complexity of (compatible) rewrite systems. Recently a couple of improvements were proposed, based on the theory of weighted automata and linear algebra. In this talk we focus on the application of weighted automata theory to the runtime complexity analysis of rewrite systems and clarify the connection to joint spectral radius theory.

The talk is based on joint work with Aart Middeldorp, Friedrich Neurauter, Johannes Waldmann, and Harald Zankl.

Non-uniform Polytime Computation in the Infinitary Affine Lambda-calculus
SPEAKER: Damiano Mazza

ABSTRACT. We give an implicit, functional characterization of the class of non-uniform polynomial time languages, based on an infinitary affine lambda-calculus and on previously defined bounded-complexity subsystems of linear (or affine) logic. The fact that the characterization is implicit means that the complexity is guaranteed by structural properties of programs rather than explicit resource bounds. As a corollary, we obtain a proof of the (already known) P-completeness of the normalization problem for the affine lambda-calculus which mimics in an interesting way Ladner's P-completeness proof of CIRCUIT VALUE (essentially, the argument giving the Cook-Levin theorem). This suggests that the relationship between affine and usual lambda-calculus is deeply similar to that between Boolean circuits and Turing machines, opening some interesting perspectives, which we discuss.

08:45-10:15 Session 13B: Calculi (HOR)
Location: FH, Seminarraum 101C
The dynamic pattern calculus as a higher-order pattern rewriting system

ABSTRACT. We show that Jay and Kesner’s dynamic pattern calculus can be embedded into a higher-order pattern rewriting system in the sense of Nipkow. Metatheoretical results, such as confluence and standardisation, are shown to obtain for the dynamic pattern calculus as a consequence of this embedding. The embedding also opens a way to study the semantics of Jay’s programming language Bondi based on pattern matching.

Distilling Abstract Machines

ABSTRACT. It is well-known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between big-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, catching many machines in the literature (such as the KAM, the CEK, and the ZINC), and we show that distillation preserves the time complexity of executions, i.e., the LSC is a complexity-preserving abstraction of abstract machines.
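
As background for the machines mentioned above, here is a minimal sketch of the Krivine Abstract Machine (KAM) for call-by-name evaluation of closed lambda-terms. The term representation (tuples with de Bruijn indices) and all names are our own illustration, not the paper's formulation.

```python
# Minimal Krivine Abstract Machine (KAM), call-by-name, closed terms only.
# Terms use de Bruijn indices:
#   ("var", n) | ("lam", body) | ("app", fun, arg)
# A machine state is (term, environment, stack); environment entries and
# stack entries are closures, i.e. (term, environment) pairs.

def kam(term):
    env, stack = [], []
    while True:
        tag = term[0]
        if tag == "app":              # push: argument closure goes on the stack
            stack.append((term[2], env))
            term = term[1]
        elif tag == "lam" and stack:  # grab: pop one argument into the environment
            env = [stack.pop()] + env
            term = term[1]
        elif tag == "var":            # lookup: jump to the n-th closure
            term, env = env[term[1]]
        else:                         # lambda with empty stack: weak head normal form
            return term, env

# (\x. x) (\y. y) evaluates to \y. y with an empty environment
I = ("lam", ("var", 0))
print(kam(("app", I, I)))  # -> (('lam', ('var', 0)), [])
```

Each machine transition (push, grab, lookup) corresponds to the kind of step that the distillation described above either simulates in the LSC or absorbs into structural congruence.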

Experience with Higher Order Rewriting from the Compiler Teaching Trenches

ABSTRACT. I have now twice used the “HACS” compiler generator tool in a New York Univ. graduate Compiler Construction class. HACS is based on a higher-order rewriting formalism, thus I have effectively been teaching students higher order rewriting techniques as the way to implement compilers. In this talk I report on how HACS matches specific rewriting notions to the techniques used by compiler writers, and where the main difficulties have been encountered in teaching these.

09:00-10:15 Session 14: Improvements to ACL2 (ACL2)
Location: FH, Hörsaal 7
Industrial-Strength Documentation for ACL2
SPEAKER: Jared Davis

ABSTRACT. The modeling tools, specifications, and proof scripts for an industrial formal verification effort can easily reach hundreds of thousands of lines of source code. High quality documentation is vital for teams that are working on projects of this scale. We have developed a flexible, scalable documentation tool for ACL2 that can incorporate the documentation for the ACL2 theorem prover, the Community Books, and an organization’s internal formal verification projects. The resulting manuals are automatically accurate and current, and are easily deployed throughout an organization.

Enhancements to ACL2 in Versions 6.2, 6.3, and 6.4
SPEAKER: Matt Kaufmann

ABSTRACT. We report on improvements to ACL2 made since the 2013 ACL2 Workshop.

09:10-10:15 Session 15: Lambda Calculus (SD)
Location: FH, Zeichensaal 1
Opening Remarks
SPEAKER: unknown

ABSTRACT. Welcome to the Third International Workshop on Structures and Deduction.

Simply Typed Lambda-Calculus Modulo Type Isomorphisms
SPEAKER: Gilles Dowek

ABSTRACT. In this talk, I will present an extension of simply-typed lambda-calculus where isomorphic types are identified. This requires introducing new reduction rules, some of which are non-deterministic. The normalization of this system is a non-trivial adaptation of the reducibility method.

Joint work with Alejandro Díaz-Caro
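
For context, typical isomorphisms identified in such systems include commutativity of conjunction, distributivity of implication over conjunction, and currying. These are standard examples from the literature on type isomorphisms; the exact rule set of the talk may differ.

```latex
% Standard type isomorphisms (illustrative; not necessarily the talk's exact system):
A \wedge B \;\equiv\; B \wedge A
\qquad
A \Rightarrow (B \wedge C) \;\equiv\; (A \Rightarrow B) \wedge (A \Rightarrow C)
\qquad
(A \wedge B) \Rightarrow C \;\equiv\; A \Rightarrow (B \Rightarrow C)
```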

10:15-10:45 Coffee Break
10:45-12:00 Session 16A: PC Opening and Invited Talk (Pavel Pudlak) (PC)
Location: FH, Seminarraum 101B
On Some Conjectures in Proof Complexity
SPEAKER: Pavel Pudlák

ABSTRACT. Our aim is to study conjectures that are somehow connected with the consistency and/or incompleteness of first order theories. We want to fully understand (1) how far we can go by considering stronger and stronger conjectures and (2) whether the system of conjectures splits into incomparable ones or there are always stronger unifying conjectures. Currently the conjectures that we have can be classified into two levels - uniform and nonuniform - and two directions - universal and existential. We still hope to be able to find a stronger unifying conjecture.

10:45-12:45 Session 16B: Contributed Talks (LCC)
Location: FH, Zeichensaal 3
Towards recursion schemata for the probabilistic class PP

ABSTRACT. We propose a recursion-theoretic characterization of the probabilistic class PP, using recursion schemata with pointers.

Closing a Gap in the Complexity of Refinement Modal Logic

ABSTRACT. Refinement Modal Logic (RML), which was recently introduced by Bozzelli et al., is an extension of classical modal logic which allows one to reason about a changing model. In this paper we study computational complexity questions related to this logic, settling a number of open problems. Specifically, we study the complexity of satisfiability for the existential fragment of RML, a problem posed by Bozzelli, van Ditmarsch and Pinchinat. Our main result is a tight PSPACE upper bound for this problem, which we achieve by introducing a new tableau system. As a direct consequence, we obtain a tight characterization of the complexity of the existential fragment of RML satisfiability.

On the Complexity of Symbolic Verification and Decision Problems in Bit-Vector Logic

ABSTRACT. We study the complexity of decision problems encoded in bit-vector logic. This class of problems includes word-level model checking, i.e., the reachability problem for transition systems encoded by bit-vector formulas. Our main result is a generic theorem which determines the complexity of a bit-vector encoded problem from the complexity of the problem in explicit encoding. In particular, NL-completeness of graph reachability directly implies PSPACE-completeness and EXPSPACE-completeness for word-level model checking with unary and binary encoded scalars, respectively. In general, problems complete for a complexity class C are shown to be complete for an exponentially harder complexity class than C when represented by bit-vector formulas with unary encoded scalars, and further complete for a double exponentially harder complexity class than C with binary encoded scalars. We also show that multi-logarithmic succinct encodings of the scalars result in completeness for multi-exponentially harder complexity classes. Technically, our results are based on concepts from descriptive complexity theory and related techniques for OBDDs and Boolean encodings.

First-Order Logic of Order and Metric has the Three-Variable Property

ABSTRACT. Real-time specifications refer both to order-theoretic and metric temporal properties. A natural framework for such specifications is monadic first-order logic over the ordered real line with unary +1 function. Our main result is that this structure has the 3-variable property---every monadic first-order formula with at most 3 free variables is equivalent to one that uses 3 variables in total. As a corollary we obtain also the 3-variable property for certain unary arithmetic functions in place of +1. As a negative example we exhibit a bijection on the reals such that the resulting structure does not have the k-variable property for any k.
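
A standard illustration of the variable-reuse idea behind such results (our example, not the paper's): over a linear order, the existence of a chain of four distinct elements can be stated with only three variables by re-binding one of them.

```latex
% "There exist a < b < c < d" over a linear order, with three variables
% x, y, z in total: x is re-bound for the fourth element of the chain.
\exists x\,\exists y\,\bigl(x < y \wedge
  \exists z\,(y < z \wedge \exists x\,(z < x))\bigr)
```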

10:45-13:00 Session 16C: Datatypes and machine learning (ACL2)
Location: FH, Hörsaal 7
Experiments with TPTP-style Automated Theorem Provers on ACL2 Problems
SPEAKER: unknown

ABSTRACT. We present first experiments with translating ACL2 problems to TPTP and running TPTP-style ATPs on such problems.

Polymorphic Types In ACL2
SPEAKER: unknown

ABSTRACT. This paper describes a tool suite for the ACL2 programming language which incorporates certain ideas from the Hindley-Milner paradigm of functional programming (as manifested in popular languages like ML and Haskell), including a "typed" style of programming with the ability to define polymorphic types. These ideas are introduced via macros into the language of ACL2, taking advantage of ACL2's guard-checking mechanism to perform type checking on both function definitions and theorems. Finally, we discuss how these macros can be used to implement features of a high-level software specification and implementation language. (citation Specware/Kestrel)

Data Definitions in ACL2 Sedan

ABSTRACT. We present a data definition framework that supports a wide variety of ways of specifying (classifying) "new" data types in terms of existing types. A distinguishing feature of our approach is that we maintain both an analytic (recognizer/predicate function) and a synthetic (enumerator function) characterization of a data definition. We describe the syntax and semantics of our interfacing macros (defdata et al.). The defdata language concisely captures common data definition patterns, e.g. list types, map types, and record types, and installs theories raising the level of automation for reasoning about such types (and associated functions). It also provides support for polymorphic functions, by using the advanced ACL2 features of encapsulate and functional instantiation, without exposing them to the user. We give a complete characterization (in terms of tau-rules) of the inclusion/exclusion relations a defdata type definition induces. We present a number of examples illustrating usage of defdata. The data definition framework has been implemented as a component of counterexample generation support in ACL2 Sedan, but can be used independently, and is available as a community book.

ACL2(ml): Machine-Learning for ACL2
SPEAKER: unknown

ABSTRACT. ACL2(ml) is an extension for the Emacs interface of ACL2. It uses machine learning to help the ACL2 user during proof development. ACL2(ml) gives hints to the user in the form of families of similar theorems, and generates auxiliary lemmas automatically. In this paper, we present the two most recent extensions of ACL2(ml). First, ACL2(ml) can suggest families of similar definitions, in addition to families of similar theorems. Second, the lemma generation tool has been improved with a method to generate preconditions based on the guard mechanism of ACL2. The user of ACL2(ml) can also invoke the latter extension directly to obtain preconditions for their conjectures.

10:45-12:00 Session 16D: Foundations (HOR)
Location: FH, Seminarraum 101C
The Higher-order Dependency Pair Framework
SPEAKER: Cynthia Kop

ABSTRACT. In recent years, two different dependency pair approaches have been introduced: the dynamic and static styles. The static style is based on a computability argument, and is limited to plain function-passing systems. The dynamic style has no limitations, but standard techniques to simplify sets of dependency pairs – such as the subterm criterion and usable rules – are not applicable. The basic reduction pair technique is also significantly weaker than in the static case. On the other hand, when the system is left-linear and left-abstraction-free, we can significantly improve the dynamic approach. In this talk, I will discuss how to combine the dynamic and static styles in a single dependency pair framework, extending the various notions from the first-order dependency pair framework.

Feebly not weakly

ABSTRACT. Some rewrite systems are not orthogonal, in that they do have critical peaks, but are very close to being orthogonal, in that for any given object there exists a partial function, called orthogonalisation, mapping the set of all redexes to an orthogonal subset and every multi-step to an equivalent one. Term rewrite systems having only trivial peaks, so-called weakly orthogonal systems with the lambda-beta-eta-calculus as prime example, are known to admit such an orthogonalisation. Here we characterise the term rewrite systems that admit orthogonalisation as those whose critical peaks are feeble, in that at least two out of the three terms in such a peak must be identical (generalising weak orthogonality).

Report from the HOR 2014 Chair & Discussion

10:45-12:45 Session 16E: Sequent Calculus and Proof Nets (SD)
Location: FH, Zeichensaal 1
Completions and the Schuette method

ABSTRACT. The cut-elimination theorem of Gerhard Gentzen is one of the most beautiful and useful theorems in structural proof theory. For certain logics there is an algebraic view on cut-elimination in which this property is proved via completions of certain semi-algebraic structures corresponding to cut-free sequent calculi. There is, however, for intuitionistic propositional logic another way to obtain Heyting algebras refuting an underivable sequent, the so-called Schuette method. We describe this method and compare it to the method of producing algebras via completions.

A Correctness Criterion Free from Switchings
SPEAKER: unknown

ABSTRACT. We present a new correctness criterion for Multiplicative Linear Logic (MLL) proof nets. Our criterion generalizes a criterion recently introduced by Mogbil and Jacobé de Naurois in order to characterize the space complexity of the correctness problem for M(E)LL: while the Mogbil-Naurois criterion relies on a single switching, our criterion completely abstracts from the notion of switchings.

This is done in two steps. First we define dependency graphs on proof nets rather than on correction graphs, as in Mogbil-Naurois, making them switching-independent. Then, we partition our criterion into two parts: one part deals exclusively with tensor, axiom and cut inferences, while the other deals with the structure of par inferences by requiring dependency graphs to be acyclic flowgraphs (or SDAGs). We finally compare both notions of dependency graphs.

First-order Proofs Without Syntax: Summary of Work in Progress

ABSTRACT. "...mathematicians care no more for logic than logicians for mathematics." - Augustus de Morgan, 1868

Formal proofs are traditionally syntactic, built inductively from symbolic inference rules. This paper reformulates classical first-order logic with proofs which are combinatorial rather than syntactic. A *combinatorial proof* of a formula F is a graph on a variant formula F′ which satisfies geometric properties. For example, an edge between the two Ps in ∀xPx ⇒ ∃x∀yPy constitutes a combinatorial proof of ∃x(Px ⇒ ∀yPy).

Combinatorial proofs provide a canonical abstraction of Gentzen’s sequent calculus LK: (a) the correctness of a combinatorial proof can be verified in polynomial time (conjectured linear), and (b) there is a function from LK proofs to combinatorial proofs which identifies many LK proofs.

This paper summarizes ongoing work on a sequel to Proofs Without Syntax [Annals of Mathematics, 2006], which introduced propositional combinatorial proofs.

12:00-13:00 Session 17A: HOR/WIR Invited Talk (HOR and WIR)
Location: FH, Seminarraum 101C
On Infinitary Affine Lambda-Calculi
SPEAKER: Damiano Mazza

ABSTRACT. We summarize recent work based on affine lambda-calculi which brings together some aspects of infinitary and higher-order rewriting. In particular, we discuss three points: the relationship with the (infinitary) lambda-calculus; the categorical perspective, which gives a way of building models of linear logic; and the applications to implicit computational complexity, in particular the possibility of representing non-uniform polytime computation in a functional, higher-order setting.

12:00-13:00 Session 17B: Contributed Talks: SAT and QBF (PC)
Location: FH, Seminarraum 101B
Size-Space Bounds and Trade-offs for CDCL Proofs
SPEAKER: Marc Vinyals

ABSTRACT. A long line of research has shown that conflict-driven clause learning viewed as a proof system can polynomially simulate resolution. We give a more explicit description of this CDCL proof system, which allows us to define proof size and space in a natural way and also capture aspects such as clause learning schemes and restart policies. This in turn makes it possible to study which upper and lower bounds on size, space and size-space trade-offs in general resolution carry over to proofs actually realizable by CDCL solvers.

Beyond Q-Resolution and Prenex Form: A Proof System for Quantified Constraint Satisfaction
SPEAKER: Hubie Chen

ABSTRACT. We consider the quantified constraint satisfaction problem (QCSP) which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.

13:00-14:30 Lunch Break
14:30-15:30 Session 18A: Invited Talk (Jakob Nordström) (PC)
Location: FH, Seminarraum 101B
A (Biased) Survey of Space Complexity and Time-Space Trade-offs in Proof Complexity

ABSTRACT. This presentation is intended as a survey of results on proof space and time-space trade-offs in proof complexity. We will try to provide some proof sketches to give a flavour of the proof methods used, but also explain the limitations of current techniques and state some tantalising problems that remain open.

14:30-16:00 Session 18B: Invited Talk (Buss) and Contributed Talks (LCC)
Location: FH, Zeichensaal 3
Search problems, proof complexity, and second-order bounded arithmetic

ABSTRACT. This talk will discuss NP search problems, in particular based on the local improvement principles, in second-order bounded arithmetic.  The bounded arithmetic theory U-1-2 captures exactly the polynomial space computable functions and predicates.  It can formalize Savitch's theorem on non-deterministic space, and universal theorems in U-1-2 translate to quasipolynomial size Frege proofs.  The theory V-1-2 plays a similar role for exponential time computable functions and predicates, and quasipolynomial size extended Frege proofs.

The local improvement principles, first introduced by Kolodziejczyk, Nguyen, and Thapen, are a class of total NP search problems.   The talk discusses sharpened results for characterizing the power of various local improvement principles, giving completeness results for a broader range of search problems.

Most of the new results reported in this talk are joint with A. Beckmann.


Infinite AC0 Circuits for Parity

ABSTRACT. For arbitrary d, k ∈ ℕ, we consider a first-order theory T of "AC0 circuits of depth d and size n^k which compute a parity function", where n is a possibly infinite number of Boolean variables. The theorem PARITY ∉ AC0 is directly equivalent to the statement that T does not possess arbitrarily large finite models for all d, k. A slightly stronger theorem (which follows from derandomization results of Agrawal [2001]) shows that T has no pseudofinite model which includes a BIT predicate. However, unlike the theories of AC0 studied in proof complexity (which are typically formulated within Peano Arithmetic), models of T are not necessarily pseudofinite. In fact, the main purpose of this paper is to address the question: does T have any infinite model? A negative answer would be very interesting, as PARITY ∉ AC0 would follow directly via the Compactness Theorem without recourse to finite combinatorics.

Unfortunately, our main result shows this initial hope was too optimistic: we construct an infinite model of T for some d, k (unpublished joint work with Saharon Shelah). This counterexample leads us to consider a stronger theory, T<, which augments T by a linear order and axioms expressing that every definable set has a minimal element. We also consider a stricter notion of definable parity function along the lines of Ajtai [1995]. This theory T< appears to defeat counterexamples based on symmetry. We leave it as an intriguing open question whether or not T< has any infinite model. Here again a negative answer would yield a non-combinatorial proof of PARITY ∉ AC0.

14:30-16:00 Session 18C: Keynote Mike Gordon (ACL2)
Location: FH, Hörsaal 7
Linking ACL2 and HOL: past achievements and future prospects
SPEAKER: Mike Gordon

ABSTRACT. Over the years there have been several attempts to obtain the amazing automation and efficiency of ACL2 theorem proving within various versions of the HOL proof assistant. These include building a Boyer-Moore waterfall as a tactic, dynamically linking to ACL2 using the PROSPER Plug-In Interface and, most recently, embedding the ACL2 logic in the HOL logic and then using ACL2 as a trusted oracle. These activities have differed in goals and methods, e.g. placing different emphases on usability, efficiency and logical coherence. The talk will start with a critical historical overview, and will end with thoughts on possible future projects.

14:30-16:00 Session 18D (WIR)
Location: FH, Seminarraum 101C
Dynamical systems, attractors and well-behaved infinitary rewriting

From the finite to the transfinite: Λμ-terms and streams
SPEAKER: unknown

Normal Forms and Infinity
SPEAKER: unknown

14:30-16:00 Session 18E: Algebra and Topology (SD)
Location: FH, Zeichensaal 1
Proof theory for ordered algebra: amalgamation and densification

ABSTRACT. Proof theory plays only a limited role in the context of superintuitionistic/substructural logics. We certainly have cut elimination, disjunction property, Herbrand's theorem, interpolation and density elimination. But is there anything else? Our purpose is to find further applications of proof theory by making it directly act on algebras (Heyting/FL algebras) rather than on syntactic formulas and proof systems.

It is well known that a superintuitionistic logic enjoys the interpolation property if and only if the corresponding variety of Heyting algebras enjoys the amalgamation property. This means that a proof theoretic method for interpolation (Maehara's Lemma) potentially has an effect of constructing an amalgam in the context of Heyting algebras. Likewise, a proof theoretic method for eliminating the density rule in the hypersequent calculus has an effect of densifying a given Heyting/FL chain.

In this talk, we will illustrate such direct applications of proof theory to Heyting/FL algebras. Specifically, we will explain how Maehara's lemma and the density elimination theorem can be construed as algebraic constructions for amalgamation and densification.

Weak topologies for Linear Logic
SPEAKER: Marie Kerjean

ABSTRACT. We construct a categorical model of Linear Logic, whose objects are all the locally convex and separated topological vector spaces endowed with their weak topology. Linear proofs are interpreted as continuous linear functions, and non-linear proofs as sequences of monomials. The duality in this interpretation of Linear Logic does not come from an orthogonality relation, thus we do not complete our constructions by a double-orthogonality operation. This yields an interpretation of polarities with respect to weak topologies.

14:30-16:00 Session 18F (2FC)
Location: FH, Dissertantenraum E104
A cubical representation of local states

ABSTRACT. In this talk, I will explain how imperative programs with local stores may be represented as cubes living inside types, themselves represented as cubical sets. I will illustrate with a few examples how this geometric representation of programs with states should help us to reason about resource allocation and deallocation in imperative programs.

This is joint work with Kenji Maillard.

Computational complexity in the lambda-calculus: what's new?
SPEAKER: Damiano Mazza

ABSTRACT. The lambda-calculus models computation via substitution, an abstract notion whose concrete meaning in terms of elementary computational steps is unclear a priori. As a consequence, the model of choice for computational complexity is usually some variant of Turing machines, even in the higher-order case (which seems regretful, given the functional nature of the lambda-calculus). In recent years, a number of tools related to linear logic (proof nets, explicit substitutions, linear calculi) have provided us with a finer understanding of the substitution process, offering new perspectives on the relationship between lambda-calculus and computational complexity. In this talk, I will survey three specific cases: Accattoli and Dal Lago's invariance result for beta-reduction; an ongoing work by Dal Lago and myself proposing a new formulation of higher-order complexity entirely based on a lambda-calculus; and some of my results and ongoing work on characterizing non-uniform circuit complexity classes with infinitary affine lambda-calculi.

On the parallel reduction and complexity of interaction-net systems

ABSTRACT. Interaction nets form an inherently parallel computation model. We approach complexity analysis of interaction-net systems using typing and semantic interpretation techniques.

15:30-16:00 Session 19: Contributed Talk: Polynomial Calculus (PC)
Location: FH, Seminarraum 101B
Towards an Understanding of Polynomial Calculus: New Separations and Lower Bounds
SPEAKER: Marc Vinyals

ABSTRACT. Recently, there has been an active line of research into the space complexity of proofs. We prove that if a formula requires large resolution width, then making an XOR substitution yields a formula requiring large PC space. This yields formulas exhibiting a degree-space separation similar to what is known for resolution. Using related ideas, we show that Tseitin formulas over random 4-regular graphs almost surely require space at least Ω(√n).

Our proofs use techniques from [Bonacina-Galesi '13]. Our final contribution, however, is to show that these techniques cannot yield non-constant space lower bounds for the functional pigeonhole principle, which is conjectured to be hard for PC space.

16:00-16:30 Coffee Break
16:30-18:00 Session 20A: Contributed Talks: Bounded Arithmetic (PC)
Location: FH, Seminarraum 101B
Provably Total Search Problems in Fragments of Bounded Arithmetic Below Polynomial-time

ABSTRACT. In bounded arithmetic, a host of theories have been developed which correspond to complexity classes within the polynomial hierarchy and below polynomial-time. Recent research has tried to characterize the provably total NP search problems in such theories, where a total NP search problem is provably total in a theory T if it can be formalized in the language of T, and T can prove that, for each instance, there exists a solution to the search problem. For example, Buss and Krajicek showed that the class of provably total NP search problems in a certain bounded arithmetic theory (denoted T12 or TV1) is characterized by PLS. Further characterizations for theories above T12 were later discovered. To the best of our knowledge, no such characterizations were known for the theories below T12 until this work.

Given a class S of provably total NP search problems for some theory, the general aim of our research project is to identify some specific provably total NP search problem class (usually defined via some specific combinatorial principle) which is complete within S under AC0-many-one reduction; completeness should be proven using AC0-reasoning only, i.e. formalizable in V0. For example, it was shown by Cook and Nguyen that PLS, where the neighborhood function and the initial point are given by AC0-functions, is complete in the above sense for the class of provably total NP search problems in T12.

For the theory related to polynomial-time, we identify the search problem class Inflationary Iteration (IITER) which serves our above described aim. A function F (defined on finite strings) is inflationary if X is a subset of F(X) (under the natural identification of strings with finite sets). An IITER principle is defined as a special case of the iteration principle, in which the iterated function has to be AC0-computable and inflationary.

Cook and Nguyen have a generic way of defining a bounded arithmetic theory VC for complexity classes C below polynomial-time. For such a theory VC, we define a search problem class KPT[C] which serves our above-described aim. These problems are based on a version of Herbrand's theorem proven by Krajicek, Pudlak and Takeuti.
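
The inflationary iteration principle described above can be illustrated with a minimal sketch of our own (the actual IITER principle additionally requires the iterated function to be AC0-computable): iterating an inflationary F from the empty set over an n-element universe must hit a fixed point within n steps, since every non-fixed step strictly enlarges the set.

```python
# Sketch of inflationary iteration: if F is inflationary (X is always a
# subset of F(X)) on subsets of an n-element universe, then iterating F
# from the empty set reaches a fixed point after at most n applications,
# because each non-fixed step strictly grows the set. Illustrative only;
# the paper's IITER principle also constrains F to be AC0-computable.

def iterate_inflationary(f, universe):
    x = frozenset()
    for _ in range(len(universe) + 1):
        y = f(x)
        assert x <= y, "f must be inflationary"
        if y == x:
            return x      # fixed point reached
        x = y
    return x

# Example inflationary F: add 0, then close under "successor mod 8"
universe = frozenset(range(8))
f = lambda x: x | {0} | {(i + 1) % 8 for i in x}
print(sorted(iterate_inflationary(f, universe)))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```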

Parameter-free induction in bounded arithmetic

ABSTRACT. In the area of strong fragments of Peano Arithmetic, it proved fruitful to study not just the usual induction fragments IΣi, but also fragments axiomatized by parameter-free induction schemata, and theories axiomatized using the closely related induction inference rules.

In contrast, induction rules and parameter-free schemata have been largely ignored in the bounded arithmetic literature. Apart from IEi-, briefly discussed by Kaye (1990), the corresponding fragments of Buss's S2 were only studied by Bloch (1992) and Cordon-Franco et al. (2009). Many basic questions about these fragments were disregarded; in particular, neither paper even mentions Πbi-induction rules and parameter-free schemata, even though they could be expected to behave rather differently from Σbi-rules by analogy with the case of strong fragments.

In this talk, we will try to systematically investigate parameter-free and inference rule versions of the theories Ti2 and Si2. We will particularly focus on the following questions:

  • Reductions between the various rules and schemata.
  • Conservativity results between the fragments, and the formula complexity of axioms.
  • The number of instances (nesting) of R needed to generate the closure T+R, and finite axiomatizability of fragments.
  • Reformulation of axioms and rules in terms of reflection principles for propositional proof systems.
  • Conditional or relativized separations between the fragments.
Logical strength of complexity theory and a formalization of the PCP theorem in bounded arithmetic

ABSTRACT. The aim of this presentation is to show that a great deal of complexity theory can be formalized in weak fragments of arithmetic such as Cook's theory PV1. We present several known formalizations of theorems from computational complexity in bounded arithmetic and formalize the PCP theorem in the theory PV1 (no formalization of this theorem was previously known). This includes a formalization of the existence and of some properties of the (n,d,λ)-graphs needed to derive the PCP theorem in PV1.

16:30-18:00 Session 20B: Contributed Talks (LCC)
Location: FH, Zeichensaal 3
Descriptive Complexity and CSP

ABSTRACT. We give a common generalization of two theorems from Descriptive Complexity: 1) the Immerman-Vardi theorem, which says that over linearly ordered structures, the logic LFP+C captures PTime and 2) the Cai-Fürer-Immerman theorem, which says that over all finite structures, the logic LFP+C does not capture PTime. Our generalization relies on a connection with Constraint Satisfaction Problems and recent results from CSP theory. The results presented in this abstract are extracted from a paper [KLOT14] accepted to LICS’14.

Capturing Bisimulation-Invariant Complexity Classes with Higher-Order Modal Fixpoint Logic
SPEAKER: Etienne Lozes

ABSTRACT. Polyadic Higher-Order Fixpoint Logic (PHFL) is a modal fixpoint logic obtained as the merger of Higher-Order Fixpoint Logic (HFL) and the Polyadic μ-Calculus. Polyadicity enables formulas to make assertions about tuples of states rather than states only. Like HFL, PHFL has the ability to formalise properties using higher-order functions. We consider PHFL in the setting of descriptive complexity theory: its fragment without higher-order functions is exactly the Polyadic μ-Calculus, and it is known from Otto's Theorem that it captures the bisimulation-invariant fragment of PTIME. We extend this and give capturing results for the bisimulation-invariant fragments of EXPTIME, PSPACE, and NLOGSPACE.

Small Dynamic Descriptive Complexity Classes
SPEAKER: Thomas Zeume

ABSTRACT. In this talk I will give an overview of results recently obtained for small complexity classes in the dynamic descriptive complexity framework of Patnaik and Immerman. Those results are joint work with Thomas Schwentick and include a hierarchy of dynamic fragments as well as new lower bounds for the quantifier-free fragment.

16:30-19:00 Session 20C: Economics, Rump Sessions and Business Meeting (ACL2)
Location: FH, Hörsaal 7
On Vickrey's Theorem and the Use of ACL2 for Formal Reasoning in Economics
SPEAKER: unknown

ABSTRACT. The ForMaRE project (FORmal MAthematical Reasoning in Economics) is an interdisciplinary effort with the goal of increasing confidence in theoretical results in economics. In a nutshell, the idea is to find suitable automated reasoning platforms for reasoning about economic theory. This requires collaboration among economists and formal methods researchers, but in an ideal world, the automated reasoning platforms will become sufficiently advanced that economists can use them directly. As such, researchers in the ForMaRE project recently analyzed the suitability of four different automated reasoning platforms for this project: Isabelle, Theorema, Mizar, and Hets/CASL/TPTP. A major portion of the analysis was the formalization in each of these systems of Vickrey's theorem on the equilibrium and efficiency of second-price auctions. Conspicuously absent from the list of theorem provers considered is ACL2. This paper describes our attempt to rectify this oversight.

16:30-18:00 Session 20D: WIR Contributed Talks (WIR)
Location: FH, Seminarraum 101C
An Introduction to the Clocked Lambda Calculus
SPEAKER: unknown
Work in Progress: Algebraic Abstract Reduction Systems

ABSTRACT. Several extensions of the lambda-calculus allow a term to be rewritten into a linear combination of terms with coefficients in the semi-ring of non-negative real numbers. The goal of this work is to define a notion of algebraic rewrite system abstracting away from these calculi. In particular, we look for a notion of normalization induced by a rewriting relation compatible with the semi-ring structure and the usual topology on the non-negative reals. The aim is a counterpart of Newman's lemma: "strong normalization together with local confluence implies confluence". We state the formal definitions and show, through several non-trivial examples, how a (naïve) version of the expected implication falls short.
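As a toy illustration of the kind of rewriting described above (my own minimal model, not the authors' definitions), a rewrite step can send a term to a formal linear combination of terms with non-negative real coefficients, and extend linearly to combinations represented as dictionaries from terms to coefficients:

```python
def step(rules, combo):
    """One parallel rewrite step, extended linearly to combinations.

    `rules` maps a term to the linear combination it rewrites to;
    terms without a rule are left unchanged (coefficient 1.0).
    """
    out = {}
    for term, coeff in combo.items():
        for target, weight in rules.get(term, {term: 1.0}).items():
            out[target] = out.get(target, 0.0) + coeff * weight
    return out

# Toy rules: a -> (1/2) b + (1/2) c ;  b -> c ;  c is normal.
rules = {"a": {"b": 0.5, "c": 0.5}, "b": {"c": 1.0}}
combo = {"a": 1.0}
combo = step(rules, combo)   # {"b": 0.5, "c": 0.5}
combo = step(rules, combo)   # {"c": 1.0}, a normal form
print(combo)
```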

Turtle graphics of morphic streams
SPEAKER: Hans Zantema
16:30-17:30 Session 20E: Normalisation and Ludics (SD)
Location: FH, Zeichensaal 1
Substructural Cut Elimination

ABSTRACT. In the paper "Structural Cut Elimination", Pfenning gives a proof of the admissibility of cut for intuitionistic and classical logic. The proof is remarkable in that it does not rely on difficult termination metrics, but rather on a nested lexicographic induction on the structure of formulas and derivations. A crucial requirement for this proof to go through is that contraction is not an inference rule of the system. Because of this, it is necessary to change the inference rules so that contraction becomes an admissible rule rather than an inference rule. This change also requires that weakening is admissible, hence it is not directly applicable to logics in which only contraction is admissible (e.g. relevance logic).

We present a sequent calculus which admits a unified structural cut elimination proof that encompasses Intuitionistic MALL and its affine, strict and intuitionistic extensions. A nice feature of the calculus is that, for instance, moving from linear to strict logic is as simple as allowing the presence of a rule corresponding to contraction.

Multiplicative Decomposition of Behaviours in Ludics
SPEAKER: unknown

ABSTRACT. Ludics is a theory that may be considered, at first glance, as a Brouwer-Heyting-Kolmogorov interpretation of logic, as a formula is denoted by the set of its proofs. More primitively, Ludics is a theory of interaction that models (a variant of) second-order multiplicative-additive Linear Logic. A formula is denoted by a set of objects called a behaviour, a proof by an object that satisfies some criteria. Our aim is to analyze the structure of behaviours in order to better understand and refine the usual notion of formula or type. More precisely, we study properties that guarantee that a behaviour is multiplicatively decomposable.

16:30-18:00 Session 20F (2FC)
Location: FH, Dissertantenraum E104
On the Value of Variables

ABSTRACT. Call-by-value and call-by-need lambda-calculi are defined using the distinguished syntactic category of values. In theoretical studies, values are variables and abstractions. In more practical works, values are usually defined simply as abstractions. This paper shows that practical values lead to a more efficient process of substitution, for both call-by-value and call-by-need, once the usual hypotheses for implementations hold (terms are closed, reduction does not go under abstraction, and substitution is done in micro steps, replacing one variable occurrence at a time). Namely, the number of substitution steps becomes linear in the number of beta-redexes, while theoretical values only provide a quadratic bound.
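The distinction between the two notions of value can be sketched as follows (a minimal illustration with my own names and term representation, not taken from the paper):

```python
# Tiny lambda-term representation:
#   ("var", x) | ("lam", x, body) | ("app", t, u)

def is_theoretical_value(t):
    """Theoretical presentations: variables and abstractions are values."""
    return t[0] in ("var", "lam")

def is_practical_value(t):
    """Practical presentations: only abstractions are values."""
    return t[0] == "lam"

identity = ("lam", "x", ("var", "x"))
print(is_theoretical_value(("var", "y")))  # True: a variable is a theoretical value
print(is_practical_value(("var", "y")))    # False: but not a practical one
print(is_practical_value(identity))        # True: abstractions are values in both
```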

Automated Complexity Analysis of Term Rewrite Systems

ABSTRACT. TcT, the Tyrolean complexity tool, is a tool for automatically proving polynomial upper bounds on the runtime complexity of term rewriting systems. In this talk, I will briefly outline the formal framework and main techniques underlying TcT.

Subrecursive Linear Dependent Types and Computational Security
SPEAKER: Ugo Dal Lago

ABSTRACT. Various techniques for the complexity analysis of programs have been introduced in the last twenty years, ranging from type systems to abstract interpretation to path orders. A peculiar approach consists in adopting linear dependent types, which have been proved to be a relatively complete methodology for complexity analysis. Here we study linear dependency in a subrecursive setting, where recursion is structural. On the one hand, we show that linear dependent types are robust enough to be adapted, by way of effects, to a language with imperative features. On the other, we show that the resulting system is powerful enough to type non-trivial examples drawn from computational security, which is the driving motivation for this work.

18:15-19:15 Session 21 (2FC)
Location: FH, Dissertantenraum E104
Proof theoretic approaches to rewriting
SPEAKER: Thomas Powell

ABSTRACT. A key method for establishing the termination or complexity of a program in some high-level language is to capture its salient features in an abstract rewrite system, which can then be analysed separately. In this talk I will discuss some of the deep links between rewriting theory and classical proof theory, in particular how powerful techniques from proof theory can be used to derive the complexity of rewrite systems and shed light on well-known tools such as recursive path orderings. My aim is ultimately to present some proof-theoretic ideas for understanding and analysing rewrite systems.