VSL 2014: VIENNA SUMMER OF LOGIC 2014
CSL-LICS ON MONDAY, JULY 14TH, 2014

08:45-09:15 Session 37: VSL Opening
Location: EI, EI 7 + EI 9, EI 10 + FH, Hörsaal 1
08:45
Welcome Address by the Rector
08:50
Welcome Address by the Organizers
SPEAKER: Matthias Baaz
08:55
VSL Opening
SPEAKER: Dana Scott
09:15-10:15 Session 38: VSL Keynote Talk
Location: EI, EI 7 + EI 9, EI 10 + FH, Hörsaal 1
09:15
VSL Keynote Talk: Computational Ideas and the Theory of Evolution

ABSTRACT. Covertly computational insights have influenced the Theory of Evolution from the very start. I shall discuss recent work on some central problems in Evolution that was inspired and informed by computational ideas. Considerations about the performance of genetic algorithms led to a novel theory of the role of sex in Evolution based on the concept of mixability, the population genetic equations describing the evolution of a species can be reinterpreted as a repeated game between genes played through the multiplicative update algorithm, while a theorem on satisfiability can shed light on the emergence of complex adaptations.
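
The "multiplicative update" reading of selection mentioned in the abstract can be pictured with a minimal sketch; the Python below is purely illustrative (the fitness numbers are made up and the function names are mine), showing one generation of haploid selection as a multiplicative-weights step.

    # Illustrative sketch: selection as a multiplicative weights update.
    def multiplicative_update(freqs, fitness):
        """One generation: x_i <- x_i * f_i / (mean fitness)."""
        weighted = [x * f for x, f in zip(freqs, fitness)]
        total = sum(weighted)
        return [w / total for w in weighted]

    freqs = [0.25, 0.25, 0.50]   # allele frequencies (hypothetical)
    fitness = [1.0, 1.1, 0.9]    # fitness coefficients (hypothetical)
    for _ in range(100):
        freqs = multiplicative_update(freqs, fitness)
    print(freqs)                 # the fittest allele approaches frequency 1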

10:15-10:45 Coffee Break
10:45-13:00 Session 38A: Tutorial by Assia Mahboubi
Location: FH, Hörsaal 1
10:45
Computer-checked mathematics: a formal proof of the Odd Order theorem

ABSTRACT. The Odd Order Theorem is a landmark result in finite group theory, famous for its crucial role in the classification of finite simple groups, for the novel methods introduced by its original proof, and for the striking contrast between the simplicity of its statement and the unusual length and complexity of its proof. After a six-year collaborative effort, we managed to formalize and machine-check a complete proof of this theorem using the Coq proof assistant. The resulting collection of libraries of formalized mathematics covers a wide variety of topics, mostly in algebra, as this proof relies on a sophisticated combination of different flavours of group theory. In this tutorial we comment on the role played by the different features of the proof assistant, from the meta-theory of its underlying logic to the implementation of its various components. We will also discuss some issues raised by the translation of mathematical textbooks into formal libraries and the perspectives it opens on the use of a computer to do mathematics.
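
For readers unfamiliar with proof assistants, the following toy example conveys what a machine-checked proof looks like; it is written in Lean 4 merely for illustration (the Odd Order formalization itself uses Coq with the SSReflect extension) and proves only a trivial arithmetic fact.

    -- A toy machine-checked theorem in Lean 4 (illustration only; the
    -- Odd Order Theorem was formalized in Coq/SSReflect, not Lean).
    theorem add_zero_comm (m n : Nat) : m + n + 0 = n + m := by
      rw [Nat.add_zero, Nat.add_comm]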

10:45-13:00 Session 38B: Computability
Location: FH, Hörsaal 5
10:45
A quest for algorithmically random infinite structures

ABSTRACT. The last two decades have witnessed significant advances in the investigation of algorithmic randomness, such as Martin-Löf randomness, of infinite strings. In spite of much work, however, this research has excluded the investigation of algorithmic randomness for infinite algebraic structures. The main obstacle to introducing algorithmic randomness for infinite structures is that many classes of infinite structures lack a natural measure. In this paper, we overcome this obstacle by imposing mild finiteness conditions on various classes of infinite structures. These conditions enable us to introduce measure and, as a consequence, to reason about algorithmic randomness. Our classes include finitely generated universal algebras, connected graphs and trees of bounded degree, and monoids. For all these classes one can introduce algorithmic randomness concepts and prove the existence of random structures.
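
For comparison, the classical notion being generalized here is Martin-Löf randomness for infinite binary strings: $X \in 2^{\omega}$ is Martin-Löf random iff $X \notin \bigcap_{n} U_n$ for every uniformly c.e. sequence $(U_n)_{n}$ of open sets with $\mu(U_n) \leq 2^{-n}$, where $\mu$ is the uniform (Lebesgue) measure on $2^{\omega}$. It is precisely this reliance on a measure that fails for general infinite structures.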

11:15
On the Total Variation Distance of Labelled Markov Chains
SPEAKER: unknown

ABSTRACT. Labelled Markov chains (LMCs) are widely used in probabilistic verification, speech recognition, computational biology, and many other fields. Checking two LMCs for equivalence is a classical and extensively studied problem. The total variation distance provides a natural measure of the "inequivalence" of two LMCs: it is the maximum difference between the probabilities that the LMCs assign to the same event.

In this paper we develop a theory of the total variation distance between two LMCs, with emphasis on the algorithmic aspects: (1) we provide a polynomial-time algorithm for determining whether two LMCs have distance 1, i.e., whether they can almost always be distinguished; (2) we provide an algorithm for approximating the distance with arbitrary precision; and (3) we show that the threshold problem, i.e., whether the distance exceeds a given threshold, is NP-hard and hard for the square-root-sum problem. We also make a connection between the total variation distance and Bernoulli convolutions.
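
Concretely, writing $P_1$ and $P_2$ for the probability measures that the two LMCs induce on infinite label sequences, the distance in question is the standard total variation distance

    $d(P_1, P_2) = \sup_{E} |P_1(E) - P_2(E)|,$

where $E$ ranges over measurable sets of label sequences.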

11:45
A domain-theoretic approach to Brownian motion and general continuous stochastic processes
SPEAKER: Paul Bilokon

ABSTRACT. We introduce a domain-theoretic framework for continuous-time, continuous-state stochastic processes. The laws of stochastic processes are embedded into the space of maximal elements of the normalised probabilistic power domain on the space of continuous interval-valued functions endowed with the relative Scott topology. We use the resulting $\omega$-continuous bounded complete dcpo to define partial stochastic processes and characterise their computability. For a given continuous stochastic process, we show how its domain-theoretic, i.e., finitary, approximations can be constructed, whose least upper bound is the law of the stochastic process. As a main result, we apply our methodology to Brownian motion. We construct a partial Wiener measure and show that the Wiener measure is computable within the domain-theoretic framework.

12:15
On the Computing Power of +, −, and ×

ABSTRACT. Modify the Blum-Shub-Smale model of computation replacing the permitted computational primitives (the real field operations) with any finite set B of real functions semialgebraic over the rationals. Consider the class of boolean decision problems that can be solved in polynomial time in the new model by machines with no machine constants. How does this class depend on B? We prove that it is always contained in the class obtained for B = {+, −, ×}. Moreover, if B is a set of continuous semialgebraic functions containing + and −, and such that arbitrarily small numbers can be computed using B, then we have the following dichotomy: either our class is P or it coincides with the class obtained for B = {+, −, ×}.
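
The restricted machine model can be pictured with a small interpreter; the Python sketch below is an illustrative toy under my own conventions (instruction format, register layout), not the paper's formal definition: programs compute over the reals using only the basis operations, branch on sign tests, and carry no machine constants.

    # Toy BSS-style machine over the basis B = {+, -, *}, without constants.
    def run(program, registers):
        """('op', sym, i, j, k): r[i] <- r[j] sym r[k]
           ('jgz', j, target) : jump to target if r[j] > 0
           ('halt', b)        : accept iff b == 1"""
        ops = {'+': lambda a, b: a + b,
               '-': lambda a, b: a - b,
               '*': lambda a, b: a * b}
        r, pc = list(registers), 0
        while True:
            ins = program[pc]
            if ins[0] == 'op':
                _, sym, i, j, k = ins
                r[i] = ops[sym](r[j], r[k])
                pc += 1
            elif ins[0] == 'jgz':
                pc = ins[2] if r[ins[1]] > 0 else pc + 1
            else:
                return ins[1] == 1

    # Decide "x > y" on input registers [x, y, scratch]:
    prog = [('op', '-', 2, 0, 1),  # r2 <- x - y
            ('jgz', 2, 3),         # if r2 > 0, accept
            ('halt', 0),
            ('halt', 1)]
    print(run(prog, [3.0, 2.0, 0.0]))  # True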

13:00-14:30 Lunch Break
14:30-16:00 Session 41A: Tutorial by Jasmin Fisher
Location: FH, Hörsaal 1
14:30
Understanding Biology through Logic
SPEAKER: Jasmin Fisher

ABSTRACT. The complexity in biology is staggering. Biological processes involve many components performing complicated interactions. Moreover, they are organized in hierarchies spanning multiple levels, from individual genes and proteins through signalling pathways, cells, and tissues, to organisms and populations. All the levels in this hierarchy are the subject of a multitude of interdisciplinary efforts to model, analyse, and make sense of all this complexity. Mathematical models (and computer simulations of them) have been used for these purposes for many years. The abilities of modern computers and their increased availability have greatly advanced this kind of modelling. However, in the last decade or so, computational and logical thinking have started to change the way biological models are constructed and analysed. The work of the logic-in-computer-science research community to formalize and enable analysis of computer systems inspired several pioneers to harness these capabilities for the design and analysis of computer models of biological systems. This approach, which we later termed "executable biology", calls for the construction of a program or another formal model that represents aspects of a biological process. By analysing and reasoning about such artefacts we gain additional insights into the mechanisms of the biological processes under study. Over the years, these efforts have successfully demonstrated how this logical perspective on biology can be beneficial for gaining new biological insights and directing new experimental avenues. In this tutorial, I will give an introduction to this approach. I will survey different modelling paradigms and how they are used in models of cell-fate decision-making, organism development, and molecular mechanisms underlying cancer. I will also highlight verification and the use of formal methods to gain new biological insights. Time permitting, I will touch upon some of the challenges involved in synthesizing models directly from experimental data, and on the efforts required to make the computational tools we develop widely adopted by experimentalists and clinicians in the biological and medical research community.
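
To give a flavour of "executable biology", here is a minimal sketch of one standard formalism, a Boolean network; the two-gene mutual-inhibition "toggle switch" below is a hypothetical textbook-style example of mine, not one of the tutorial's models.

    # Executable-biology sketch: Boolean network for a hypothetical
    # two-gene toggle switch (each gene inhibits the other).
    from itertools import product

    def step(state):
        """Synchronous update: a gene is on iff its inhibitor is off."""
        a, b = state
        return (not b, not a)

    # The fixed points are the model's stable states ("cell fates").
    for state in product([False, True], repeat=2):
        if step(state) == state:
            print("stable state:", state)
    # -> (False, True) and (True, False): exactly one gene stays active.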

14:30-16:00 Session 41H: Special session: Perspectives on Induction (joint with LC)
Location: MB, Prechtlsaal
14:30
Inductive Proofs & the Knowledge They Represent

ABSTRACT.  

Proofs by mathematical induction seem to require certain interdependencies between the instances of the generalizations they prove. The character and significance of these interdependencies will be the main subject of this talk.

15:15
Decidable languages for knowledge representation and inductive definitions: From Datalog to Datalog+/-
SPEAKER: Georg Gottlob

ABSTRACT. Datalog is a language based on function-free Horn clauses, used to inductively define new relations over finite relational structures. Datalog has many nice computational and logical properties. For example, Datalog captures PTIME on ordered structures: evaluating a fixed Datalog program (i.e., rule set) over finite structures is in PTIME and, moreover, every PTIME property of ordered structures can be expressed as a Datalog program (see [1] for a survey). After giving a short overview of Datalog, we argue that this formalism has certain shortcomings and is not ideal for knowledge representation, in particular for inductive ontological knowledge representation and reasoning. We consequently introduce Datalog+/-, a new framework for tractable ontology querying and a variety of other applications. Datalog+/- extends plain Datalog with features such as existentially quantified rule heads, negative constraints, and equalities in rule heads, while at the same time restricting the rule syntax so as to achieve decidability and tractability. In particular, we discuss three paradigms ensuring decidability: chase termination, guardedness, and stickiness, which were introduced and studied in [2, 3, 4, 5].

[1] Dantsin, E.; Eiter, T.; Gottlob, G.; and Voronkov, A. 2001. Complexity and expressive power of logic programming. ACM Computing Surveys 33(3):374-425.
[2] Calì, A.; Gottlob, G.; and Kifer, M. 2013. Taming the infinite chase: Query answering under expressive relational constraints. Journal of Artificial Intelligence Research 48:115-174.
[3] Calì, A.; Gottlob, G.; and Lukasiewicz, T. 2012. A general Datalog-based framework for tractable query answering over ontologies. Journal of Web Semantics 14:57-83.
[4] Calì, A.; Gottlob, G.; and Pieris, A. 2012. Towards more expressive ontology languages: The query answering problem. Artificial Intelligence 193:87-128.
[5] Gottlob, G.; Manna, M.; and Pieris, A. 2013. Combining decidability paradigms for existential rules. Theory and Practice of Logic Programming 13(4-5):877-892.
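
To make the contrast in the abstract concrete, compare a plain Datalog rule, which inductively defines a new relation, with a Datalog+/- rule, whose head may be existentially quantified (a textbook-style example, not taken from the talk): in Datalog one may write $\mathit{ancestor}(x,z) \leftarrow \mathit{parent}(x,y), \mathit{ancestor}(y,z)$, whereas Datalog+/- additionally admits rules such as $\exists y\, \mathit{father}(y,x) \leftarrow \mathit{person}(x)$, which assert the existence of possibly unknown individuals. It is rules of this second kind that make restrictions such as guardedness or stickiness necessary to recover decidability.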

16:00-16:30 Coffee Break
16:30-18:30 Session 42A: Computability
Location: FH, Hörsaal 1
16:30
Turing Machines with Atoms, Constraint Satisfaction Problems, and Descriptive Complexity
SPEAKER: unknown

ABSTRACT. We study deterministic computability over sets with atoms. We characterize those alphabets for which Turing machines with atoms determinize. To this end, the determinization problem is expressed as a Constraint Satisfaction Problem, and a characterization is obtained from deep results in CSP theory. As an application to Descriptive Complexity Theory, within a substantial class of relational structures including Cai-Fürer-Immerman graphs, we precisely characterize those subclasses where the logic IFP+C captures order-invariant polynomial time computation.

17:00
Beta Reduction is Invariant, Indeed
SPEAKER: unknown

ABSTRACT. Slot and van Emde Boas' invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notion of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those unsharing steps that only unfold the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
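
The size explosion problem mentioned above is witnessed by a standard family of terms: let $\delta = \lambda x.\, x\, x$ and set $t_0 = y$, $t_{n+1} = \delta\, t_n$. The term $t_n$ has size $O(n)$, yet reducing its redexes innermost-first takes exactly $n$ beta steps and produces the complete binary tree of applications containing $2^n$ occurrences of $y$, an output exponentially larger than the number of steps.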

17:30
On the Pigeonhole and Related Principles in Deep Inference and Monotone Systems
SPEAKER: Anupam Das

ABSTRACT. We construct quasipolynomial-size proofs of the propositional pigeonhole principle in the deep inference system KS, addressing an open problem raised in previous works and matching the best known upper bound for the more general class of monotone proofs.

We make significant use of monotone formulae computing boolean threshold functions, an idea previously considered in works of Atserias et al. The main construction, monotone proofs witnessing the symmetry of such functions, makes use of an implementation of merge-sort in the design of proofs in order to tame the structural behaviour of atoms, and so the complexity of normalisation. Proof transformations from previous work on atomic flows are then employed to yield appropriate proofs in KS.

As further results we show that our constructions can be applied to provide quasipolynomial-size proofs of the Parity principle and the Generalised Pigeonhole principle, and n^(O(log log n))-size proofs of the weak pigeonhole principle, thereby also improving the best known bounds for monotone proofs.
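
The quasipolynomial size of these constructions traces back to the divide-and-conquer monotone formulae for threshold functions (in the style of Atserias et al.): splitting the $2n$ inputs into halves $\vec{x}, \vec{y}$ gives $\mathrm{TH}^{2n}_k(\vec{x},\vec{y}) = \bigvee_{i+j=k} \mathrm{TH}^{n}_i(\vec{x}) \wedge \mathrm{TH}^{n}_j(\vec{y})$, a recursion of depth $\log n$ with polynomial branching, and hence monotone formulae of size $n^{O(\log n)}$.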

18:00
Faster decision of first-order graph properties
SPEAKER: Ryan Williams

ABSTRACT. First-order logic captures a vast number of computational problems on graphs. We study the time complexity of deciding graph properties definable by first-order sentences with $k$ variables in prenex normal form. The trivial algorithm for this problem runs in $O(n^k)$ time on $n$-node graphs (the big-O hides the dependence on $k$).

Answering a question of Miklos Ajtai, we give the first algorithms running faster than the trivial algorithm, in the general case of arbitrary first-order sentences and arbitrary graphs. One algorithm runs in $O(n^{k-3+\omega}) \leq O(n^{k-0.627})$ time for all $k \geq 3$, where $\omega < 2.373$ is the $n \times n$ matrix multiplication exponent. By applying fast rectangular matrix multiplication, the algorithm can be improved further to run in $n^{k-1+o(1)}$ time, for all $k \geq 9$. Finally, we observe that the exponent of $k-1$ is optimal, under the popular hypothesis that CNF satisfiability with $n$ variables and $m$ clauses cannot be solved in $(2-\epsilon)^n \cdot \mathrm{poly}(m)$ time for some $\epsilon > 0$.
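
For intuition, the trivial algorithm is just $k$ nested loops over the vertex set, one per quantified variable. The Python sketch below (conventions and names are mine) evaluates a prenex sentence given as a quantifier list plus a quantifier-free matrix.

    # Naive O(n^k) decision of a prenex FO sentence on a graph.
    def evaluate(quantifiers, matrix, vertices, assignment=()):
        if not quantifiers:
            return matrix(assignment)
        results = (evaluate(quantifiers[1:], matrix, vertices,
                            assignment + (v,))
                   for v in vertices)
        return all(results) if quantifiers[0] == 'forall' else any(results)

    # Example: "forall x exists y E(x,y)" on a directed triangle.
    edges = {(0, 1), (1, 2), (2, 0)}
    print(evaluate(['forall', 'exists'],
                   lambda a: (a[0], a[1]) in edges,
                   vertices=range(3)))  # True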

16:30-18:30 Session 42B: Concurrency
Location: FH, Hörsaal 5
16:30
Logic for Communicating Automata with Parameterized Topology

ABSTRACT. We introduce parameterized communicating automata (PCA) as a model of systems where finite-state processes communicate through FIFO channels. Unlike classical communicating automata, a given PCA can be run on any network topology of bounded degree. The topology is thus a parameter of the system. We provide various Büchi-Elgot-Trakhtenbrot theorems for PCA, which roughly read as follows: Given a logical specification $\phi$ and a class of topologies $T$, there is a PCA that is equivalent to $\phi$ on all topologies from $T$. We give uniform constructions which allow us to instantiate $T$ with concrete classes such as pipelines, ranked trees, grids, rings, etc. The proofs build on a locality theorem for first-order logic due to Schwentick and Barthelmann, and they exploit concepts from the non-parameterized case, notably a result by Genest, Kuske, and Muscholl.

17:00
Equilibria of Concurrent Games on Event Structures

ABSTRACT. Event structures are a model of concurrency with a natural game-theoretic interpretation, which was initially given in terms of zero-sum games only. This paper studies an extension of such games to include a much wider class of game types and solution concepts. The extension permits modelling scenarios where, for instance, cooperation or independent goal-driven behaviour of computer agents is desired. Specifically, we define non-zero-sum games on event structures, and give full characterisations---existence and completeness results---of the kinds of games, payoff sets, and strategies for which Nash equilibria and subgame perfect Nash equilibria always exist. The game semantics of various logics and systems are outlined to illustrate the power of this framework.
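
For reference, the central solution concept: a strategy profile $\sigma = (\sigma_i)_i$ is a Nash equilibrium if no player benefits from a unilateral deviation, i.e. $u_i(\sigma_i', \sigma_{-i}) \leq u_i(\sigma)$ for every player $i$ and alternative strategy $\sigma_i'$; it is subgame perfect if this property holds in every subgame.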

17:30
Compositional Verification of Termination-Preserving Refinement of Concurrent Programs
SPEAKER: unknown

ABSTRACT. Many verification problems can be reduced to refinement verification. However, existing work on verifying refinement of concurrent programs either fails to prove the preservation of termination, allowing a diverging program to trivially refine any program, or is difficult to apply in compositional thread-local reasoning. In this paper, we first propose a new simulation technique, which establishes termination-preserving refinement and is a congruence with respect to parallel composition. We then give a proof theory for the simulation, which is the first Hoare-style concurrent program logic supporting termination-preserving refinement proofs. We show two key applications of our logic: verifying linearizability and lock-freedom together for fine-grained concurrent objects, and verifying full correctness of optimizations of concurrent algorithms.

18:00
Symmetry in Concurrent Games
SPEAKER: unknown

ABSTRACT. Behavioural symmetry is introduced into concurrent games. It expresses when plays are essentially the same. A characterization of strategies on games with symmetry is provided. This leads to a bicategory of strategies on games with symmetry. Symmetry helps allay the perhaps overly-concrete nature of games and strategies, and shares many mathematical features with homotopy. In the presence of symmetry we can consider monads for which the monad laws do not hold on the nose but do hold up to symmetry. This broadening of the concept of monad has a dramatic effect on the types concurrent games can support and allows us, for example, to recover the replication needed to express and extend traditional game semantics.