VSL 2014: VIENNA SUMMER OF LOGIC 2014
PROGRAM FOR WEDNESDAY, JULY 23RD, 2014

08:45-10:15 Session 159A: Invited talks: Scott; Benveniste (RS)
Location: FH, Zeichensaal 3
08:45
Stochastic Lambda Calculus - An Appeal
SPEAKER: Dana Scott

ABSTRACT. It is shown how the operators in the "graph model" for λ-calculus (which can function as a programming language for Recursive Function Theory) can be expanded to allow for "random combinators". The result then is a model for a basic language for random algorithms. The author has lectured about this model over the last year, but he wants to appeal for APPLICATIONS. The details of the model are so easy and so fundamental that the idea has to be useful.

09:30
Contracts for System Design

ABSTRACT. Contracts are specifications of systems in which the respective roles of component vs. environment are made explicit. Contracts come with a rich algebra, including in particular a game-oriented notion of refinement, a conjunction for fusing different aspects of the specification, and a parallel composition supporting independent development of sub-systems. Contracts can address the functional behavior of embedded systems, as well as other aspects, such as timing, resources, etc. Several frameworks have been proposed in the last decade offering -- sometimes in part -- such features, e.g., de Alfaro-Henzinger Interface Automata, K. Larsen's Modal Automata, and Benveniste et al. Assume/Guarantee contracts. Extensions to deal with time, resources, and probability have been proposed. In this talk we develop a generic framework of contracts (we call it a meta-theory of contracts) that can be instantiated to any of the known concrete frameworks. This generic framework highlights the essence of this concept and allows equipping it with testing and abstraction apparatuses. We then illustrate through an example the use of Modal Interfaces, a specialization of the former, for Requirements Engineering, a demanding area too often dismissed by the community of formal verification. This is collective work by B. Caillaud, D. Nickovic, R. Passerone, J-B. Raclet, Ph. Reinkemeier, A. Sangiovanni-Vincentelli, W. Damm, T. Henzinger, and K.G. Larsen.

08:45-09:45 Session 159B: Partial/Semi-Formal Assurance and Verification (VeriSure)
Location: FH, Hörsaal 2
08:45
Balancing the Formal and Informal in Safety Case Arguments
SPEAKER: unknown

ABSTRACT. In many safety-critical industries, developers and operators are required to construct and present well reasoned arguments that their systems achieve acceptable levels of safety. These arguments (together with supporting evidence) are typically referred to as a "safety case". Safety arguments have historically been communicated through narrative text, often leading to problems of comprehension and communication. Over the last twenty years there has been increasing interest in using structured argumentation notations such as GSN (Goal Structuring Notation) or CAE (Claims-Argument-Evidence) to communicate the structure of the argument. Whilst such arguments are structured, they remain informal. There is increasing interest in exploring how these informal arguments may be modeled in formal logic, potentially opening up the benefits of forms of analysis not possible with informally recorded arguments. This position paper discusses a number of considerations in balancing the roles of informal and formal logic in modeling assurance case arguments.

09:15
Quantification of Verification Progress
SPEAKER: unknown

ABSTRACT. A key disadvantage of software verification relative to other quality assurance techniques, such as testing, is its unpredictable cost. Many person-hours have to be invested before correctness can be proved, and, in contrast to testing, there is no quantifiable evidence that incremental verification effort results in incremental quality improvements. On the other hand, the process of verifying code can be seen as a sort of audit or code walkthrough, so there is an intuition that the process itself improves quality, even before a proof can be computed. In this paper we discuss our first attempts to quantify the incremental quality improvements that are achieved during verification, using a metric called verification coverage.

08:45-10:15 Session 159C: Invited Talk (Guet); Contributed Talk (Stewart) (HSB)
Location: FH, Seminarraum 101A
08:45
Synthetic Systems Biology
SPEAKER: Calin Guet
09:45
FM-Sim : A Hybrid Protocol Simulator of Fluorescence Microscopy Neuroscience Assays with Integrated Bayesian Inference
SPEAKER: Donal Stewart

ABSTRACT. We present FM-Sim, a domain-specific simulator for defining and simulating fluorescence microscopy assays. Experimental protocols as performed in vitro may be defined in the simulator. The defined protocols then interact with a computational model of presynaptic behaviour in rodent central nervous system neurons, allowing simulation of fluorescent responses to varying stimuli. Rate parameters of the model may be obtained using Bayesian inference functions integrated into the simulator, given experimental fluorescence observations of the protocol performed in vitro as training data. These trained protocols allow for predictive in silico modelling of potential experimental outcomes prior to time-consuming and expensive in vitro studies.

08:45-10:15 Session 159D: Superposition I (PAAR)
Location: FH, Hörsaal 7
08:45
A Deductive-Complete Constrained Superposition Calculus for Ground Flat Equational Clauses

ABSTRACT. We describe an algorithm that generates prime implicates of equational clause sets without variables and function symbols. The procedure is based on constrained superposition rules, where constraints are used to store literals that are asserted as additional axioms (or hypotheses) during the proof search. This approach is sound and deductive-complete, and it is more efficient than previous algorithms based on conditional paramodulation. It is also more flexible in the sense that it allows one to restrict the search space by imposing additional properties that the generated implicates should satisfy (e.g., to ensure relevance).

09:15
Invited talk: Hierarchic Superposition Revisited
SPEAKER: Uwe Waldmann

ABSTRACT. In 1994, Bachmair, Ganzinger, and Waldmann introduced the hierarchical superposition calculus as a generalization of the superposition calculus for black-box style theory reasoning. Their calculus works in a framework of hierarchic specifications. It tries to prove the unsatisfiability of a set of clauses with respect to interpretations that extend a background model such as the integers with linear arithmetic conservatively, that is, without identifying distinct elements of old sorts ("confusion") and without adding new elements to old sorts ("junk"). We show how the calculus can be improved, report on practical experiments, and present a new completeness result for non-compact classes of background models (i.e., linear integer or rational arithmetic restricted to standard models).

08:45-10:15 Session 159E (GeTFun)
Location: FH, Hörsaal 4
08:45
From display logic to nested sequents via residuated frames
SPEAKER: Nick Galatos
09:30
First-order non-classical logics: an order-based approach
SPEAKER: Petr Cintula

ABSTRACT. It is well known, already in classical logic, that propositional logics have a limited expressive power which can fail to provide satisfactory means for modelling and analysis in certain contexts. For this reason it is highly desirable to introduce, for every logic, a predicate extension allowing one to distinguish between properties and objects. There have been systematic studies of predicate versions for some families of non-classical logics, such as super-intuitionistic, modal, or fuzzy logics. In other cases there are numerous competing partial approaches.

In this talk I will present a unified approach allowing one to build a predicate version for (almost) any non-classical logic, as long as its algebraic semantics features some natural ordering. Following the pioneering works of Mostowski, Rasiowa, and Sikorski, one interprets quantifiers using infima/suprema with respect to this ordering. I will provide a complete axiomatization of this intended semantics (generalizing Henkin's proof for classical logic), formulate Skolem and Herbrand theorems in a restricted setting, survey known results from the model theory of these logics, and present further possible research directions in this fascinating area.

08:45-10:15 Session 159F: Invited Talk and Contributed Talk (ARW-DT, VERIFY and WING)
Location: FH, Hörsaal 3
08:45
Chasing the Perfect Specification
09:45
Typed First-Order Logic
SPEAKER: Peter Schmitt

ABSTRACT. This paper contributes to the theory of typed first-order logic. We present a sound and complete axiomatization lifting restrictions imposed by previous results. We resolve an issue with modular reasoning. As a third contribution, this paper provides complete axiomatizations for the type predicates $instance_{T}$, $exactInstance_{T}$, and functions $cast_{T}$ indispensable for reasoning about object-oriented programming languages.

08:55-10:15 Session 160A: Introduction & Invited Talk 1 (SYNT)
Location: FH, Seminarraum 101B
08:55
Automating Deductive Synthesis with Leon (Invited Talk)
SPEAKER: Viktor Kuncak
08:55-09:00 Session 160B: Opening (FRIDA)
Location: FH, Seminarraum 104
09:00-10:15 Session 161A: KR Invited Talk: Tony Cohn (Knowledge Representation meets Computer Vision: From Pixels to Symbolic Activity Descriptions) (KR)
Location: EI, EI 7
09:00
Knowledge Representation Meets Computer Vision: From Pixels to Symbolic Activity Descriptions
SPEAKER: Tony Cohn

ABSTRACT. While the fields of KR and computer vision have diverged since the early days of AI, there have been recent moves to re-integrate these fields. I will talk about some of this work, focusing on research from Leeds on building models of video activity from video input. I will present techniques, both supervised and unsupervised, for learning the spatiotemporal structure of tasks and events from video or other sensor data, particularly in the case of scenarios with concurrent activities. The representations exploit qualitative spatio-temporal relations which have been an active area of research in KR for a number of years. I will also talk about the problem of robustly computing symbolic descriptions from noisy video data. Finally, I will show how objects can be functionally categorised according to their spatiotemporal behaviours.

09:00-10:15 Session 161B: Invited talk: Joseph Halpern (FRIDA)
Location: FH, Seminarraum 104
09:00
Beyond Nash Equilibrium: Solution Concepts for the 21st Century

ABSTRACT. Nash equilibrium is the most commonly-used notion of equilibrium in game theory.  However, it suffers from numerous problems.  Some are well known in the game theory community; for example, the Nash equilibrium of repeated prisoner's dilemma is neither normatively nor descriptively reasonable. However, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate "faulty" or "unexpected" behavior), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game.  In this talk, I discuss solution concepts that try to address these shortcomings of Nash equilibrium.  This talk represents joint work with various collaborators, including Ittai Abraham, Danny Dolev, Rica Gonen, Rafael Pass, and Leandro Rego.  No background in game theory will be presumed.

09:00-09:15 Session 161C: Opening (PRUV)
Location: FH, Seminarraum 107
09:25-10:15 Session 164: Theory (ASPOCP)
Location: FH, Seminarraum 138A
09:25
Supported Semantics for Modular Systems

ABSTRACT. Modular systems offer a language-independent declarative framework in which modules from different languages are combined to describe a conceptually and/or computationally complex problem. The original semantics developed to combine modules in a modular system is a model-theoretical semantics with close ties to relational algebraic operations.

In this paper, we introduce a new semantics for modular systems, called supported model semantics, that extends the original model-theoretic semantics of modular systems and captures non-monotonic reasoning at the level of the modular system (rather than the level of individual modules). Moreover, we also use supported model semantics for modular systems to compare the expressiveness of modular systems with that of heterogeneous non-monotonic multi-context systems [BE'07]. We show that, under very few conditions, all multi-context systems can be translated into equivalent modular systems. Our result is true for both major semantics of multi-context systems, i.e., the equilibrium semantics and the grounded equilibrium semantics. Thus, we show that modular systems under supported model semantics generalize multi-context systems under either of these major semantics.

09:50
Infinitary Equilibrium Logic
SPEAKER: unknown

ABSTRACT. Strong equivalence of logic programs is an important concept in the theory of answer set programming. Equilibrium logic was used to show that propositional formulas are strongly equivalent if and only if they are equivalent in the logic of here-and-there. We extend equilibrium logic to formulas with infinitely long conjunctions and disjunctions, define and axiomatize an infinitary counterpart of the logic of here-and-there, and show that the theorem on strong equivalence holds in the infinitary case as well.

10:15-10:45 Coffee Break
10:45-13:00 Session 166A: Description Logic 3 (KR)
Location: EI, EI 7
10:45
Stable Model Semantics for Guarded Existential Rules and Description Logics
SPEAKER: unknown

ABSTRACT. In this paper, we prove the decidability of stable model reasoning with guarded existential rules where rule bodies may contain negated atoms, as well as for reasoning using rules of the corresponding fragment of guarded Datalog+/-. In addition, we provide complexity results for these reasoning tasks (including conjunctive query answering) for a number of settings. Given that several lightweight description logics (in particular, DL-Lite, ELH, and OWL QL) can be directly translated to guarded Datalog+/-, we herewith obtain a direct stable model semantics for several description logics (DLs). This allows us to use negation (or complementation) directly in the axioms of a TBox, and to obtain a clear and natural semantics for such theories via their simple translation into an answer set program. We refer to this approach as the "direct" stable model semantics, in contrast to previous hybrid approaches that combine logic programming primitives with DL constructs, where the latter can be embedded into an answer set logic program, similar to generalized quantifiers (for example, the so-called dl-programs by Eiter et al. KR 2004; AIJ 2008).

11:15
Practical Uniform Interpolation and Forgetting for ALC TBoxes with Applications to Logical Difference
SPEAKER: unknown

ABSTRACT. We develop a clausal resolution-based approach for computing uniform interpolants of TBoxes formulated in the description logic ALC when such uniform interpolants exist. We also present an experimental evaluation of our approach and of its application to the logical difference problem for real-life ALC ontologies. Our results indicate that in many practical cases uniform interpolants exist and that they can be computed with the presented algorithm.

11:45
Nominal Schemas in Description Logics: Complexities Clarified
SPEAKER: unknown

ABSTRACT. Nominal schemas extend description logics (DLs) with a restricted form of variables, thus integrating rule-like expressive power into standard DLs. They are also one of the most recently introduced DL features, and in spite of many works on algorithms and implementations, almost nothing is known about their computational complexity and expressivity. We close this gap by providing a comprehensive analysis of the reasoning complexities of a wide range of DLs---from EL to SROIQ---extended with nominal schemas. Both combined and data complexities increase by one exponential in most cases, with the one previously known case of SROIQ being the main exception. Our proofs employ general modeling techniques that exploit the power of nominal schemas to succinctly represent many axioms, and which can also be applied to study DLs beyond those we consider. To further improve our understanding of nominal schemas, we also investigate their semantics, traditionally based on finite grounding, and show that it can be extended to infinite sets of individuals without affecting reasoning complexities. We argue that this might be a more suitable semantics when considering entailments of axioms with nominal schemas.

12:15
Decidable Gödel Description Logics without the Finitely-Valued Model Property

ABSTRACT. In the last few years there has been a large effort for analysing the computational properties of reasoning in fuzzy Description Logics. This has led to a number of papers studying the complexity of these logics, depending on their chosen semantics. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy DLs w.r.t. witnessed models over the Gödel t-norm. We show that in the logic G-IALC, reasoning cannot be restricted to finitely-valued models in general. Despite this negative result, we also show that all the standard reasoning problems can be solved in this logic in exponential time, matching the complexity of reasoning in classical ALC.

10:45-13:00 Session 166B: Reasoning about Actions and Processes 2 (KR)
Location: EI, EI 9
10:45
A First-Order Semantics for Golog and ConGolog under a Second-Order Induction Axiom for Situations
SPEAKER: Fangzhen Lin

ABSTRACT. Golog and ConGolog are languages defined in the situation calculus for cognitive robotics. Given a Golog program P, its semantics is defined by a macro Do(P,s,s') that expands to a logical sentence that captures the conditions under which performing P in s can terminate in s'. A similar macro is defined for ConGolog programs. In general, the logical sentences that these macros expand to are second-order, and in the case of ConGolog, may involve quantification over programs. In this paper, we show that by making use of the foundational axioms in the situation calculus, these macro expressions can actually be defined using first-order sentences.

11:15
How To Progress Beliefs in Continuous Domains
SPEAKER: Vaishak Belle

ABSTRACT. When Lin and Reiter introduced the progression of basic action theories in the situation calculus, they were essentially motivated by long-lived robotic agents functioning over thousands of actions. However, their account does not deal with probabilistic uncertainty about the initial situation nor with effector or sensor noise, as needed in robotic applications. In this paper, we obtain results on how to progress continuous degrees of belief against continuous effector and sensor noise in a semantically correct fashion. Most significantly, and perhaps surprisingly, we identify conditions under which our account is not only as efficient as the filtering mechanisms commonly used in robotics, but considerably more general.

11:45
Transforming Situation Calculus Action Theories for Optimised Reasoning
SPEAKER: unknown

ABSTRACT. Among the most frequent reasoning tasks in the situation calculus are projection queries that query the truth of conditions in a future state of affairs. However, in long running action sequences, solving the projection problem is complex. The main contribution of this work is a new technique which allows the length of the action sequences to be reduced by reordering independent actions and removing dominated actions, while maintaining semantic equivalence with respect to the original action theory. This transformation allows for the removal of actions that are problematic with respect to progression, allowing for periodical update of the action theory to reflect the current state of affairs. We provide the logical framework for the general case and give specific methods for two important classes of action theories. The work provides the basis for handling more expressive cases, such as the reordering of sensing actions in order to delay progression, and forms an important step towards facilitating ongoing planning and reasoning by long-running agents. It provides a mechanism for minimising the need for keeping the action history while appealing to both regression and progression.

12:15
Forgetting in Action

ABSTRACT. In this paper we develop a general framework that allows for both knowledge acquisition and forgetting in the Situation Calculus. Based on the Scherl and Levesque possible worlds approach to knowledge in the Situation Calculus, we allow for both sensing as well as explicit forgetting actions to take place. This model of forgetting is then compared to existing frameworks, particularly showing that forgetting is well-behaved with respect to the contraction operator of the well-known AGM theory of belief revision.

10:45-11:35 Session 166C: Semantics (ASPOCP)
Location: FH, Seminarraum 138A
10:45
A Refinement of the Language of Epistemic Specifications
SPEAKER: unknown

ABSTRACT. In this paper we present a new version of the language of Epistemic Specifications. The goal is to simplify and improve the intuitive and formal semantics. We describe an algorithm for computing solutions of programs written in this new version of the language. The new semantics is illustrated by a number of examples, including an Epistemic Specifications-based framework for conformant planning.

11:10
Epistemic Logic Programs with Sorts
SPEAKER: unknown

ABSTRACT. An epistemic logic program is a program written in the language of Epistemic Specifications, an extension of the language of answer set programming (ASP) that allows for introspective reasoning through the use of modal operators K and M. The language of Epistemic Specifications was introduced by Gelfond in the early 1990s after observing the need for a more powerful form of introspective reasoning than that offered by ASP alone. The semantics of an epistemic logic program was given in terms of its world view--a collection of sets of literals (belief sets), analogous to the answer sets of an ASP program. In 2011, Gelfond proposed a change to the semantics in a preliminary attempt to avoid certain unintended world views. Gelfond, Kahl, Watson, and Zhang continued this work in the hopes of finding a semantics that was satisfactory with respect to intuition and modeling of problems for which the added power of Epistemic Specifications showed promise.

Concurrent with this latter work was an effort by Balai, Gelfond, and Zhang to enhance ASP by providing a framework for specifying sorts for the parameters of predicates in a language they called SPARC. This promoted better program design, helped to eliminate certain common errors (e.g., misspellings), and encouraged development of rules specific to a particular domain without the concern for rule safety that has been associated with variable terms by popular ASP solvers (e.g., clingo, DLV).

In this paper, we combine these recent efforts by providing within the language of Epistemic Specifications a framework for specifying predicate parameter domains. We call a program written in this new language an epistemic logic program with sorts (ELPS). In addition to the benefits a sorted signature provides to the programmer, it also allows for a straightforward algorithm for computing world views. We describe such an algorithm--one that can take advantage of existing state-of-the-art ASP solvers--and give a synopsis of test results from our implementation. We conclude with a discussion of related work and a brief summary.

10:45-13:00 Session 166D: Invited talks: Hoare; Grumberg; Kwiatkowska; Maoz (RS)
Location: FH, Zeichensaal 3
10:45
A Geometric Model for Concurrent Programming
SPEAKER: Tony Hoare

ABSTRACT. Diagrams (aka. graphs, charts, nets) are widely used to describe the behaviour of physical systems, including computer clusters under control of a program. Their behaviour is modelled as a non-metric plane geometry, with coordinates representing space and time. Points represent events, arrows represent messages and state, and lines represent objects and threads. The geometry is governed by a simple set of axioms, which are yet powerful enough to prove correctness of geometric constructions. Perhaps they extend to more general programs written in a concurrent programming language.

11:30
Automated Abstraction-Refinement for the Verification of Behavioral UML Models
SPEAKER: Orna Grumberg
12:00
Probabilistic model checking at the nanoscale: from molecular signalling to molecular walkers

ABSTRACT. Probabilistic model checking is an automated method for verifying probabilistic models against temporal logic specifications. This lecture will give an overview of the role that probabilistic model checking, and particularly the probabilistic model checker PRISM, can play when modelling and analysing molecular networks at the nanoscale. In particular, the lecture will discuss how the technique has been used to advance scientific discovery through 'in silico' experiments for molecular signalling networks; debugging of DNA circuits; and performance and reliability analysis for molecular walkers.

http://www.prismmodelchecker.org ; http://www.veriware.org/dna.php

12:30
A Short Story on Scenarios
SPEAKER: Shahar Maoz

ABSTRACT. TBA

10:45-12:45 Session 166E: Contributed Talks 1 (SYNT)
Location: FH, Seminarraum 101B
10:45
Synthesis of a simple self stabilizing system
SPEAKER: unknown

ABSTRACT. With the increasing importance of distributed systems as a computing paradigm, a systematic approach to the design of distributed systems is needed. Although the area of formal verification has made enormous advances towards this goal, the resulting functionalities are limited to detecting problems in a particular design. We propose a simple template-based approach to the computer-aided synthesis of distributed systems, based on leveraging the well-known technique of bounded model checking in the synthesis setting.

11:15
Program Synthesis and Linear Operator Semantics

ABSTRACT. For deterministic and probabilistic programs we investigate the problem of program synthesis and program optimisation (with respect to non-functional properties) in the general setting of global optimisation. This approach is based on the representation of the semantics of programs and program fragments in terms of linear operators, i.e. as matrices. We exploit in particular the fact that we can automatically generate the representation of the semantics of elementary blocks. These can then be used to compositionally assemble the semantics of a whole program, i.e. the generator of the corresponding Discrete Time Markov Chain (DTMC). We also utilise a generalised version of Abstract Interpretation suitable for this linear algebraic or functional analytical framework in order to formulate semantical constraints (invariants) and optimisation objectives (for example, performance requirements).

11:45
How to Handle Assumptions in Synthesis
SPEAKER: unknown

ABSTRACT. The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

12:15
Symblicit algorithms for optimal strategy synthesis in monotonic Markov decision processes
SPEAKER: Aaron Bohy

ABSTRACT. When treating Markov decision processes (MDPs) with large state spaces, using explicit representations quickly becomes unfeasible. Lately, Wimmer et al. have proposed a so-called symblicit algorithm for the synthesis of optimal strategies in MDPs, in the quantitative setting of expected mean-payoff. This algorithm, based on the strategy iteration algorithm of Howard and Veinott, efficiently combines symbolic and explicit data structures, and uses binary decision diagrams as symbolic representation. The aim of this paper is to show that the new data structure of pseudo-antichains (an extension of antichains) provides another interesting alternative, especially for the class of monotonic MDPs. We design efficient pseudo-antichain based symblicit algorithms (with open source implementations) for two quantitative settings: the expected mean-payoff and the stochastic shortest path. For two practical applications coming from automated planning and LTL synthesis, we report promising experimental results w.r.t. both the run time and the memory consumption.

10:45-13:00 Session 166F: Vampire Tutorial and Regular Talks (Vampire)
Location: FH, Seminarraum 134
10:45
First-Order Theorem Proving with Vampire

ABSTRACT. The first version of the Vampire theorem prover was implemented 20 years ago, in 1994. Vampire has always been known for its fast and efficient implementation; for example, it has won 30 world championship titles in first-order theorem proving.

In this talk we will first briefly overview the history of Vampire. Next, we address current results and trends in Vampire, with special focus on applications of reasoning in program analysis and verification. The current version of Vampire is faster than any of its previous versions and includes several new, non-standard features such as SAT/SMT solving, reasoning with theories, program analysis, consequence elimination, invariant generation, interpolation, and reasoning with very large theories.

12:00
Proofs in Vampire

ABSTRACT. In various applications of theorem proving, users not only check whether a theorem holds but also need a proof of the theorem, e.g., for spurious counterexample analysis in verification.

Proofs can be presented to a user in various formats, and not all proof formats are conducive to the needs of users. A user may prefer one proof system over another; one may prefer concise proofs, while another may like detailed proofs that allow inexpensive proof checking. Furthermore, there is no commonly accepted API or file format for proof output. This situation allows provers to produce proofs that are mutually incompatible and do not match the needs of users. Producing a proof after a run of an efficient and complex theorem prover is not a straightforward task. For example, provers may apply various simplifications to input formulas before running the core proving algorithms; such simplifications should be added as proof steps in the generated proofs. In the long and arduous development process of a theorem prover, developers may forget to provide proper annotations for all the deduction steps needed to generate a sufficiently detailed proof.

Currently, a Vampire proof only contains information about the antecedents of the deduced clauses and the proof rule that was used to produce the clauses. This proof format leaves out various details. For example, an application of the superposition rule does not specify the terms that were unified to obtain the consequences. We propose to develop a comprehensive proof format and API that allows proofs at different levels of granularity and is easy to use.

12:30
Playing with AVATAR
SPEAKER: Giles Reger

ABSTRACT. This paper presents and explores experimental results examining the usage of AVATAR (Advanced Vampire Architecture for Theories and Resolution). This architecture introduces a new method for splitting in first-order resolution and superposition theorem provers that makes use of a SAT (or SMT) solver to make splitting decisions. The architecture (as implemented in Vampire) has many options and components that can be varied and this paper explores the effects of these variations on the performance of the prover.

10:45-11:45 Session 166G: Making Hard Assurance Problems Feasible (VeriSure)
Location: FH, Hörsaal 2
10:45
Introducing Semi-Formal Specifications into a Distributed Development Process
SPEAKER: unknown

ABSTRACT. In Europe, the development of safety-critical rail IT is regulated by the Common Safety Method (CSM 352) and CENELEC standards. The CSM requires the railway undertaking to unambiguously specify requirements on equipment. The infrastructure branch of German Railways (DB Netz) has chosen to base the specifications of interlockings and related equipment, such as field element controllers, on SysML statecharts. Although these specifications will be more consistent and precise than the text-based ones in use before, this approach faces opposition from manufacturers who find it difficult to justify the correctness of their implementation.

This paper reports on the approach, its motivation and its practical difficulties, and it endeavours to identify its merits and weaknesses.

11:15
Witnessing an SSA Transformation

ABSTRACT. The correctness of program compilation is important to assuring the correctness of an overall software system. In this work, we describe an effort to verify the compiler transformation which turns memory references into corresponding register references. It is one of the most important transformations in modern compilers, which rely heavily on the SSA (static single assignment) form produced by this transformation. Formally verifying the algorithm behind this transformation is thought to be a difficult task. Verifying the actual code, as implemented in a production compiler, is currently infeasible. We describe our experience with an alternative verification strategy, which is based on generating and checking "witnesses'' for each instance of the transformation. This approach enormously simplifies the verification task, primarily because it does not require showing that the transformation code is correct.

10:45-13:00 Session 166H: Tutorial (Donze), Collaboration Success Stories (Fanchon, de Jong, Halasz) (HSB)
Location: FH, Seminarraum 101A
10:45
Tutorial: Simulation-based Parameter Synthesis in Systems Biology
11:30
Collaboration success stories: Modeling Iron Homeostasis in Mammalian Cells (with J.M. Moulis)
SPEAKER: Eric Fanchon
12:00
Collaboration success stories: Control of Gene Expression in E.coli (with Hans Geiselman)
SPEAKER: Hidde de Jong
12:30
Collaboration success stories: Membrane-bound Receptor Imaging, Data Analysis and Model Building (with J.S. Edwards)
10:45-13:00 Session 166I: Meta-methods for theorem proving (PAAR)
Location: FH, Hörsaal 7
10:45
Automated Theorem Proving using the TPTP Process Instruction Language

ABSTRACT. The TPTP (Thousands of Problems for Theorem Provers) World is a well established infrastructure for Automated Theorem Proving (ATP). In the context of the TPTP World, the TPTP Process Instruction (TPI) language provides capabilities to input, output and organize logical formulae, and control the execution of ATP systems. This paper reviews the TPI language, describes a shell interpreter for the language, and demonstrates their use in theorem proving.

11:15
The Efficiency of Automated Theorem Proving by Translation to Less Expressive Logics

ABSTRACT. Many Automated Theorem Prover (ATP) systems for different logics, and translators for translating different logics from one to another, have been developed and are now available. Some logics are more expressive than others, and it is easier to express problems in those logics. On the other hand, the ATP systems for less expressive logics have been under development for many years, and are more powerful and reliable. There is a trade-off between the expressivity of a logic and the power and reliability of the available ATP systems. Translators and ATP systems can be combined to try to solve a problem. In this research, an experiment has been carried out to compare the performance of different combinations of translators and ATP systems.

11:45
Machine Learner for Automated Reasoning 0.4 and 0.5
SPEAKER: Josef Urban

ABSTRACT. Machine Learner for Automated Reasoning (MaLARea) is a learning and reasoning system for proving in large formal libraries where thousands of theorems are available when attacking a new conjecture, and a large number of related problems and proofs can be used to learn specific theorem-proving knowledge. The latest version of the system won the 2013 CASC LTB competition by a large margin. This paper describes the motivation behind the methods used in MaLARea, discusses the general approach and the issues arising in the evaluation of such a system, and describes the Mizar@Turing100 and CASC'24 versions of MaLARea.

12:15
BliStr: The Blind Strategymaker
SPEAKER: Josef Urban

ABSTRACT. BliStr is a system that automatically develops strong targeted theorem-proving strategies for the E prover on a large set of diverse problems. The main idea is to interleave (i) iterated low-timelimit local search for new strategies on small sets of similar easy problems with (ii) higher-timelimit evaluation of the new strategies on all problems. The accumulated results of the global higher-timelimit runs are used to define and evolve the notion of "similar easy problems", and to control the evolution of the next strategy. The technique significantly strengthened the set of E strategies used by the MaLARea, E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the Mizar division. Similar improvement was obtained on the problems created from the Flyspeck corpus.

10:45-13:00 Session 166J: Contributed Talks 1 (FRIDA)
Location: FH, Seminarraum 104
10:45
Towards binary circuit models that faithfully reflect physical (un)solvability

ABSTRACT. Binary circuit models are high-level abstractions intended to reflect the behavior of digital circuits, while restricting signal values to 0 and 1. Such models play an important role in assessing the correctness and performance characteristics of digital circuit designs: (i) modern circuit design relies on fast digital timing simulation tools and, hence, on accurate binary-valued circuit models that faithfully model signal propagation, even throughout a complex design, and (ii) binary circuit models provide a level of abstraction that is amenable to formal analysis.

Of particular importance is the ability to trace glitches and other short pulses, as their presence/absence may affect a circuit's correctness and its performance characteristics.

We show that no binary-valued circuit model proposed so far, including the two most commonly used pure and inertial delay channels, faithfully captures glitch propagation: for the simple Short-Pulse Filtration (SPF) problem, which is related to a circuit's ability to suppress a single glitch, we show that the quite broad class of bounded single-history channels contradicts either the unsolvability of SPF in bounded time or the solvability of SPF in unbounded time in physical circuits.

We then propose a class of binary circuit models that do not suffer from this deficiency: like bounded single-history channels, our involution channels involve delays that may depend on the time of the previous output transition. Their characteristic property is that the delay functions are based on involutions, i.e., functions that are their own inverse.

We prove that, in sharp contrast to what is possible with bounded single-history channels, SPF cannot be solved in bounded time, whereas it is easy to provide an unbounded SPF implementation. It hence follows that binary-valued circuit models based on involution channels allow SPF to be solved precisely when this is possible in physical circuits.

This renders them a promising candidate both for simulation and for the formal analysis of circuits.

(Part of the results presented in this talk were published at ASYNC'13.)
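As a minimal numerical illustration of the defining property of an involution (our sketch, not from the talk; the concrete delay functions used in involution channels are more elaborate):

```python
# Minimal sketch (illustration only): an involution is a function that is
# its own inverse, f(f(t)) = t.  The simplest real-valued examples are the
# reflections f(t) = c - t.
def make_involution(c):
    """Return the reflection t -> c - t, an involution for any constant c."""
    return lambda t: c - t

f = make_involution(3.0)
# Defining property: applying f twice gives back the original value.
assert all(f(f(t)) == t for t in [-2.0, 0.0, 1.5, 10.0])
```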

11:15
Mechanical Verification of a Constructive Proof for FLP

ABSTRACT. We present a formalization of Völzer's paper "A constructive proof for FLP" using the interactive theorem prover Isabelle/HOL. We focus on the main differences between our proof and Völzer's and summarize necessary design decisions in our formal approach.

11:45
Having SPASS with Pastry and TLA+
SPEAKER: Noran Azmy

ABSTRACT. Peer-to-peer protocols are becoming more and more popular for modern internet applications. While such protocols typically come with certain correctness and performance guarantees, verification attempts using formal methods invariably discover inconsistencies. We are interested in using the SPASS theorem prover for the verification of peer-to-peer protocols that are modeled in the specification language TLA+. In addition to the specification language, TLA+ comes with its own verification tools: an interactive proof written in the TLA+ proof language consists of steps, or proof obligations, that are processed by TLA+'s own proof manager and passed to one of several back-end provers such as Zenon or Isabelle/TLA. In 2013, Tianxiang Lu took the first steps in the case of the protocol Pastry, where his attempt at formal verification revealed that the full protocol is incorrect with respect to a safety property he calls "correct delivery". His final proof of correctness is for a restricted version of the protocol and seriously lacks automation, due to the inability of current back-end provers to tackle proof obligations from this class of problems, which typically contain a mixture of uninterpreted functions, modular integer arithmetic, and set theory with the cardinality operator.

Our ultimate goal is to create a SPASS back end for TLA+ that is better able to solve this kind of proof obligation. This includes (1) the development of an efficient and effective translation from the strongly untyped, higher-order TLA+ to a typed, first-order input language for SPASS, and (2) incorporating theory reasoning, typed reasoning, and other necessary techniques into SPASS itself.

In this paper, we give the first insights from running the current version of SPASS on the proof obligations from Lu's proof using a prototype, untyped translation. We also devise a modification to the current translation that achieves an impressive improvement in the way SPASS deals with the particularly problematic CHOOSE operator of TLA+.

12:15
Concurrent Data Structures: Semantics and (Quantitative) Relaxation
SPEAKER: Ana Sokolova

ABSTRACT. tbd

10:45-13:00 Session 166K (GeTFun)
Location: FH, Hörsaal 4
10:45
Broadly truth-functional logics through classical lenses
SPEAKER: João Marcos

ABSTRACT. With different motivations and goals, Roman Suszko, Dana Scott and Newton da Costa have all suggested in the 70s that Tarskian logics, however many-valued they might initially appear to be, could always be characterized with the use of only two _logical values_. A programmatic effort to show how such a presentation could be attained for increasingly larger classes of logics, while at the same time avoiding serious losses of nice computational properties, was carefully undertaken in the last decade. In this talk I will show how bivalent effective semantics and uniform classic-like analytic tableaux may be obtained for logics characterized by way of finite-valued nondeterministic semantics.

The latest developments concern ongoing work with Carlos Caleiro and Carlos Silva.

11:30
Natural deduction for non-deterministic finite-valued logics
12:00
The procedural understanding of meaning and compositionality in formal semantics
12:30
Semi-BCI-algebras

ABSTRACT. BCI-algebras are related to formal systems of Fuzzy Logic. They model the so-called BCI logics, which are based on the Curry combinators (B) λxyz.x(yz), (C) λxyz.xzy and (I) λx.x. They are algebras of the form <A, ⊖, ⊥> which satisfy the following properties:

BCI-1 ((x ⊖ y) ⊖ (x ⊖ z)) ⊖ (z ⊖ y) = ⊥
BCI-2 (x ⊖ (x ⊖ y)) ⊖ y = ⊥
BCI-3 x ⊖ x = ⊥
BCI-4 x ⊖ y = ⊥ and y ⊖ x = ⊥ ⇒ x = y.

A BCI-algebra is called BCK-algebra, whenever it also satisfies:

BCK-1 ⊥ ⊖ x = ⊥

On any BCI-algebra it is possible to define a partial order “≤” as:

“x ≤ y iff x ⊖ y =⊥”.

In this talk we will propose a generalization of such algebras, in order to capture some phenomena of Łukasiewicz interval implications.
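As a quick sanity check of the axioms above (our illustration, not part of the abstract), the integers with x ⊖ y := x − y and ⊥ := 0 are a standard example of a BCI-algebra that is not a BCK-algebra:

```python
# Numerical check (illustration only): the integers with x ⊖ y := x - y and
# ⊥ := 0 satisfy BCI-1..4 but violate BCK-1 (e.g., 0 - 1 != 0).
from itertools import product

BOT = 0                      # the constant ⊥

def diff(x, y):              # the operation x ⊖ y
    return x - y

def is_bci(sample):
    """Check BCI-1..3 on all triples from `sample` (BCI-4 is immediate here)."""
    for x, y, z in product(sample, repeat=3):
        if diff(diff(diff(x, y), diff(x, z)), diff(z, y)) != BOT:  # BCI-1
            return False
        if diff(diff(x, diff(x, y)), y) != BOT:                    # BCI-2
            return False
    return all(diff(x, x) == BOT for x in sample)                  # BCI-3

sample = range(-5, 6)
print(is_bci(sample))                              # True
print(all(diff(BOT, x) == BOT for x in sample))    # False: BCK-1 fails
```

The induced partial order "x ≤ y iff x ⊖ y = ⊥" is then just the usual order on the integers.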

10:45-12:15 Session 166L: Contributed Talks (ARW-DT, VERIFY and WING)
Location: FH, Hörsaal 3
10:45
Reasoning About Vote Counting Schemes Using Light-weight and Heavy-weight Methods
SPEAKER: unknown

ABSTRACT. We compare and contrast our experiences in specifying, implementing and verifying the monotonicity property of a simple plurality voting scheme using modern light-weight and heavy-weight verification tools.

11:15
Introducing a Sound Deductive Compilation Approach
SPEAKER: unknown

ABSTRACT. Compiler verification is difficult and expensive. Instead of formally verifying a compiler, we introduce a sound deductive compilation approach, whereby verified bytecode is generated based on symbolic execution of source code embedded in a program logic. The generated bytecode is weakly bisimilar to the original source code relative to a set of observable locations. The framework is instantiated for Java source and bytecode. The compilation process is fully automatic and first-order solvers are employed for bytecode optimization.

11:45
Verifying safety properties of Artificial General Intelligence: The ultimate safety-critical system?

ABSTRACT. In this paper, we discuss the challenge of verifying a system which does not exist today, but may be of the utmost importance in the future: an Artificial General Intelligence (AGI) as smart or smarter than human beings, and capable of improving itself further.

A self-improving AGI with the potential to become more powerful than any human would be a system we'd need to design for safety from the ground up. We argue that this will lead to foreseeable technical difficulties which we can and should start working on today, despite the fact that we do not yet know how to build an AGI. In this paper, we focus on one particular such difficulty: We could require an AGI to exhibit a formal proof that each of its actions and self-rewrites satisfy a certain safety property, and otherwise switch to a baseline controller (a variation on the well-known simplex architecture), but straightforward ways of implementing this for a self-rewriting AGI run into problems related to Gödel's incompleteness theorems. We review several potential solutions to this problem and discuss reasons to think that it is likely to be an issue for a wide variety of potential approaches to constructing a safe self-improving AGI.

10:45-12:15 Session 166M: Vagueness to some degree (PRUV)
Location: FH, Seminarraum 107
10:45
In Which Sense Is Fuzzy Logic a Logic for Vagueness?

ABSTRACT. The problem of artificial precision demonstrates the inadequacy of naive fuzzy semantics for vagueness. This problem is, nevertheless, satisfactorily remedied by fuzzy plurivaluationism; i.e., by taking a class of fuzzy models (a fuzzy plurivaluation), instead of a single fuzzy model, for the semantics of a vague concept. Such a fuzzy plurivaluation in turn represents the class of models of a formal theory, preferably formulated in first- or higher-order fuzzy logic, which formalizes the meaning postulates of the vague concepts involved. The consequence relation of formal fuzzy logic then corresponds to the (super)truth of propositions involving these vague concepts. An adequate formal treatment of vague propositions by means of fuzzy logic thus consists in derivations in the formal calculus of a suitable fuzzy logic, while the particular truth degrees found in engineering applications actually pertain to artificially precisified (so no longer vague) gradual notions.

11:15
Stable Models of Fuzzy Propositional Formulas
SPEAKER: Joohyung Lee

ABSTRACT. We introduce the stable model semantics for fuzzy propositional formulas, which generalizes both fuzzy propositional logic and the stable model semantics of classical propositional formulas. Combining the advantages of both formalisms, the introduced language allows highly configurable default reasoning involving fuzzy truth values. We show that several properties of Boolean stable models are naturally extended to this formalism, and discuss how it is related to other approaches to combining fuzzy logic and the stable model semantics.

11:45
Towards a Logic of Dilation
SPEAKER: unknown

ABSTRACT. We investigate the notion of dilation of a propositional theory based on neighbourhoods in a generalized approximation space. We take both a semantic and a syntactic approach in order to define a suitable notion of theory dilation in the context of approximate reasoning on the one hand, and a generalized notion of forgetting in propositional logic on the other hand. We place our work in the context of existing theories of approximation spaces and forgetting, and show that neighbourhoods obtained by combining collective and selective dilation provide a suitable semantic framework within which to reason computationally with uncertainty in a classical setting.

10:50-11:00 Session 167: Opening (ARQNL)
Location: FH, Seminarraum 138C
10:50
ARQNL Workshop - Opening

ABSTRACT. The ARQNL Workshop provides a forum for researchers to present and discuss work on the development of proof calculi, automated theorem proving systems and model finders for all sorts of quantified non-classical logics.

The ARQNL Workshop proceedings can be downloaded at http://www.iltp.de/ARQNL-2014.

11:00-13:00 Session 168: First-Order Modal Logics, Quantified Dynamic and Temporal Logic, Problem Libraries (ARQNL)
Location: FH, Seminarraum 138C
11:00
Coalescing: Syntactic Abstraction for Reasoning in First-Order Modal Logics
SPEAKER: unknown

ABSTRACT. We present a syntactic abstraction method to reason about first-order modal logics by using theorem provers for standard first-order logic and for propositional modal logic.

11:30
A Logic for Verifying Metric Temporal Properties in Distributed Hybrid Systems
SPEAKER: Ping Hou

ABSTRACT. We introduce a logic for specifying and verifying metric temporal properties of distributed hybrid systems that combines quantified differential dynamic logic (QdL) for reasoning about the possible behavior of distributed hybrid systems with metric temporal logic (MTL) for reasoning about the metric temporal behavior during their operation. For our combined logic, we generalize the semantics of dynamic modalities to refer to hybrid traces instead of final states. Further, we prove that this gives a conservative extension of QdL for distributed hybrid systems. On this basis, we provide a modular verification calculus that reduces correctness of metric temporal behavior of distributed hybrid systems to generic temporal reasoning and then non-temporal reasoning, and prove that we obtain a complete axiomatization relative to the non-temporal base logic QdL.

12:00
Problem Libraries for Non-Classical Logics
SPEAKER: Jens Otten

ABSTRACT. Problem libraries for automated theorem proving (ATP) systems play a crucial role when developing, testing, benchmarking and evaluating ATP systems for classical and non-classical logics. We provide an overview of existing problem libraries for some important non-classical logics, namely first-order intuitionistic and first-order modal logics. We suggest future plans to extend these existing libraries and discuss ideas for a general problem library platform for non-classical logics.

12:30
HOL Provers for First-order Modal Logics --- Experiments

ABSTRACT. Higher-order automated theorem provers have been employed to automate first-order modal logics. Extending previous work, an experiment has been carried out to evaluate their collaborative and individual performances.

11:35-11:45 Session 169: Short Break (ASPOCP)
Location: FH, Seminarraum 138A
11:45-13:00 Session 170A: Dynamic Domains and Action Languages (ASPOCP)
Location: FH, Seminarraum 138A
11:45
Temporal Stable Models are LTL-representable

ABSTRACT. Many scenarios in Answer Set Programming (ASP) deal with dynamic systems over (potentially infinite) linear time. Temporal Equilibrium Logic (TEL) is a formalism that allows defining the idea of a "temporal stable model" not only for dynamic domains in ASP, but also for any arbitrary theory in the syntax of Linear-Time Temporal Logic (LTL). In the past, several tools for computing temporal stable models have been built using well-established LTL and automata techniques. These tools displayed the set of temporal stable models of a given theory as a Büchi automaton and, in many cases, it was also possible to capture such a set by the LTL-models of a given temporal formula. The fundamental theoretical question of whether this was a general property or not remained open, since it is well known that, in general, Büchi automata are more expressive than LTL. In this paper we show that, indeed, the set of temporal stable models of any arbitrary temporal theory can be succinctly captured as the LTL models of a given temporal formula.

12:10
Applying Action Language BC with Hierarchical Domain Abstraction to Mobile Robots
SPEAKER: unknown

ABSTRACT. Action language BC provides an elegant way of formalizing robotic domains which need to be expressed using default logic as well as indirect and recursive action effects. However, generating plans efficiently for large domains using BC can be challenging, even when state-of-the-art answer set solvers are used. Since a number of task planning domains in mobile robotics can be easily broken up hierarchically using macro-actions, we investigate the computational gains achieved by planning over a hierarchical abstraction of the domain. Each layer of abstraction is described independently using BC, and we plan independently over each description, resolving macro-actions as we go down the hierarchy. We present a case study where at least an order of magnitude speedup was achieved in a robot mail collection task by using hierarchical domain abstractions.

12:35
Action Language BC+: Preliminary Report
SPEAKER: Joohyung Lee

ABSTRACT. Recently, action language BC, which combines the attractive features of action languages B and C+, was proposed. While BC allows Prolog-style recursive definitions that are not available in C+, it is less expressive than C+ in other ways, such as the inability to express non-Boolean and non-exogenous actions. We propose a new action language called BC+, which encompasses all the features of BC and the definite fragment of C+. The syntax of BC+ is identical to the syntax of C+, allowing arbitrary propositional formulas in the causal laws, but its semantics is defined in terms of propositional formulas under the stable model semantics instead of nonmonotonic causal theories. This approach allows many useful ASP constructs, such as choice rules and aggregates, to be directly used in language BC+, and exploits computational methods available in ASP solvers.

12:15-13:00 Session 171A: Human Interfaces and Medical Devices, and discussion (VeriSure)
Location: FH, Hörsaal 2
12:15
Human-Computer Interaction and the Formal Certification and Assurance of Medical Devices: The CHI+MED Project
SPEAKER: unknown

ABSTRACT. The number of recalls of medical devices with embedded computers due to safety issues in recent years suggests there is a need for new approaches to support the certification process. There is increasing concern about the impact of systematic use errors, yet there has been little research focusing on model-based tool support for assurance and certification with respect to systematic use error. The CHI+MED project (http://www.chi-med.ac.uk) aims to address this gap. It is concerned with the design of safer medical devices with a specific focus on human-computer interaction. We are developing a range of integrated model-based engineering methods and other formal and semi-formal techniques to support the certification process, both pre- and post-market, including their use in the wider system context. In this position paper we review our approach and the contributions to date.

12:15-13:15 Session 171B: Verification: Contributed Talks (ARW-DT, VERIFY and WING)
Location: FH, Hörsaal 3
12:15
SPEAKER: unknown

ABSTRACT. In the ForMaRE project formal mathematical reasoning is applied to economics. After an initial exploratory phase, it focused on auction theory and has produced, as its first results, formalized theorems and certified executable code.

12:45
Automating Regression Verification

ABSTRACT. Regression verification is an approach to preventing regressions in software development using formal verification. The goal is to prove that two versions of a program behave equally or differ in a specified way. We worked on an approach for regression verification, extending Strichman and Godlin's work with relational equivalence and two ways of using counterexamples.

13:00-14:30 Lunch Break
14:30-16:00 Session 172A: Knowledge Representation and Reasoning 2 (KR)
Location: EI, EI 7
14:30
Certain Answers as Objects and Knowledge
SPEAKER: Leonid Libkin

ABSTRACT. The standard way of answering queries over incomplete databases is to compute certain answers to them. These have always been defined as the intersection of query answers on all complete databases that are represented by the incomplete database. But is this universally accepted definition correct? Our goal is to argue that this 'one-size-fits-all' definition can often lead to counterintuitive or just plain wrong results, and to propose an alternative framework for defining certain answers.

The idea of the framework is to move away from the assumption, standard in the database literature, that query results be given in the form of a database object, and to allow instead two alternative representations of answers: as objects defining all other answers, or as knowledge we can deduce with certainty about all such answers. We show that the latter is often easier to achieve than the former, that in general certain answers need not be defined as an intersection, and that they may well contain missing information. We also show that, with a proper choice of semantics, we can often reduce computing certain answers - as either objects or knowledge - to standard query evaluation. We describe the framework in the most general way, applicable to a variety of data models, and test it on three concrete relational semantics of incompleteness: open, closed, and weak closed world.
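The classical intersection-based definition that the talk challenges can be sketched in a few lines (a toy example of ours, not from the paper):

```python
# Toy sketch (illustration only): the classical definition of certain
# answers as the intersection of the query's answers over every complete
# database ("possible world") represented by the incomplete one.
def certain_answers(query, possible_worlds):
    answer_sets = [set(query(db)) for db in possible_worlds]
    result = answer_sets[0]
    for s in answer_sets[1:]:
        result &= s          # keep only answers true in every world
    return result

# An incomplete database with a null that may stand for 'Alice' or 'Carol':
worlds = [
    {('Alice', 'Paris'), ('Bob', 'Rome')},
    {('Carol', 'Paris'), ('Bob', 'Rome')},
]
q = lambda db: {name for (name, city) in db if city == 'Rome'}
print(certain_answers(q, worlds))  # {'Bob'}
```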

15:00
Tackling Winograd Schemas by Formalizing Relevance Theory in Knowledge Graphs

ABSTRACT. We study the disambiguation of pronoun references in Winograd Schemas, which are part of the Winograd Schema Challenge, a proposed replacement for the Turing test. In particular we consider sentences where the pronoun can be resolved to both antecedents without semantic violations in world knowledge, i.e., for both readings of the sentence there is a possible consistent world. Nevertheless, humans will strongly prefer one answer, which can be explained by pragmatic effects described in Relevance Theory. We state formal optimization criteria based on principles of Relevance Theory in a simplification of Roger Schank's graph framework for natural language understanding. We perform experiments using Answer Set Programming and report the usefulness of our criteria for disambiguation and their sensitivity to parameter variations.

15:30
Simultaneous Learning and Prediction

ABSTRACT. Agents in real-world environments may have only partial access to available information, often in an arbitrary, or hard to model, manner. By reasoning with knowledge at their disposal, agents may hope to recover some missing information. By acquiring the knowledge through a process of learning, the agents may further hope to guarantee that the recovered information is indeed correct.

Assuming only a black-box access to a learning process and a prediction process that are able to cope with missing information in some principled manner, we examine how the two processes should interact so that they improve their overall joint performance. We identify natural scenarios under which the interleaving of the processes is provably beneficial over their independent use.

14:30-16:00 Session 172B: Reports from the Field (KR)
Location: EI, EI 9
14:30
Computing Narratives of Cognitive User Experience for Building Design Analysis: KR for Industry Scale Computer-Aided Architecture Design
SPEAKER: unknown

ABSTRACT. We present a cognitive design assistance system equipped with analytical capabilities aimed at anticipating architectural building design performance with respect to people-centred functional design goals. The paper focuses on the system capability to generate "narratives of visuo-locomotive user experience" from digital computer-aided architecture design (CAAD) models. The system is based on an underlying declarative narrative representation and computation framework pertaining to conceptual, geometric, and qualitative spatial knowledge. The semantics of the declarative narrative model, i.e., the overall representation and computation model, is founded on: (a) conceptual knowledge formalised in an OWL ontology; (b) a general spatial representation and reasoning engine implemented in constraint logic programming; and (c) a declaratively encoded (narrative) construction process (over graph structures) implemented in answer-set programming.

We emphasise and demonstrate: complete system implementation, algorithmic scalability, and robust performance and integration with industry-scale architecture tools (e.g., Revit, ArchiCAAD) and standards (BIM, IFC).

15:00
Tweety: A Comprehensive Collection of Java Libraries for Logical Aspects of Artificial Intelligence and Knowledge Representation

ABSTRACT. This paper presents Tweety, an open source project for scientific experimentation on logical aspects of artificial intelligence and particularly knowledge representation. Tweety provides a general framework for implementing and testing knowledge representation formalisms in a way that is familiar to researchers used to logical formalizations. This framework is very general, widely applicable, and can be used to implement a variety of knowledge representation formalisms from classical logics, over logic programming and computational models for argumentation, to probabilistic modeling approaches. Tweety already contains over 15 different knowledge representation formalisms and allows easy computation of examples, comparison of algorithms and approaches, and benchmark tests. This paper gives an overview on the technical architecture of Tweety and a description of its different libraries. We also provide two case studies that show how Tweety can be used for empirical evaluation of different problems in artificial intelligence.

15:30
SmartPM: An Adaptive Process Management System through Situation Calculus, IndiGolog, and Classical Planning

ABSTRACT. In this paper we present SmartPM, a model and a prototype Process Management System featuring a set of techniques providing support for automatic adaptation of knowledge-intensive processes at run-time. Such techniques are able to automatically adapt process instances without explicitly defining policies to recover from exceptions and without the intervention of domain experts at run-time, aiming at reducing error-prone and costly manual ad-hoc changes, and thus at relieving users from complex adaptation tasks. To accomplish this, we make use of well-established techniques and frameworks from Artificial Intelligence, such as situation calculus, IndiGolog, and classical planning.

14:30-16:00 Session 172C: Invited talks: Henzinger; Kugler; Fisher (RS)
Location: FH, Zeichensaal 3
14:30
Quantitative Reactive Modeling

ABSTRACT. Formal verification aims to improve the quality of hardware and software by detecting errors before they do harm. At the basis of formal verification lies the logical notion of correctness, which purports to capture whether or not a circuit or program behaves as desired. We suggest that the boolean partition into correct and incorrect systems falls short of the practical need to assess the behavior of hardware and software in a more nuanced fashion against multiple criteria. We propose quantitative fitness measures for reactive models of concurrent systems, specifically for measuring function, performance, and robustness. The theory supports quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction, model checking, and synthesis.

15:00
On Statecharts, Scenarios and Biological Modeling
SPEAKER: Hillel Kugler

ABSTRACT. TBA

15:30
Cancer as Reactivity
SPEAKER: Jasmin Fisher

ABSTRACT. Cancer is a highly complex aberrant cellular state where mutations impact a multitude of signalling pathways operating in different cell types. In recent years it has become apparent that in order to understand and fight cancer, it must be viewed as a system, rather than as a set of cellular activities. This mind shift calls for new techniques that will allow us to investigate cancer as a holistic system. In this talk, I will discuss some of the progress made towards achieving such a system-level understanding by viewing cancer as a reactive system and using computer modelling and formal verification. I will concentrate on our recent attempts to better understand cancer through the following four examples: 1) drug target optimization for Chronic Myeloid Leukaemia using an intuitive interface called BioModelAnalyzer, which makes it possible to prove stabilization of biological systems; 2) dynamic hybrid model of brain tumour development using F#; 3) state-based models of cancer signalling crosstalk and their analysis using model-checking; and 4) synthesis of biological programs from mutation experiments. Looking forward, inspired by David Harel’s Grand Challenge proposed a decade ago, I will propose a smaller grand challenge for computing and biology that could shed new light on our ability to control cell fates during development and disease and potentially change the way we treat cancer in the future.

14:30-16:00 Session 172D: Invited Talk by Jasmin Blanchette, Discussions (Vampire)
Location: FH, Seminarraum 134
14:30
My Life with an Automatic Theorem Prover

ABSTRACT. Sledgehammer integrates third-party automatic theorem provers in the proof assistant Isabelle/HOL. In the seven years since its first release in 2007, it has grown to become an essential part of most Isabelle users' workflow. Although a lot of effort has gone into tuning the system, the main reason for Sledgehammer's success is the impressive power of the external provers, especially E, SPASS, Vampire, and Z3. In this paper, I review Vampire's strengths and weaknesses in this context and propose a few directions for future work.

15:30
Discussions
14:30-16:00 Session 172E: Common Logic, Higher-Order Nominal Modal Logic, First-Order Intuitionistic Logic (ARQNL)
Location: FH, Seminarraum 138C
14:30
Proof Support for Common Logic
SPEAKER: unknown

ABSTRACT. We present the theoretical background for an extension of the Heterogeneous Tool Set Hets that enables proof support for Common Logic. This is achieved via logic translations that relate Common Logic and some of its sublogics to already supported logics and automated theorem proving systems. We thus provide the first theorem proving support for Common Logic covering the full language, including the possibility of verifying meta-theoretical relationships between Common Logic theories.

15:00
Embedding of Quantified Higher-Order Nominal Modal Logic into Classical Higher-Order Logic
SPEAKER: unknown

ABSTRACT. In this paper, we present an embedding of higher-order nominal modal logic into classical higher-order logic, and study its automation. There exists no automated theorem prover for first-order or higher-order nominal logic at the moment, hence, this is the first automation for this kind of logic. In our work, we focus on nominal tense logic and have successfully proven some first theorems.

15:30
Dialogues for proof search
SPEAKER: Jesse Alama

ABSTRACT. Dialogue games are a two-player semantics for a variety of logics, including intuitionistic and classical logic. Dialogues can be viewed as a kind of analytic calculus not unlike tableaux. Can dialogue games be an effective foundation for proof search in intuitionistic logic (both first-order and propositional)? We announce Kuno, an automated theorem prover for intuitionistic first-order logic based on dialogue games.

14:30-15:30 Session 172F: Assurance for Secure Systems (VeriSure)
Location: FH, Hörsaal 2
14:30
Implicit Assumptions in a Model for Separation Kernels
SPEAKER: unknown

ABSTRACT. In joint work with several industrial and academic partners throughout Europe, we are working towards the certification of PikeOS -- an industrial separation kernel developed at SYSGO -- according to the highest level of the Common Criteria. We present a new generic model of separation kernels that includes interrupts, control and context switches. For this generic model, noninterference has been proven in the Isabelle/HOL theorem prover from proof obligations on the step function. Noninterference holds for all separation kernels which satisfy these assumptions. In this paper, we discuss this methodology for certification purposes. On the one hand, it is clear that our instantiation of the step function for PikeOS must satisfy the proof obligations. On the other hand, there are many implicit assumptions made in defining the run function, that is, in defining how the step function is applied. Based on work-in-progress, we address the issue of providing evidence that PikeOS indeed satisfies our generic notion of noninterference.

15:00
Assurance and Verification of Security Properties

ABSTRACT. This paper reviews the results of our recent research in assurance and verification of embedded software systems. This research was part of an effort in which we prepared evidence, including an abstract formal model and mechanical proofs, of the security of an embedded system implemented in the C language. It then describes the evidence provided for the certification and how this evidence might be presented in an assurance case. The paper also describes our more recent research on SecProve, a tool for automatically checking the security of C programs during their development. It concludes by listing some open research problems in system assurance and verification.

14:30-16:00 Session 172G: Tutorial (Antoniotti), Contributed Talk (Dreossi) (HSB)
Location: FH, Seminarraum 101A
14:30
Tutorial: The Cellular Potts Model
15:30
Parameter Synthesis using Parallelotopic Enclosure and Applications to Epidemic Models
SPEAKER: unknown

ABSTRACT. We consider the problem of refining a parameter set to ensure that the behaviors of a dynamical system satisfy a given property. The dynamics are defined through parametric polynomial difference equations, and their Bernstein representations are exploited to enclose reachable sets in parallelotopes. This allows us to achieve more accurate reachable set approximations with respect to previous works based on axis-aligned boxes. Moreover, we introduce a symbolic precomputation that leads to a significant improvement in time performance. Finally, we apply our framework to some epidemic models, demonstrating the strength of the proposed method.

14:30-16:00 Session 172H: Superposition II (PAAR)
Location: FH, Hörsaal 7
14:30
A Model Guided Instantiation Heuristic for the Superposition Calculus with Theories
SPEAKER: Joshua Bax

ABSTRACT. Generalised Model Finding (GMF) is a quantifier instantiation heuristic for the superposition calculus in the presence of interpreted theories with arbitrarily quantified free function symbols ranging into theory sorts. The free function symbols are approximated by finite partial function graphs along with some simplifying assumptions, which are iteratively refined. Here we present an outline of the GMF approach, give an improvement that addresses some of these assumptions, and then present some ideas for extending it with concepts from instantiation-based theorem proving.

15:00
Logtk : A Logic ToolKit for Automated Reasoning and its Implementation
SPEAKER: Simon Cruanes

ABSTRACT. We describe the design and implementation of Logtk, an OCaml library for writing automated reasoning tools that deal with (possibly typed) first-order logic. The library provides data structures and algorithms to represent terms, formulas and substitutions, perform unification, index terms, and parse problems, as well as a few tools to demonstrate its use. It is the basis of a full-fledged superposition prover.

15:30
Polymorphic+Typeclass Superposition
SPEAKER: Daniel Wand

ABSTRACT. We present an extension of superposition that natively handles a polymorphic type system extended with type classes, thus eliminating the need for type encodings when used by an interactive theorem prover like Isabelle/HOL. We describe syntax, typing rules, semantics, the polymorphic superposition calculus and an evaluation on a problem set that is generated from Isabelle/HOL theories. Our evaluation shows that native polymorphic+typeclass performance compares favorably to monomorphisation, a highly efficient but incomplete way of dealing with polymorphism.

14:30-15:45 Session 172I: Invited talk: Ahmed Bouajjani (FRIDA)
Location: FH, Seminarraum 104
14:30
On Checking Correctness of Concurrent Data Structures

ABSTRACT. We address the issue of checking the correctness of implementations of libraries of concurrent/distributed data structures. We present results concerning the verification of linearizability in the context of shared-memory concurrent data structures, and eventual consistency in the context of replicated, distributed data structures.

This talk is based on joint work with Michael Emmi, Constantin Enea, and Jad Hamza.

14:30-16:00 Session 172J: Contributed Talks (ARW-DT and WING)
Location: FH, Hörsaal 3
14:30
A Theorem Prover Backed Approach to Array Abstraction
SPEAKER: Nathan Wasser

ABSTRACT. We present an extension to an on-demand abstraction framework, which integrates deductive verification and abstract interpretation. Our extension allows for a significantly higher precision when reasoning about programs containing arrays. We demonstrate the usefulness of our approach in the context of reasoning about secure information flow. In addition to abstracting arrays that may have been modified, our approach can also keep full precision while adding additional information about array elements which have been only read but not modified.

15:00
ALICe: A Framework to Improve Affine Loop Invariant Computation
SPEAKER: unknown

ABSTRACT. A crucial point in program analysis is the computation of loop invariants. Accurate invariants are required to prove properties on a program but they are difficult to compute. Extensive research has been carried out but, to the best of our knowledge, no benchmark has ever been developed to compare algorithms and tools.

We present ALICe, a toolset for comparing automatic computation techniques for affine loop scalar invariants. It comes with a benchmark that we built from 102 test cases found in the loop invariant bibliography, and it interfaces with three analysis programs that rely on different techniques: Aspic, isl and PIPS. Conversion tools are provided to handle the format heterogeneity of these programs.

Experimental results show the importance of model coding and the poor performance of PIPS on concurrent loops. To tackle these issues, we use two model restructuring techniques, whose correctness is proved in Coq, and discuss the improvements achieved.

15:30
Loop Invariants by Mutation, Dynamic Validation, and Static Checking

ABSTRACT. Some of the intrinsic limitations of the widely used static techniques for loop invariant inference can be offset by combining them with dynamic techniques---based on executing automatically generated tests. We show how useful loop invariant candidates can be generated by systematically mutating postconditions; then, we apply dynamic checking to weed out invalid candidates, and static checking to select provably valid ones. We present a framework that automatically applies these techniques to carry out functional correctness proofs without manually written loop invariants. Applied to 28 methods (including 39 different loops) from various java.util classes, our DynaMate prototype automatically discharged 97% of all proof obligations, resulting in automatic complete correctness proofs of 25 out of the 28 methods---outperforming several state-of-the-art tools for fully automatic functional verification of programs with loops.

14:30-16:00 Session 172K (GeTFun)
Location: FH, Hörsaal 4
14:30
An intuitionistic ALC description default logic

ABSTRACT. Knowledge formalization and reasoning automation are central within Artificial Intelligence. Classical logic has traditionally been used for such purposes. However, it is better suited to dealing with complete knowledge in ideal circumstances. In real situations, in which knowledge is partial, classical logic is not sufficient, since it is monotonic. Nonmonotonic logics have been proposed to better cope with practical reasoning. A successful formalization of nonmonotonic reasoning is Reiter's default logic, which extends classical logic with default rules. Unfortunately, default logic is undecidable; one reason for this is the use of classical logic as its monotonic basis. In this work, we change the monotonic basis of default logic and propose a new default logic based on its intuitionistic version iALC. This new default logic is decidable and useful for formalizing practical reasoning on hierarchical ontologies with exceptions, especially ones that deal with legal knowledge and reasoning. On the default side, we add some restrictions to the application of defaults in order to obtain nice properties such as coherence and the elimination of anomalous extensions. We present the main algorithms used to build the extensions for this logic, including the sequent calculus for iALC, with its complexity analysis.

15:00
An infinitary deduction system for CTL*
SPEAKER: Luca Viganò
15:30
Modal functions as moody truth-functions
SPEAKER: Pedro Falcão
ABSTRACT.
We can think of the usual (S5) modalities as 'moody' truth-functions. E.g. the necessity operator 'works' as falsum (constant falsehood) if the argument is contingent, and it works as the identity function if the argument is rigid (i.e. non-contingent); the possibility operator 'works' as verum (constant truth) if the argument is contingent and works as the identity if the argument is rigid.

We show how (the 16) pairs of unary truth-functions correspond unequivocally to unary modal functions; moreover we show how to generalize this to establish a correspondence between modal functions of arbitrary degree and sequences of truth-functions.
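The unary case of this correspondence can be sketched concretely (an illustrative toy, assuming a finite set of worlds with propositions modeled as the sets of worlds where they hold; not code from the talk):

```python
# S5 modalities as "moody" truth-functions over a finite set of worlds.
W = frozenset(range(4))

def box(p):        # necessity: p holds in all worlds, else nowhere
    return W if p == W else frozenset()

def diamond(p):    # possibility: p holds somewhere, then everywhere
    return W if p else frozenset()

def contingent(p):
    return p != W and p != frozenset()

# The "moody" reading: on contingent arguments, box acts as falsum and
# diamond as verum; on rigid (non-contingent) arguments, both act as identity.
for p in [frozenset(), W, frozenset({0, 1})]:
    if contingent(p):
        assert box(p) == frozenset() and diamond(p) == W
    else:
        assert box(p) == p and diamond(p) == p
```

Each unary modal function is thus determined by a pair of unary truth-functions, one "mood" for contingent arguments and one for rigid ones, which is the 16-pair correspondence the abstract describes.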
14:30-16:00 Session 172L: Reasoning for Vagueness perhaps (PRUV)
Location: FH, Seminarraum 107
14:30
Similarity-based Relaxed Instance Queries in EL++
SPEAKER: Andreas Ecke

ABSTRACT. Description Logic (DL) knowledge bases (KBs) make it possible to express knowledge about concepts and individuals in a formal way. This knowledge is typically crisp, i.e., an individual either is an instance of a given concept or it is not. However, in practice this is often too restrictive: when querying for instances, one may often also want to find suitable alternatives, i.e., individuals that are not instances of the query concept but could still be considered "good enough". Relaxed instance queries have been introduced to gradually relax this inference in a controlled way via the use of similarity measures. So far, those algorithms only work for the DL EL, which has limited expressive power.

In this paper, we introduce a suitable similarity measure for EL++-concepts. EL++ adds nominals, role inclusion axioms, and concrete domains to EL and thus (among other things) allows the representation and comparison of concrete values and specific individuals. We extend the algorithm that computes relaxed instance queries to work w.r.t. this new concept similarity measure, and thus for general EL++ KBs.

15:00
Resolution and Clause Learning for Multi-Valued CNF Formulas

ABSTRACT. Conflict-directed clause learning (CDCL) is the basis of SAT solvers with impressive performance on many problems. This performance has led to many reasoning tasks being carried out either by reduction to propositional CNF or by adaptations of the CDCL algorithm to other logics. CDCL with restarts (CDCL-R) has been shown to have essentially the same reasoning power as unrestricted resolution (formally, they p-simulate each other). Here, we examine the generalization of resolution and CDCL-R to a family of multi-valued CNF formulas, which are possible reduction targets for a variety of multi-valued or fuzzy logics. In particular, we study the formulas called Signed (or Multi-Valued) CNF formulas, and the variant called Regular Formulas. The main purpose of the paper is to show that an analogous result holds for these cases: a natural generalization of CDCL-R to these logics has essentially the same reasoning power as natural versions of resolution that appear in the literature.
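As a toy illustration of the signed-CNF setting the abstract refers to, the following sketch (illustrative, not from the paper) shows a signed clause representation and a soundness check of the signed resolution rule, which intersects the signs of the resolved variable:

```python
# Sketch of signed (multi-valued) clauses and the signed resolution rule.
# Assumption: truth values {0,1,2,3}; a signed literal (S, x) is satisfied
# by an assignment a iff a[x] is in the sign S.
from itertools import product

VALS = range(4)

def satisfies(assign, clause):
    return any(assign[x] in S for (S, x) in clause)

def resolve(c1, c2, x):
    """Signed resolution on variable x: merge the signs of x by intersection."""
    s1 = next(S for (S, v) in c1 if v == x)
    s2 = next(S for (S, v) in c2 if v == x)
    rest = [l for l in c1 if l[1] != x] + [l for l in c2 if l[1] != x]
    return [(s1 & s2, x)] + rest

c1 = [({0, 1}, 'x'), ({2}, 'y')]
c2 = [({1, 2}, 'x'), ({3}, 'y')]
r = resolve(c1, c2, 'x')

# Soundness: every assignment satisfying both premises satisfies the resolvent.
for vx, vy in product(VALS, VALS):
    a = {'x': vx, 'y': vy}
    if satisfies(a, c1) and satisfies(a, c2):
        assert satisfies(a, r)
```

Classical resolution is the special case with two truth values, where the intersected sign of a clashing literal pair is empty and drops out.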

15:30
Many-valued Horn Logic is Hard

ABSTRACT. In this short paper we prove that deciding satisfiability of fuzzy Horn theories with n-valued Lukasiewicz semantics is NP-hard, for any n greater than or equal to 4.

14:45-16:00 Session 173A: Applications (ASPOCP)
Location: FH, Seminarraum 138A
14:45
Query Answering in Resource-Based Answer Set Semantics
SPEAKER: unknown

ABSTRACT. In recent work, we defined Resource-Based Answer Set Semantics, which is an extension to traditional answer set semantics stemming from the study of its relationship with linear logic. In this setting there are no inconsistent programs, and constraints are defined "per se" in a separate layer. In this paper, we propose a query-answering procedure reminiscent of Prolog for answer set programs under this extended semantics.

15:10
Declarative Encodings of Acyclicity Properties
SPEAKER: unknown

ABSTRACT. Many knowledge representation tasks involve trees or similar structures as abstract datatypes. However, devising compact and efficient declarative representations of such structural properties is non-obvious and can be challenging indeed. In this paper, we take a number of acyclicity properties into consideration and investigate various logic-based approaches to encode them. We use answer set programming as the primary representation language but also consider mappings to related formalisms, such as propositional logic, difference logic, and linear programming. We study the compactness of encodings and the resulting computational performance on benchmarks involving acyclic or tree structures.

15:35
Computing Secure Sets in Graphs using Answer Set Programming

ABSTRACT. Problems from the area of graph theory have always served as fruitful benchmarks for exploring the performance of Answer Set Programming (ASP) systems. A relatively new branch of graph theory is concerned with so-called secure sets. It is known that verifying whether a set is secure in a graph is already co-NP-hard. The problem of enumerating all secure sets is thus challenging for ASP and its systems. In particular, encodings for this problem seem to require disjunction and recursive aggregates. In this paper, we provide such encodings and analyze their performance using the Clingo system.
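The co-NP-hard verification task mentioned above can be illustrated with a brute-force checker (assuming the standard definition: S is secure iff for every subset X of S, |N[X] ∩ S| >= |N[X] \ S|; the exponential loop over subsets mirrors the hardness, and is for illustration only):

```python
# Brute-force check that a vertex set S is "secure" in a graph.
from itertools import combinations

def closed_nbhd(graph, xs):
    out = set(xs)
    for v in xs:
        out |= graph[v]
    return out

def is_secure(graph, s):
    s = set(s)
    for k in range(1, len(s) + 1):
        for xs in combinations(s, k):
            nx = closed_nbhd(graph, xs)
            if len(nx & s) < len(nx - s):
                return False   # attack on xs cannot be defended
    return True

# A 4-cycle 0-1-2-3: the pair {0, 1} is secure, a single vertex is not.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_secure(G, {0, 1}))
print(is_secure(G, {0}))
```

Enumerating all secure sets adds another exponential layer on top of this check, which is why disjunctive encodings with recursive aggregates are called for in ASP.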

14:45-16:00 Session 173B: Invited Talk 2 (SYNT)
Location: FH, Seminarraum 101B
14:45
Synthesis using EF-SMT Solvers (Invited Talk)
SPEAKER: Ashish Tiwari

ABSTRACT. Satisfiability modulo theory (SMT) solvers check satisfiability of Boolean combinations of formulas that contain symbols from several different theories. All variables are (implicitly) existentially quantified. Exists-forall satisfiability modulo theory (EF-SMT) solvers check satisfiability of formulas that have an exists-forall quantifier prefix. Just as SMT solvers are used as backend engines for verification tools (such as infinite-state bounded model checkers and k-induction provers), EF-SMT solvers are potential backends for synthesis tools. This talk will describe the EF-extension of the Yices SMT solver and present results on using it for reverse engineering hardware.
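The problem shape that EF-SMT solvers address can be illustrated over a finite domain (a naive enumeration sketch of "find x such that the formula holds for all y"; this is not the Yices algorithm, which works over infinite theories):

```python
# Toy exists-forall check: find a witness x such that phi(x, y) holds
# for every y in the domain. Assumption: a small finite domain, so plain
# enumeration suffices.
DOMAIN = range(-5, 6)

def ef_solve(phi):
    for x in DOMAIN:
        if all(phi(x, y) for y in DOMAIN):
            return x
    return None   # no witness exists

# Exists x forall y: x * y <= 0 is witnessed by x = 0.
assert ef_solve(lambda x, y: x * y <= 0) == 0

# Exists x forall y: x > y has no witness inside the domain.
assert ef_solve(lambda x, y: x > y) is None
```

Real EF-SMT solvers replace the inner universal check by an SMT query and refine candidate witnesses from counterexamples, rather than enumerating.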

16:00-16:30 Coffee Break
16:30-18:00 Session 175A: Planning, Strategies and Diagnosis 2 (KR)
Location: EI, EI 7
16:30
Diagnostic Problem Solving via Planning with Ontic and Epistemic Goals
SPEAKER: unknown

ABSTRACT. Diagnostic problem solving involves a myriad of reasoning tasks associated with the determination of diagnoses, the generation and execution of tests to discriminate among diagnoses, and the determination and execution of actions to alleviate symptoms and/or their root causes. Fundamental to diagnostic problem solving is the need to reason about action and change. In this work we explore this myriad of reasoning tasks through the lens of AI automated planning. We characterize a diversity of reasoning tasks associated with diagnostic problem solving, prove properties of these characterizations, and define correspondences with established automated planning tasks and existing state-of-the-art planning systems. In doing so, we provide deeper insight into the computational challenges associated with diagnostic problem solving and propose practical algorithms for their realization.

17:00
A Formalization of Programs in First-Order Logic with a Discrete Linear Order
SPEAKER: Fangzhen Lin

ABSTRACT. We consider the problem of representing and reasoning about computer programs, and propose a translator from a core procedural iterative programming language to first-order logic with quantification over the domain of natural numbers that includes the usual successor function and the "less than" linear order, essentially a first-order logic with a discrete linear order. Unlike Hoare's logic, our approach does not rely on loop invariants. Unlike typical temporal logic specifications of programs, our translation does not require a transition system model of the program, and is compositional in the structure of the program. Some non-trivial examples are given to show the effectiveness of our translation for proving properties of programs.

17:30
Satisfiability of Alternating-time Temporal Epistemic Logic through Tableaux

ABSTRACT. In this paper we present a tableau-based method to decide the satisfiability of formulas in ATEL, an extension of the alternating-time temporal logic ATL including epistemic modalities for individual knowledge. Specifically, we analyse satisfiability of ATEL formulas under a number of conditions. We evaluate the impact of the assumptions of synchronicity and of a unique initial state, which have been proposed in the context of Interpreted Systems. Also, we consider satisfiability at an initial state as opposed to any state in the system. We introduce a tableau-based decision procedure for each of these combinations. Moreover, we adopt an agent-based approach to satisfiability, namely, the decision procedure returns a set of agents inducing a concurrent game structure that satisfies the specification at hand.

16:30-18:00 Session 175B: Uncertainty (KR)
Location: EI, EI 9
16:30
Linear Programs for Measuring Inconsistency in Probabilistic Logics
SPEAKER: Nico Potyka

ABSTRACT. Inconsistency measures help analyzing contradictory knowledge bases and resolving inconsistencies. In recent years several measures with desirable properties have been proposed, but often these measures correspond to combinatorial or non-convex optimization problems that are hard to solve in practice. In this paper, I study a new family of inconsistency measures for probabilistic knowledge bases. All members satisfy many desirable properties and can be computed by means of convex optimization techniques. For two members, I present linear programs whose computation is barely harder than a probabilistic satisfiability test.
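As a toy illustration of distance-based inconsistency measurement (a hand-solved special case over a single atom, not the paper's LP formulation, whose constraints and measures are more general):

```python
# Toy inconsistency measure for a probabilistic knowledge base with two
# constraints P(a) >= p_a and P(not a) >= p_not_a over one atom.
# The measure is the smallest common relaxation eps >= 0 that makes the
# constraints jointly satisfiable; feasibility needs
# (p_a - eps) + (p_not_a - eps) <= 1, which gives a closed form.
def min_epsilon(p_a, p_not_a):
    return max(0.0, (p_a + p_not_a - 1.0) / 2.0)

print(min_epsilon(0.7, 0.6))   # inconsistent KB: positive measure
print(min_epsilon(0.4, 0.5))   # consistent KB: measure 0
```

In the general setting this minimization is a linear program over the probabilities of the interpretations, which is why it stays barely harder than a probabilistic satisfiability test.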

17:00
Reasoning with Uncertain Inputs in Possibilistic Networks
SPEAKER: Karim Tabia

ABSTRACT. Graphical belief models are compact and powerful tools for representing and reasoning under uncertainty. Possibilistic networks are graphical belief models based on possibility theory. In this paper, we address reasoning under uncertain inputs in both quantitative and qualitative possibilistic networks. More precisely, we first provide possibilistic counterparts of Pearl's methods of virtual evidence, then compare them with the possibilistic counterparts of Jeffrey's rule of conditioning. As in the probabilistic setting, the two methods are shown to be equivalent in the quantitative setting regarding the existence and uniqueness of the solution. However, in the qualitative setting, Pearl's method of virtual evidence, which applies directly to graphical models, disagrees with Jeffrey's rule and the virtual evidence method. The paper identifies the precise situations where the methods are not equivalent. Finally, it addresses related issues such as transformations from one method to another and commutativity.

17:30
Relational Logistic Regression
SPEAKER: unknown

ABSTRACT. Logistic regression is a commonly used representation for aggregators in Bayesian belief networks when a child has multiple parents. Variations in population, as well as the desire to model interactions between parents in relational models, introduce representational problems for using logistic regression in relational models. In this paper, we first examine the representational problems caused by population variation. We show how these problems arise even in simple cases with a single parametrized parent, and propose a linear relational logistic regression, which we show can represent arbitrary linear (in population size) decision thresholds, whereas traditional logistic regression cannot. Then we examine representing interactions between the parents of a child node, and representing non-linear dependency on population size. We propose a multi-parent relational logistic regression which can represent interactions between parents and arbitrary polynomial decision thresholds. Finally, we show how other well-known aggregators can be represented using this relational logistic regression.
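The linear-in-population decision threshold can be illustrated with a small sketch (the weights and the single Boolean parametrized parent are made up for illustration; they are not from the paper):

```python
# Sketch of a linear relational logistic regression aggregator for a child q
# with one parametrized Boolean parent r(A).
import math

def rlr_prob(n_true, n_false, w0=-2.0, w1=1.0, w2=-0.5):
    # P(q) = sigmoid(w0 + w1 * #true parents + w2 * #false parents).
    # The decision threshold (where the argument crosses 0) is linear in
    # the population size n_true + n_false, which a fixed non-relational
    # logistic regression cannot express.
    z = w0 + w1 * n_true + w2 * n_false
    return 1.0 / (1.0 + math.exp(-z))

print(rlr_prob(1, 10) < 0.5)   # few true parents in a larger population
print(rlr_prob(10, 10) > 0.5)  # many true parents
```

With these weights, q becomes likely once the count of true parents exceeds 2 + 0.5 times the count of false ones, a threshold that grows with the population.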

16:30-18:10 Session 175C: Systems, Tools, and Frameworks (ASPOCP)
Location: FH, Seminarraum 138A
16:30
"Are Preferences Giving You a Headache?" - "Take asprin!"
SPEAKER: unknown

ABSTRACT. In this paper we introduce asprin [1], a general, flexible, and extensible framework for handling preferences among the stable models of a logic program. We show how complex preference relations can be specified through user-defined preference types and their arguments. We describe how preference specifications are handled internally by so-called preference programs which are used for dominance testing. We also give algorithms for computing one, or all, optimal stable models of a logic program. Notably, the algorithms depend on the complexity of the dominance tests and make use of incremental answer set solving technology.

[1] asprin stands for "ASP for Preference handling".

16:55
On the Implementation of Weak Constraints in WASP
SPEAKER: unknown

ABSTRACT. Optimization problems in Answer Set Programming (ASP) are usually modeled by means of programs with weak constraints. These programs can be handled by algorithms for solving Maximum Satisfiability (MaxSAT) problems, if properly ported to the ASP framework. This paper reports on the implementation of several of these algorithms in the ASP solver WASP, whose empirical analysis highlights pros and cons of different strategies for computing optimal answer sets.

17:20
Interactive Query-based Debugging of ASP Programs

ABSTRACT. Broad application of answer set programming (ASP) for declarative problem solving requires the development of tools supporting the coding process. Program debugging is one of the crucial activities within this process. Modern ASP debugging approaches allow efficient computation of possible explanations of a fault. However, even for a small program a debugger might return a large number of possible explanations and selection of the correct one must be done manually.

In this paper we present an interactive query-based ASP debugging method which extends previous approaches and finds a preferred explanation by means of observations. The system automatically generates a sequence of queries to a programmer, asking whether a set of ground atoms must be true in all (cautiously) or some (bravely) answer sets of the program. Since some queries can be more informative than others, we discuss query selection strategies which, given the user's preferences for an explanation, find the best query, that is, the query whose answer reduces the overall number of queries required to identify a preferred explanation.

17:45
Computing Answer Sets for Monadic Logic Programs via MapReduce

ABSTRACT. In this paper the applicability of the MapReduce framework to parallel computation for monadic logic programs is studied. In monadic programs, all predicates have arity one. Two different approaches are suggested: the first parallelizes on constants, the second on predicates. In each case, a method is provided that makes use of MapReduce and calls standard ASP solvers as back-ends via an abstract API. For each method, we provide a theoretical analysis of its computational impact.
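The "parallelize on constants" idea can be sketched as follows (the tiny fixpoint solver below stands in for a real ASP back-end, and the single-rule program is made up for illustration; with only unary predicates, the grounding splits per constant and each part can be solved independently):

```python
# Map step: solve the monadic program restricted to one constant.
# Reduce step: take the union of the per-constant answer sets.
from functools import reduce

RULES = [("q", "p")]            # q(X) :- p(X).
FACTS = {"p": {"a", "c"}}       # p(a). p(c).
CONSTANTS = ["a", "b", "c"]

def solve_for(const):
    atoms = {(p, const) for p, cs in FACTS.items() if const in cs}
    changed = True
    while changed:               # naive fixpoint for the positive program
        changed = False
        for head, body in RULES:
            if (body, const) in atoms and (head, const) not in atoms:
                atoms.add((head, const))
                changed = True
    return atoms

answer_set = reduce(set.union, map(solve_for, CONSTANTS), set())
print(sorted(answer_set))
```

The map calls are independent, so they can be distributed across workers; the reduce is a plain union, which is what makes the MapReduce fit natural for the monadic case.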

16:30-18:15 Session 175D: Invited talks: Clarke; Dershowitz; Vardi (RS)
Location: FH, Zeichensaal 3
16:30
Model Checking Hybrid Systems

ABSTRACT. TBA

17:15
Towards a General Model of Evolving Interaction

ABSTRACT. TBA

17:45
Compositional Temporal Synthesis
SPEAKER: Moshe Vardi

ABSTRACT. Synthesis is the automated construction of a system from its specification. In standard temporal-synthesis algorithms, it is assumed the system is constructed from scratch. This, of course, rarely happens in real life. In real life, almost every non-trivial system, either in hardware or in software, relies heavily on using libraries of reusable components. Furthermore, other contexts, such as web-service orchestration and choreography, can also be modeled as synthesis of a system from a library of components. In this talk we describe and study the problem of compositional temporal synthesis, in which we synthesize systems from libraries of reusable components. We define two notions of composition: data-flow composition, which we show is undecidable, and control-flow composition, which we show is decidable. We then explore a variation of control-flow compositional synthesis, in which we construct reliable systems from libraries of unreliable components. Joint work with Yoad Lustig and Sumit Nain.

16:30-18:00 Session 175E: Special Session on Competitions: SyGuS & SyntComp (SYNT)
Location: FH, Seminarraum 101B
16:30
The 1st Syntax-Guided Synthesis Competition (SyGuS)
SPEAKER: Rajeev Alur
17:15
The 1st Synthesis Competition for Reactive Systems (SyntComp)
SPEAKER: Swen Jacobs
16:30-18:00 Session 175F: Vampire Regular Talks (Vampire)
Location: FH, Seminarraum 134
16:30
SAT solving experiments in Vampire
SPEAKER: Ioan Dragan

ABSTRACT. In order to better understand how well an external SAT solver would behave in the framework of a first-order automated theorem prover, we decided to integrate into Vampire one of the best performing solvers available, Lingeling. Although the integration itself is straightforward, requiring only the implementation of a few interfaces so that the two tools can communicate, a few problems have to be overcome.

In this talk we address the issues that arise from integrating the two solvers and present initial results obtained by using this combination, as well as details about the different strategies currently implemented.

17:00
First Class Boolean Type in First-Order Theorem Proving and TPTP

ABSTRACT. Automated analysis and verification of programs requires proving properties in the combination of first-order theories of various data structures, such as integers, arrays, or lists. Automated first-order theorem provers face new requirements from reasoning-based analysis and verification, and their efficiency largely depends on how programs are translated into a collection of logical formulas capturing the program semantics. Recent extensions to the TPTP language, the input language of automated provers, support new programming language features aimed at keeping translation-sensitive semantics on the level of the theorem prover. For example, TPTP has recently been extended with if-then-else and let-in constructs over terms and formulas. In this paper we propose to extend TPTP with a built-in first-class boolean type that can facilitate the translation of programs with conditional statements and simplify the treatment of the if-then-else and let-in constructs of TPTP. Such a built-in first-class boolean type can be efficiently implemented in the Vampire theorem prover without major changes to its architecture.

17:30
Reasoning in First-Order Theories with Extensionality

ABSTRACT. Extensionality resolution is a new inference rule that helps first-order theorem provers to overcome limitations when reasoning with extensionality axioms. The inference rule naturally resembles the use of extensionality axioms in human proofs and integrates easily into the architecture of superposition-based theorem provers. The recent implementation of extensionality resolution in Vampire provides strong experimental evidence of the effectiveness of the technique. We present ongoing work on new options to control the recognition of extensionality axioms as well as the application of extensionality resolution inferences in the proof search of Vampire.
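
As a generic illustration (not taken from the talk), a typical extensionality axiom, here for arrays with a read function, has the form:

```latex
% Extensionality axiom for arrays: two arrays that agree at every
% index are equal. Axioms of this shape are what extensionality
% resolution is designed to handle.
\forall a\,\forall b\;\bigl(\,(\forall i\;\mathit{read}(a,i) \simeq \mathit{read}(b,i)) \rightarrow a \simeq b\,\bigr)
```

In clausal form the conclusion becomes a variable equation (a literal of the form x ≃ y), which unifies with virtually any pair of terms and therefore overwhelms an unrestricted superposition prover; goal-directed application of such axioms is the motivation for a dedicated inference rule.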

16:30-18:00 Session 175G: Logic with Partial Functions, First-Order Sequent Logics, Discussion (ARQNL)
Location: FH, Seminarraum 138C
16:30
Theorem Proving for Logic with Partial Functions Using Kleene Logic and Geometric Logic

ABSTRACT. We introduce a theorem proving strategy for Partial Classical Logic (PCL) that is based on geometric logic. The strategy first translates PCL theories into sets of Kleene formulas. After that, the Kleene formulas are translated into 3-valued geometric logic. The resulting formulas can be refuted by an adaptation of geometric resolution.

The translation to Kleene logic not only opens the way to theorem proving, but also sheds light on the relation between PCL, Kleene logic, and classical logic.
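
For reference (a standard fact, not specific to the talk), the strong Kleene connectives over the truth values t, f, and u (undefined) can be summarized as:

```latex
% Strong Kleene three-valued logic (t = true, f = false, u = undefined):
\neg t = f, \qquad \neg f = t, \qquad \neg u = u
% Conjunction and disjunction are min and max w.r.t. the order f < u < t:
x \wedge y = \min(x, y), \qquad x \vee y = \max(x, y)
```

In particular, u ∧ f = f and u ∨ t = t, so a defined value can still determine the result of a compound formula even when one subformula is undefined.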

17:00
Computer-oriented inference search in first-order sequent logics

ABSTRACT. We develop an approach to the construction of computer-oriented first-order sequent logics (with or without equality) for both the classical and intuitionistic cases and their modal extensions. It exploits original notions of admissibility and compatibility, which make it possible to avoid preliminary skolemization, an operation forbidden in a number of non-classical logics. Following the approach, we describe cut-free sequent (modal) calculi in which inference search does not depend on the order in which quantifier rules are applied. Results on the coextensivity of various sequent calculi are given. The research opens the way to the construction of computer-oriented quantifier-rule-free calculi for classical and intuitionistic logics and their modal extensions.

17:30
Open Discussion
16:30-17:30 Session 175H: Assurance and Probability (VeriSure)
Location: FH, Hörsaal 2
16:30
A Graphical Notation for Probabilistic Specifications
SPEAKER: unknown

ABSTRACT. Formal methods represent a powerful but, in practice, poorly supported approach to verification. One reason among others is that such methods are often perceived as too theoretical, are easily misunderstood by non-experts, and their notations are therefore misapplied.

For example, to write formal specifications, complex logics are state of the art, especially for probabilistic verification properties. Not only are users unable to construct correct specifications, but they also fail to understand what is meant by the verification results based on those specifications.

In this paper we address this problem and endorse the use of a graphical notation to construct such specification properties, especially probabilistic ones. We present what such a notation may look like and emphasize that a single notation suffices to address the lack of comprehensibility of both the specification and the verification results. Moreover, we outline some future research directions.

17:00
Assurance of some system reliability characteristics in formal design verification

ABSTRACT. In this paper we consider two ways of using both formal-logical and probabilistic models, based on model checking and Markov chains, for the semi-automatic verification of fault-tolerance properties and soft-error robustness of a target design. One approach is the well-known Probabilistic Model Checking, while the other, suggested recently, combines traditional Model Checking with a multi-dimensional Markov chain. A comparison and analysis of these approaches shows that such a combination of formal verification with the analysis of fault-tolerance properties could reduce overall design cost and considerably increase the effectiveness of formal verification tools.

16:30-18:00 Session 175I: Evaluation, Sets, Tableaux (ARW-DT and WING)
Location: FH, Hörsaal 3
16:30
Towards Evaluating the Usability of Interactive Theorem Provers
SPEAKER: unknown

ABSTRACT. The effectiveness of interactive theorem provers (ITPs) has increased to the point that the bottleneck in the interactive process has shifted from effectiveness to efficiency. Proving large theorems still requires considerable effort from the user interacting with the system. This issue is recognized by the ITP communities, and improvements are being developed. However, in contrast to properties like soundness or completeness, where rigorous methods are applied to provide evidence, evidence for better usability is lacking in many cases. Our contribution is the application of methods from the field of human-computer interaction (HCI) to ITPs. We report on the use of focus groups to evaluate the usability of Isabelle/HOL and the KeY system. We apply usability evaluation methods in order to a) detect usability issues in the interaction between ITPs and their users, and b) analyze whether methods such as focus groups are applicable to the field of ITP.

17:00
Combined Reasoning with Sets and Aggregation Functions
SPEAKER: Markus Bender

ABSTRACT. We developed a method for checking the satisfiability of formulas in the combined theories of sets and the bridging functions card, sum, avg, min, and max by using a prover for linear arithmetic. Since abstractions of certain verification tasks lie in this fragment, the method can be used for checking the behaviour of a program.
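
As a hypothetical example of a formula in this fragment (not taken from the abstract), consider a constraint over a finite set S of integers:

```latex
% A satisfiable constraint combining set cardinality and aggregation:
% S = {2, 3} is a model (card = 2, sum = 5, min = 2, max = 3).
% Such constraints can be reduced to linear arithmetic over the
% bridging functions card, sum, min, and max.
\mathit{card}(S) = 2 \;\wedge\; \mathit{sum}(S) = 5 \;\wedge\; \mathit{min}(S) = 2 \;\wedge\; \mathit{max}(S) = 3
```

Strengthening the first conjunct to card(S) = 1 would make the formula unsatisfiable, since a singleton cannot have distinct minimum and maximum.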

17:30
Tableau Development for a Bi-Intuitionistic Tense Logic

ABSTRACT. Motivated by the theory of relations on graphs and applications to spatial reasoning, we present a bi-intuitionistic logic BISKT with tense operators. The logic is shown to be decidable and have the effective finite model property. We present a sound, complete and terminating tableau calculus for the logic and use the MetTeL system to generate an implementation. A significant part of the presentation will focus on how we developed the calculus using our tableau synthesis framework and give a demonstration of how to use the MetTeL tableau prover generation tool.

16:30-18:00 Session 175J: Proving and Disproving (PAAR)
Location: FH, Hörsaal 7
16:30
Beagle as a HOL4 external ATP method

ABSTRACT. This paper presents BEAGLE TAC, a HOL4 tactic for using Beagle as an external ATP for discharging HOL4 goals. We implement a translation of higher-order goals to the TFA format of TPTP and add trace output to Beagle so that the intermediate steps derived by the ATP can be reconstructed in HOL4. Our translation combines the characteristics of existing successful translations from HOL to FOL and SMT-LIB; however, we needed to adapt certain stages of the translation in order to benefit from the expressivity of the TFA format and the power of Beagle. In our initial experiments, we demonstrate that our system can prove, without any arithmetic lemmas, 81% of the goals solved by Metis.

17:00
Razor: Provenance and Exploration in Model-Finding

ABSTRACT. Razor is a model-finder for first-order theories presented in geometric form; geometric logic is a variant of first-order logic that focuses on "observable" properties. An important guiding principle of Razor is that it be accessible to users who are not necessarily expert in formal methods; application areas include software design, analysis of security protocols and policies, and configuration management.

A core functionality of the tool is that it supports exploration of the space of models of a given input theory, as well as presentation of provenance information about the elements and facts of a model.   The crucial mathematical tool is the ordering relation on models determined by homomorphism, and Razor prefers models that are minimal with respect to this homomorphism-ordering.

17:30
SGGS Theorem Proving: an Exposition

ABSTRACT. We present in expository style the main ideas in SGGS, which stands for Semantically-Guided Goal-Sensitive theorem proving. SGGS uses sequences of constrained clauses to represent models, instance generation to go from a candidate model to the next, and resolution as well as other inferences to repair the model. SGGS is refutationally complete for first-order logic, model based, semantically guided, proof confluent, and goal sensitive, which appears to be a rare combination of features. In this paper we describe the core of SGGS in a narrative style, emphasizing ideas and trying to keep technicalities to a minimum, in order to advertise it to builders and users of theorem provers.

16:30-18:30 Session 175K: Contributed Talks 2 (FRIDA)
Location: FH, Seminarraum 104
16:30
A Logic-based Framework for Verifying Consensus Algorithms
SPEAKER: unknown

ABSTRACT. Fault-tolerant distributed algorithms play an important role in ensuring the reliability of many software applications. In this paper we consider distributed algorithms whose computations are organized in rounds. To verify the correctness of such algorithms, we reason about (i) properties (such as invariants) of the state, (ii) the transitions controlled by the algorithm, and (iii) the communication graph. We introduce a logic that addresses these points, and contains set comprehensions with cardinality constraints, function symbols to describe the local states of each process, and a limited form of quantifier alternation to express the verification conditions. We show its use in automating the verification of consensus algorithms. In particular, we give a semi-decision procedure for the unsatisfiability problem of the logic and identify a decidable fragment. We successfully applied our framework to verify the correctness of a variety of consensus algorithms tolerant to both benign faults (message loss, process crashes) and value faults (message corruption).

17:00
Monotonic Abstraction Techniques: from Parametric to Software Model Checking
SPEAKER: unknown

ABSTRACT. Monotonic abstraction is a technique introduced in the model checking of parameterized distributed systems in order to cope with transitions containing global conditions within guards. The technique has been re-interpreted in a declarative setting in previous papers of ours and applied to the verification of fault-tolerant systems under the so-called 'stopping failures' model. The declarative reinterpretation consists in logical techniques (quantifier relativizations and, especially, quantifier instantiations) that make sense in a broader context. In fact, we recently showed that such techniques can over-approximate array accelerations, so that they can also be employed as a meaningful (and practically effective) component of CEGAR loops in software model checking.

17:30
Model Checking Distributed Consensus Algorithms
SPEAKER: unknown

ABSTRACT. We present formal models of distributed consensus algorithms such as Paxos and Raft in the executable specification language Promela, extended with a new type of guards, called counting guards, needed to implement transitions that depend on majority voting. Our formalization exploits abstractions that follow from reduction theorems (w.r.t. the number of processes) extracted from specific case studies. We exploit these reductions to apply the model checker Spin to automatically validate finite instances of the model and to extract preconditions on the size of the quorums used in the election phases of the protocol. We discuss verification results obtained via different optimizations achieved through a careful design of the Promela specification.

18:00
Model-Checking of Parameterized Timed-Systems
SPEAKER: unknown

ABSTRACT. In this work we extend Emerson and Kahlon's cutoff theorems for process skeletons with conjunctive guards to Parameterized Networks of Timed Automata, i.e. systems obtained by instantiating an unbounded number of Timed Automata from a finite set U_1,...,U_n of Timed Automata templates. In this way we aim at giving a first tool to universally verify software systems where an unknown number of software components (i.e. processes) interact under temporal constraints. It is often the case, indeed, that distributed algorithms show a heterogeneous nature, combining dynamic aspects with real-time aspects. In the paper we also show how to model check a protocol that uses special variables storing identifiers of the participating processes (i.e. PIDs) in Timed Automata with conjunctive guards. This is non-trivial, since solutions to the parameterized verification problem often rely on the processes being symmetric, i.e. indistinguishable. On the other hand, many popular distributed algorithms make use of PIDs and thus cannot directly apply those solutions.

16:30-18:00 Session 175L (GeTFun)
Location: FH, Hörsaal 4
16:30
On subexponentials, focusing and modalities in concurrent systems

ABSTRACT. Concurrent Constraint Programming (CCP) is a simple and powerful model for concurrency where agents interact by telling and asking constraints. Since their inception, CCP languages have been designed to have a strong connection to logic. In fact, the underlying constraint system can be built from a suitable fragment of intuitionistic (linear) logic --ILL-- and processes can be interpreted as formulas in ILL. Constraints as ILL formulas fail to represent accurately situations where "preferences" (called soft constraints), such as probabilities, uncertainty, or fuzziness, are present. In order to circumvent this problem, c-semirings have been proposed as algebraic structures for defining constraint systems where agents are allowed to tell and ask soft constraints. Nevertheless, in this case, the tight connection to logic and proof theory is lost. In this work, we give a proof-theoretical meaning to soft constraints: they can be defined as formulas in a suitable fragment of ILL with subexponentials (SELL) where subexponentials, ordered in a c-semiring structure, are interpreted as preferences. We hence achieve two goals: (1) obtain a CCP language where agents can tell and ask soft constraints, and (2) prove that the language in (1) has a strong connection with logic. Hence we keep a declarative reading of processes as formulas while providing a logical framework for soft-CCP-based systems. An interesting side effect of (1) is that one is also able to handle probabilities (and other modalities) in SELL, by restricting the use of the promotion rule for non-idempotent c-semirings. This finer way of controlling subexponentials allows for considering more interesting spaces and restrictions, and it opens the possibility of specifying more challenging computational systems.

17:00
Quantum state transformations and distributed temporal logic
SPEAKER: Luca Viganò
17:30
Combining non-determinism and context awareness in consistency restoration systems
SPEAKER: Anna Zamansky
16:30-18:00 Session 175M: Favorite reasoning procedures for preferences (PRUV)
Location: FH, Seminarraum 107
16:30
Learning Preferences for Collaboration
SPEAKER: Eva Armengol

ABSTRACT. In this paper we propose the acquisition of a set of preferences for collaboration between classifiers based on decision trees. A classifier uses $k$-NN with leave-one-out on its own knowledge base to generate a set of tuples with information about the object to be classified, the number of similar precedents, the maximum similarity, and whether or not it is a situation of collaboration. We consider that a classifier does not need to collaborate when it can reach the correct classification for an object by itself; otherwise it has to collaborate. These tuples are given as input to generate a decision tree from which a set of collaboration preferences is obtained.

17:00
Computing k-Rank Answers with Ontological CP-Nets

ABSTRACT. The tastes of a user can be represented in a natural way by using qualitative preferences. In this paper, we describe how to combine ontological knowledge with CP-nets to represent preferences in a qualitative way and enriched with domain knowledge. Specifically, we focus on conjunctive query (CQ) answering under CP-net-based preferences. We define k-rank answers to CQs based on the user’s preferences encoded in an ontological CP-net, and we provide an algorithm for k-rank answering CQs.

17:30
Multi-Attribute Decision Making using Weighted Description Logics
SPEAKER: Erman Acar

ABSTRACT. We introduce a framework based on Description Logics which can be used to encode and solve decision problems by combining DL inference services with utility theory to represent the preferences of the agent. The novelty of the approach is that we consider ABoxes as alternatives and weighted concept and role assertions as preferences in terms of possible outcomes. We discuss a relevant use case to show the benefits of the approach from the decision-theory point of view.

18:10-18:20 Session 178: Closing (ASPOCP)
Location: FH, Seminarraum 138A
18:30-19:45 Session 179: KR Great Moments Invited Talk : Sheila McIlraith (Situation Calculus: The last 15 years) (KR)
Location: EI, EI 7
18:30
Situation Calculus: The Last 15 Years

ABSTRACT. In 2001 Ray Reiter published his seminal book, Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. The book represented the culmination of 10 years of work by Reiter and his many collaborators investigating, formalizing, and extending the situation calculus, first introduced by John McCarthy in 1963 as a way of logically specifying dynamical systems. While researchers continue to extend the situation calculus, it has also seen significant scientific deployment to aid in the specification and implementation of a diversity of automated reasoning endeavors including diagnosis, web services composition and customization, and nonclassical automated planning. In this talk I will examine the important role the situation calculus has more recently been playing in explicating nuances in the logical specification and realization of some of these diverse automated reasoning tasks.