19TH INTERNATIONAL CONFERENCE ON LOGIC FOR PROGRAMMING, ARTIFICIAL INTELLIGENCE AND REASONING

PROGRAM FOR MONDAY, DECEMBER 16TH, 2013

09:00-10:00 Session 9: Invited talk by Volker Diekert (LPAR)
09:00
The Local Divisor Approach to First-Order Languages
ABSTRACT.

Algebraic techniques play an important role in relating logical formalisms with automata theory. Famous decidability results for various logics have been derived by the following scheme. Start with a logical sentence, translate the sentence to some NFA (or Büchi automaton), compute the transformation monoid, and verify some algebraic properties. The most classical example of the success of this method is the ability to check whether a given sentence of monadic second-order logic can be expressed in linear temporal logic (LTL).

Frequently, the actual transformations are complex and proofs are difficult to understand.  Local divisors were introduced in semigroup theory as a tool for proving results of this type by a refined induction method. This led to simpler proofs as well as new results.

In my lecture I will present the local divisor technique and give some applications and examples. One example is a simplified proof of the classic result that aperiodic languages are LTL-definable. Another example is the recent result showing that "bounded synchronization delay" and first-order definability (resp. LTL-definability) are equivalent for omega-languages. For this extension to omega-languages, the local divisor approach is the only known technique to date.
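
For orientation, the following is the standard definition of a local divisor (added by the editors; it is not part of the abstract). Given a monoid M and an element c of M, the set

    \[ M_c \;=\; cM \cap Mc, \qquad \text{with multiplication } xc \circ cy \;=\; xcy, \]

is a monoid with neutral element c, called the local divisor of M at c. If c is not a unit, then 1 does not belong to M_c, so M_c is strictly smaller than M; this size drop is what makes the construction suitable for the refined inductions mentioned above.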

10:00-10:30 Coffee Break
10:30-12:00 Session 10: Interpolation and SMT (LPAR)
10:30
Tree Interpolation in Vampire
ABSTRACT.

We describe new extensions of the Vampire theorem prover for computing tree interpolants. These extensions generalize Craig interpolation in Vampire and can also be used to derive sequence interpolants. We evaluated our implementation on a large number of examples over the theory of linear integer arithmetic and integer-indexed arrays, with and without quantifiers. Our experiments show that some examples could be solved only by our implementation and not by the other methods we compared against.
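
For readers unfamiliar with the notion, a commonly used formulation of the tree interpolation property is the following (our rendering, not taken from the paper). Given a tree whose nodes v are labelled with formulas F(v) such that the conjunction of all labels is unsatisfiable, a tree interpolant is a labelling I with

    \[ I(r) \equiv \bot \ \text{for the root } r, \qquad F(v) \wedge \bigwedge_{w\ \text{child of}\ v} I(w) \;\models\; I(v) \ \text{for every node } v, \]

where each I(v) contains only symbols occurring both in the subtree rooted at v and in the rest of the tree. Craig interpolants and sequence interpolants are the special cases of a two-node tree and a path, respectively.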

10:50
Instantiations, Zippers and EPR Interpolation
ABSTRACT.

This paper describes interpolation procedures for EPR. In principle, interpolation for EPR is simple: it is a special case of first-order interpolation. In practice, we would like procedures that take advantage of properties of EPR: EPR admits finite models, and those models can sometimes be described very compactly. Inspired by procedures for propositional logic that use models and cores, but not proofs, we develop a procedure for EPR that uses just models and cores.

11:00
Revisiting the Equivalence of Shininess and Politeness
SPEAKER: Filipe Casal
ABSTRACT.

The Nelson-Oppen method [1] allows the modular combination of quantifier-free satisfiability procedures of first-order theories into a quantifier-free satisfiability procedure for the union of the theories. However, this method requires the theories to have disjoint signatures and to be stably infinite. Due to the importance of the result, several attempts to extend the method to different and wider classes of theories were made. Recently, two different extensions of the Nelson-Oppen method were proposed, where the stably infinite requirement was replaced by another condition: in [2] it was required that all but one of the theories are shiny, and in [3] it was required that, when combining two theories, one of them is polite. The relationship between shiny and polite theories was analysed in [3]. Later, a stronger notion of polite theory was proposed, see [4], in order to overcome a subtle issue with the proof of the Nelson-Oppen method in [3]. In this paper, we analyse the relationship between shiny and strongly polite theories in the one-sorted case. We show that a shiny theory with a decidable quantifier-free satisfiability problem is strongly polite and we provide two different sufficient conditions for a strongly polite theory to be shiny. Based on these results, we derive a combination method for the union of a polite theory with an arbitrary theory.

[1] G. Nelson and D. C. Oppen. Simplification by cooperating decision procedures. ACM Transactions on Programming Languages and Systems, 1(2):245–257, 1979.

[2] C. Tinelli and C. G. Zarba. Combining nonstably infinite theories. Journal of Automated Reasoning, 34(3):209–238, 2005.

[3] S. Ranise, C. Ringeissen, and C. G. Zarba. Combining data structures with nonstably infinite theories using many-sorted logic. In Proceedings of the Fifth International Workshop on Frontiers of Combining Systems (FroCoS’2005), volume 3717 of LNAI, pages 48–64, 2005.

[4] D. Jovanovic and C. Barrett. Polite theories revisited. In Proceedings of the Seventeenth International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR’2010), volume 6397 of LNCS, pages 402–416, 2010.
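
As background for the combination results above, the following is a minimal sketch of the Nelson-Oppen exchange loop of [1] for two signature-disjoint, convex theories. The solver interface (a check method deciding satisfiability of a set of literals in its own theory) is hypothetical, and purification of mixed literals is assumed to have been done already.

    from itertools import combinations

    def nelson_oppen(solver1, solver2, lits1, lits2, shared_vars):
        """Decide satisfiability of lits1 (theory 1) plus lits2 (theory 2),
        which share only the variables in shared_vars. Each solver_i.check(lits)
        is assumed to decide T_i-satisfiability of a set of literals."""
        eqs = set()                                    # shared equalities propagated so far
        while True:
            if not solver1.check(lits1 | eqs) or not solver2.check(lits2 | eqs):
                return False                           # one component is already unsatisfiable
            new = set()
            for x, y in combinations(sorted(shared_vars), 2):
                if ('=', x, y) in eqs:
                    continue
                # For convex theories an equality is implied iff adding its
                # negation makes that component unsatisfiable.
                if not solver1.check(lits1 | eqs | {('!=', x, y)}) or \
                   not solver2.check(lits2 | eqs | {('!=', x, y)}):
                    new.add(('=', x, y))
            if not new:
                return True                            # nothing left to exchange: satisfiable
            eqs |= new                                 # propagate equalities and repeat

For non-convex theories the exchange involves disjunctions of equalities (or a case split over arrangements of the shared variables); the sketch omits this.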

11:20
Resourceful Reachability as HORN-LRA
SPEAKER: Jael Kriener
ABSTRACT.

The program verification tool SLAyer uses abstractions during analysis and relies on a solver for reachability to refine spurious counterexamples. In this context, we extract a reachability benchmark suite and evaluate methods for encoding reachability properties with heaps using Horn clauses over linear arithmetic. Our experimental evaluation establishes arithmetical encodings as a viable technique compared to bounded model checking over array-based systems, but only given significant pre-processing and controls over how arrays are eliminated.
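
To make the encoding style concrete, here is a toy illustration (ours, far simpler than the benchmarks extracted from SLAyer) of a reachability property written as Horn clauses over linear integer arithmetic and discharged with Z3's HORN solver via its Python API:

    from z3 import Function, IntSort, BoolSort, Ints, ForAll, Implies, And, BoolVal, SolverFor

    Inv = Function('Inv', IntSort(), BoolSort())   # unknown inductive invariant
    x, xp = Ints('x xp')

    s = SolverFor('HORN')
    s.add(ForAll([x], Implies(x == 0, Inv(x))))                                  # initial states
    s.add(ForAll([x, xp], Implies(And(Inv(x), x < 10, xp == x + 1), Inv(xp))))   # transition relation
    s.add(ForAll([x], Implies(And(Inv(x), x > 10), BoolVal(False))))             # error states must be unreachable
    print(s.check())   # sat: an invariant exists, so the error location is unreachable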

11:40
Proving Infinite Satisfiability
ABSTRACT.

We consider the problem of automatically disproving invalid conjectures over data structures such as lists and arrays over integers, in the presence of additional hypotheses over these data structures. Such invalid conjectures come up frequently in applications of automated reasoning to software verification and the analysis of data-rich state-based systems, for example. More formally, the disproving problem is to show that LIST ∪ ARRAY ∪ HYP does not entail a sentence CON, where LIST are the list axioms, ARRAY are the array axioms, HYP are the mentioned additional hypotheses, and CON is the conjecture in question. The obvious approach to disproving is to show satisfiability of LIST ∪ ARRAY ∪ HYP ∪ { ¬CON }, where ¬ is the negation operator, by means of a (complete) theorem prover. Unfortunately, current theorem proving technology is of limited usefulness for that: finite model finders cannot be used because the list axioms do not admit finite models, SMT solvers are typically incomplete on quantified formulas and face the same problem, and theorem provers based on saturation often do not terminate on satisfiable input (let alone completeness issues in the presence of arithmetic background theories).

We propose a different, conceptually simple approach based on refutational theorem proving. It consists in assuming that LIST ∪ ARRAY is satisfiable and providing templates for HYP that are guaranteed to preserve satisfiability of LIST ∪ ARRAY ∪ HYP. Then, disproving is done by proving that LIST ∪ ARRAY ∪ HYP entails ¬CON, i.e., that LIST ∪ ARRAY ∪ HYP ∪ { CON } is unsatisfiable.

The main point of the paper is to demonstrate the practical feasibility of our approach. By means of examples, we show that our template language covers many widely used cases. We also report on our rather positive experience with disproving sample conjectures using current theorem provers (Z3, SPASS+T and Beagle), and we compare their performance.
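
As a toy illustration of the refutational step described above (our own fragment, not the paper's template language): to disprove a conjecture CON it suffices to show that the axioms entail ¬CON, i.e. that axioms ∪ { CON } is unsatisfiable, which a refutational prover can establish. A hypothetical two-clause example over a fragment of LIST, using Z3's Python API with lists as an uninterpreted sort:

    from z3 import DeclareSort, Const, Function, IntSort, Int, ForAll, Exists, Solver

    L = DeclareSort('List')                       # lists as an uninterpreted first-order sort
    nil = Const('nil', L)
    cons = Function('cons', IntSort(), L, L)
    x, y = Int('x'), Const('y', L)

    s = Solver()
    s.add(ForAll([x, y], cons(x, y) != nil))      # LIST fragment: no cons cell equals nil
    s.add(Exists([x, y], cons(x, y) == nil))      # invalid conjecture CON
    print(s.check())                              # unsat: LIST entails not CON, so CON is disproved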

12:00-14:00 Lunch
14:00-15:30 Session 11 (LPAR)
14:00
Incremental Tabling for Query-Driven Propagation of Logic Program Updates
ABSTRACT.

We present a novel implementation technique for logic program updates that further exploits incremental tabling in logic programming (here using XSB Prolog). Propagation of fluent updates is controlled by initially keeping updates pending in the database, and then making active, by incremental assertions, only those up to the timestamp of an actual query. Such assertions automatically trigger system-implemented incremental tabling of fluents (and their negated complements), on the initiative of queries, with respect to a predefined upper time limit. The frame problem can then be dealt with by inspecting the table for the latest time a fluent is known to be assuredly true, i.e., the latest time it is not supervened by its negated complement, relative to the given query time. We adapt the dual program transformation to help propagate the complement of a fluent incrementally, in order to establish whether the fluent is still true at some time point or whether its complement is. This affords us a form of controlled, though automatic, system-level truth maintenance, up to the actual query time. Our approach reconciles high-level top-down deliberative reasoning about a query with autonomous low-level bottom-up world reactivity to ongoing updates, and it might be adopted elsewhere for reasoning in logic.
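
The timestamp-based lookup at the heart of this scheme can be pictured with a small stand-alone sketch (ours, in Python rather than XSB Prolog, and without the incremental tabling machinery): each fluent update is kept as a timestamped record, and a query at time t reads the latest assertion of the fluent or its negated complement up to t.

    from bisect import insort

    class FluentStore:
        def __init__(self):
            self.updates = {}   # fluent -> sorted list of (timestamp, holds: bool)

        def assert_update(self, fluent, timestamp, holds):
            insort(self.updates.setdefault(fluent, []), (timestamp, holds))

        def holds_at(self, fluent, query_time):
            # the latest assertion of the fluent or its complement up to query_time wins
            latest = [(ts, h) for ts, h in self.updates.get(fluent, []) if ts <= query_time]
            return latest[-1][1] if latest else False

    store = FluentStore()
    store.assert_update('door_open', 3, True)
    store.assert_update('door_open', 7, False)   # the negated complement supervenes at t=7
    print(store.holds_at('door_open', 5))        # True
    print(store.holds_at('door_open', 9))        # False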

14:20
On Minimality and Integrity Constraints in Probabilistic Abduction
ABSTRACT.

Abduction is a type of logical inference that can be successfully combined with probabilistic reasoning. However, the role of integrity constraints has not received much attention when performing logical-probabilistic inference. The contribution of our paper is a probabilistic abductive framework, based on the distribution semantics for normal logic programs, that handles negation as failure and integrity constraints in the form of denials. Integrity constraints are treated as evidence from the perspective of probabilistic inference. An implementation is provided that computes alternative (non-minimal) abductive solutions using an appropriately modified abductive system, and generates the probability of a query for given solutions. An example application of the framework is given, where gene network topologies are abduced according to biological expert knowledge to explain observed gene expressions. The example shows the practical utility of the proposed framework and suggests directions for future work.

14:40
Herbrand Theorems for Substructural Logics
SPEAKER: Petr Cintula
ABSTRACT.

Herbrand and Skolemization theorems are obtained for a broad family of first-order substructural logics. These logics typically lack equivalent prenex forms, a deduction theorem, and reductions of semantic consequence to satisfiability. The Herbrand and Skolemization theorems therefore take various forms, applying either to the left or right of the consequence relation, and to restricted classes of formulas.

15:00
An Epistemic Event Calculus for ASP-based Reasoning About Knowledge of the Past, Present and Future
SPEAKER: Rob Miller
ABSTRACT.

We present a generalisation of the Event Calculus, specified in classical logic and implemented in ASP, that facilitates reasoning about non-binary-valued fluents in domains with non-deterministic, triggered, concurrent, and possibly conflicting actions. We show via a case study how this framework may be used as a basis for a "possible-worlds" style approach to epistemic and causal reasoning in a narrative setting. In this framework an agent may gain knowledge about both fluent values and action occurrences through sensing actions, lose knowledge via non-deterministic actions, and represent plans that include conditional actions whose conditions may be initially unknown.

15:10
Defining Privacy is Supposed to be Easy
ABSTRACT.

Formally specifying privacy goals is not trivial. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. A subtle question is how we can be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal. To address this problem, we introduce in this paper a novel and declarative way to specify privacy goals, called α-β privacy, and relate it to static equivalence. This new approach is based on specifying two formulae α and β in first-order logic with Herbrand universes, where α reflects the intentionally released information and β includes the actual cryptographic (“technical”) messages the intruder can see. Then α-β privacy means that the intruder cannot derive any “non-technical” statement from β that he cannot derive from α already. We describe by a variety of examples how this notion can be used in practice. Even though α-β privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and obtain partial tool support for α-β privacy.
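
Restating the abstract's central condition as a formula (our rendering; consequence is taken with respect to the Herbrand-universe semantics mentioned above), α-β privacy holds if

    \[ \beta \models \gamma \;\Longrightarrow\; \alpha \models \gamma \qquad \text{for every formula } \gamma \text{ over the non-technical vocabulary,} \]

i.e., every payload-level statement the intruder can derive from the technical information β is already derivable from the intentionally released information α.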

15:30-16:00 Coffee Break
16:00-18:00 Session 12 (LPAR)
16:00
Relaxing Synchronization Constraints in Behavioral Programs
SPEAKER: Guy Katz
ABSTRACT.

In behavioral programming, a program consists of separate modules called behavior threads, each representing a part of the system’s allowed, necessary or forbidden behavior. An execution of the program is a series of synchronizations between these threads, where at each synchronization point an event is selected to be carried out. As a result, the execution speed is dictated by the slowest thread. We propose an eager execution mechanism for such programs, which builds upon the realization that it is often possible to predict the outcome of a synchronization point even without waiting for slower threads to synchronize. This allows faster threads to continue running uninterrupted, whereas slower ones catch up at a later time. Consequently, eager execution brings about increased system performance, better support for the modular design of programs, and the ability to distribute programs across several machines. It also makes it possible to apply behavioral programming to a variety of problems that were previously outside its scope. We illustrate the method with concrete examples, implemented in a behavioral programming framework in C++.
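
To fix intuitions, here is a toy rendering (ours, not the paper's C++ framework) of a synchronization point under the usual request/block idiom of behavioral programming, together with the eager observation that an event can be selected as soon as the threads yet to synchronize cannot affect the choice. For the sketch we assume the block sets of not-yet-synchronized threads are known in advance, and we omit wait-for sets.

    def select_event(synced, pending):
        """synced/pending: lists of dicts with 'request' and 'block' event sets."""
        requested = set().union(*(t['request'] for t in synced))
        blocked = set().union(*(t['block'] for t in synced))
        enabled = requested - blocked
        # Eager execution: pick an enabled event that no pending (slower) thread
        # can block; faster threads proceed and slower ones catch up later.
        possibly_blocked = set().union(*(t['block'] for t in pending))
        safe = enabled - possibly_blocked
        return next(iter(sorted(safe)), None)

    threads_synced = [{'request': {'hot'}, 'block': set()},
                      {'request': {'cold'}, 'block': {'hot'}}]
    threads_pending = [{'request': set(), 'block': {'dry'}}]
    print(select_event(threads_synced, threads_pending))   # 'cold'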

16:20
Putting Newton into Practice: A Solver for Polynomial Equations over Semirings
ABSTRACT.

We present the first implementation of Newton's method for solving systems of equations over ω-continuous semirings, building on earlier theoretical work. Such equation systems arise naturally, for instance, in the analysis of interprocedural programs or in provenance computation for Datalog. Our implementation provides an attractive alternative for computing their exact least solution in some cases where the ascending chain condition is not met and hence standard fixed-point iteration needs to be combined with some over-approximation (e.g., widening techniques) to terminate. We present a generic C++ library along with the main algorithms and analyze their complexity. Furthermore, we describe our implementation of the counting semiring based on semilinear sets. Finally, we discuss motivating examples as well as performance benchmarks.
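
For orientation, here is a scalar toy example (ours, not the paper's C++ library): over the semiring of non-negative reals, the Newton iteration for a polynomial fixed-point equation X = f(X) specializes to classical Newton's method applied to f(X) - X, and it reaches the least solution much faster than naive (Kleene) fixed-point iteration.

    # Toy equation X = 0.5*X**2 + 0.25 over the non-negative reals.
    def f(x):  return 0.5 * x * x + 0.25
    def df(x): return x                     # derivative of f

    def newton(x=0.0, steps=6):
        for _ in range(steps):
            x = x + (f(x) - x) / (1.0 - df(x))   # nu_{i+1} = nu_i + (1 - f'(nu_i))^{-1} (f(nu_i) - nu_i)
        return x

    def kleene(x=0.0, steps=6):
        for _ in range(steps):
            x = f(x)                              # plain fixed-point (Kleene) iteration
        return x

    print(newton())   # ~0.2928932, the least solution 1 - sqrt(0.5), reached to high precision
    print(kleene())   # converges to the same value, but is still off in the fourth decimal after 6 steps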

16:40
Solving Geometry Problems using a Combination of Symbolic and Numerical Reasoning
ABSTRACT.

We describe a framework that combines deductive, numeric, and inductive reasoning to solve geometric problems. Applications include the generation of geometric models and animations, as well as problem solving in the context of intelligent tutoring systems. Our novel methodology uses (i) deductive reasoning to generate a partial program from logical constraints, (ii) numerical methods to evaluate the partial program, thus creating geometric models which are solutions to the original problem, and (iii) inductive synthesis to read off new constraints that are then applied to one more round of deductive reasoning leading to the desired deterministic program. By combining these methods we were able to solve problems that none of the methods could solve by itself. The number of nondeterministic choices in a partial program provides a measure of how close a problem is to being solved and can thus be used in the educational context for grading and providing hints. We have successfully evaluated our methodology on 18 Scholastic Aptitude Test geometry problems and 11 ruler/compass-based geometry construction problems. Our tool solved these problems using an average of a few seconds per problem.

17:00
May-Happen-in-Parallel Analysis for Priority-based Scheduling
ABSTRACT.

A may-happen-in-parallel (MHP) analysis infers the set of pairs of program points that may execute in parallel along a program's execution. This is an essential piece of information for inferring more complex properties of concurrent programs: for example, deadlock freeness, termination, resource consumption and data race detection can greatly benefit from MHP relations to increase their accuracy. Previous MHP analyses have assumed a worst-case scenario by adopting a simplistic (non-deterministic) task scheduler which can select any available task. While the results of the analysis for a non-deterministic scheduler are obviously sound, they can lead to an overly pessimistic result. We present an MHP analysis for an asynchronous language with prioritized task buffers. Priority-based scheduling is arguably the most common scheduling strategy used in the implementation of concurrent languages. The challenge is to take task priorities into account at static analysis time in order to filter out infeasible MHP pairs.

17:20
An Event Structure Model for Probabilistic Concurrent Kleene Algebra
ABSTRACT.

We give a true-concurrent model for probabilistic concurrent Kleene algebra. The model is based on probabilistic event structures, which combine ideas from Katoen's work on probabilistic concurrency and Varacca's probabilistic prime event structures. We find that pomset-distribution inclusion is not a precongruence (as for interleaving models); we avoid this problem by giving a true-concurrent version of the well-known probabilistic simulation.

17:40
SAT Preprocessing for MaxSAT
SPEAKER: Anton Belov
ABSTRACT.

State-of-the-art algorithms for industrial instances of the MaxSAT problem rely on iterative calls to a SAT solver. Preprocessing is crucial for the acceleration of SAT solving, and the key preprocessing techniques rely on the application of resolution and subsumption elimination. Additionally, satisfiability-preserving clause elimination procedures are often used. Since MaxSAT computation typically involves a large number of SAT calls, we are interested in whether an input instance of the MaxSAT problem can be preprocessed up-front, i.e. prior to running the MaxSAT solver, rather than (or in addition to) during each iterative SAT solver call. The key requirement in this setting is that the preprocessing has to be sound, i.e. such that the solution can be reconstructed correctly and efficiently after the execution of a MaxSAT algorithm on the preprocessed instance. While, as we demonstrate in this paper, certain clause elimination procedures are sound for MaxSAT, it is well known that this is not the case for resolution and subsumption elimination. In this paper we show how to adapt these preprocessing techniques to MaxSAT. To achieve this we recast the MaxSAT problem in a recently introduced labelled-CNF framework, and show that within this framework the preprocessing techniques can be applied soundly. Furthermore, we show that MaxSAT algorithms restated in the framework have a natural implementation on top of an incremental SAT solver. We evaluate a prototype implementation of the MaxSAT algorithm WMSU1 in this setting, demonstrate the effectiveness of preprocessing, and show overall improvement with respect to non-incremental versions of the algorithm on some classes of problems.
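
To illustrate the labelling idea in isolation, here is a minimal sketch of the standard selector-variable construction (ours; it is not the paper's full labelled-CNF framework or the WMSU1 algorithm): every soft clause receives a fresh selector literal, after which clause-modifying preprocessing and the iterative SAT calls operate on the labelled clauses, and the MaxSAT objective becomes satisfying as many selectors as possible.

    def label_soft_clauses(hard, soft, n_vars):
        """hard/soft: lists of clauses (lists of non-zero ints, DIMACS-style);
        n_vars: number of variables used so far. Returns the labelled clauses
        and the list of selector variables (one fresh variable per soft clause)."""
        labelled = [list(c) for c in hard]            # hard clauses are kept unchanged
        selectors = []
        for clause in soft:
            n_vars += 1                               # fresh selector variable s_i
            selectors.append(n_vars)
            labelled.append(list(clause) + [-n_vars]) # soft clause C becomes C or not(s_i)
        # Asserting s_i (e.g. as a SAT-solver assumption) forces the original soft
        # clause; falsifying s_i relaxes it at unit cost. A MaxSAT algorithm then
        # maximizes the number of selectors that can be set to true.
        return labelled, selectors

    hard = [[1, 2]]            # (x1 or x2)
    soft = [[-1], [-2]]        # prefer not(x1) and not(x2): the optimum violates exactly one
    print(label_soft_clauses(hard, soft, n_vars=2))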