
08:45-10:15 Session 86G: Opening and Invited Talk (joint with QUANTIFY)
Location: MB, Aufbaulabor
LaSh and QUANTIFY Openings
Instantiation-based reasoning, EPR encodings and all that

ABSTRACT.  Instantiation-based reasoning aims to combine the expressivity of first-order logic with the efficiency of propositional and SMT reasoning. Most instantiation-based methods are complete for first-order logic and are especially efficient for the effectively propositional fragment (EPR). It turns out that many problems such as model checking, bit-vector reasoning, finite model finding and QBF solving can be succinctly encoded into the EPR fragment. In this talk I will discuss recent developments in instantiation-based reasoning, EPR-based encodings and related challenges.
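To make the "effectively propositional" idea concrete, here is a small illustrative sketch (my own toy instance and encoding, not taken from the talk): because EPR formulas have no function symbols, a universally quantified clause set can be fully instantiated over its finite set of constants and then solved propositionally.

```python
from itertools import product

# Toy EPR clause set over constants {a, b}:
#   forall X, Y:  P(X) or not Q(X, Y)
#   Q(a, b)
# Literals are (sign, predicate, args); uppercase terms are variables.
constants = ["a", "b"]
clauses = [
    [(True, "P", ("X",)), (False, "Q", ("X", "Y"))],
    [(True, "Q", ("a", "b"))],
]

def ground(clauses, constants):
    """Instantiate every variable by every constant (the EPR grounding)."""
    out = []
    for clause in clauses:
        vars_ = sorted({t for _, _, args in clause for t in args if t.isupper()})
        for combo in product(constants, repeat=len(vars_)):
            sub = dict(zip(vars_, combo))
            out.append([(s, p, tuple(sub.get(t, t) for t in args))
                        for s, p, args in clause])
    return out

def satisfiable(ground_clauses):
    """Naive propositional satisfiability check over the ground atoms."""
    atoms = sorted({(p, a) for c in ground_clauses for _, p, a in c})
    for bits in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        if all(any(model[(p, a)] == s for s, p, a in c) for c in ground_clauses):
            return True
    return False

gc = ground(clauses, constants)
print(len(gc), satisfiable(gc))  # 5 ground clauses; satisfiable
```

Real instantiation-based provers of course avoid this brute-force enumeration, but the sketch shows why EPR grounding is finite: the Herbrand universe is just the constant set.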


10:15-10:45 Coffee Break
10:45-13:00 Session 90AN: LaSh morning 2: Contributed Talks
Location: MB, Aufbaulabor
Meta-level Representations in the IDP Knowledge Base System: Bootstrapping Inference Engine Development

ABSTRACT. Declarative systems aim at solving tasks by running inference engines on a specification, freeing their users from having to specify how a task should be tackled. To provide such functionality, declarative systems themselves apply complex reasoning techniques, and, as a consequence, the development of such systems can be laborious work. In this paper, we demonstrate that the declarative approach can be applied to develop such systems, by tackling the tasks solved inside a declarative system declaratively. In many cases this requires a meta-level representation of those specifications. Furthermore, using the language of the system itself for the meta-level representation opens the door to bootstrapping: an inference engine can be implemented using the very inference it performs.

One such declarative system is the IDP knowledge base system, based on the language FO(.)IDP, a rich extension of first-order logic. In this paper, we discuss how FO(.)IDP can support meta-level representations in general and which language constructs make those representations even more natural. Afterwards, we show how meta-FO(.)IDP can be applied to bootstrap its model expansion inference engine. We discuss the advantages of this approach: the resulting program is easier to understand, easier to maintain and more flexible.

Logical Machinery of Heuristics (Preliminary Report)

ABSTRACT. This paper is a preliminary report on a new declarative language that allows the programmer to specify heuristics (problem-specific inference methods) about the problem that is being solved. The heuristics that are defined in our language control and/or change the behavior of the underlying solver. In this way, we are able to attain problem-specific solvers that benefit from both the state-of-the-art general inference methods available in general-purpose solvers, and the problem-specific reasoning methods that are applicable only to this specific problem.

Modeling High School Timetabling as Partial Weighted maxSAT

ABSTRACT. High School Timetabling (HSTT) is a well-known and widespread problem. The problem consists of coordinating resources (e.g. teachers, rooms), time slots and events (e.g. lectures) with respect to various constraints. Unfortunately, HSTT is hard to solve, and even finding a feasible solution for simple variants of HSTT has been proven to be NP-complete. In addition, timetabling requirements vary from country to country, so many variations of HSTT exist. Recently, researchers have proposed a general HSTT problem formulation in an attempt to standardize the problem across different countries and school systems.

In this paper, we provide for the first time a detailed modeling of the general HSTT as SAT, in which all constraints are treated as hard constraints. In order to take soft constraints into account, we extend this model to Partial Weighted maxSAT. In addition, we present experimental results and compare with other approaches, using both artificial and real-world instances, all taken from the Third International Timetabling Competition 2011 benchmark repository. Our approach gives competitive results and in some cases outperforms the winning algorithm from the competition.
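The hard/soft split at the heart of partial weighted MaxSAT can be illustrated with a toy instance (the two-event, two-slot problem and the encoding below are my own illustration, not the authors'): hard clauses must all hold, and each violated soft clause adds its weight to the cost being minimized.

```python
from itertools import product

EVENTS, SLOTS = 2, 2

def var(e, t):
    """DIMACS-style 1-based variable: event e is placed in slot t."""
    return e * SLOTS + t + 1

n_vars = EVENTS * SLOTS

hard = []
for e in range(EVENTS):
    hard.append([var(e, t) for t in range(SLOTS)])   # event gets some slot
    hard.append([-var(e, 0), -var(e, 1)])            # ... and at most one
for t in range(SLOTS):
    hard.append([-var(0, t), -var(1, t)])            # shared teacher: no clash

soft = [(2, [var(0, 0)])]   # weight 2: prefer event 0 in slot 0

def holds(clause, model):
    """A clause holds if some literal is true under the model."""
    return any(model[abs(l) - 1] == (l > 0) for l in clause)

# Brute-force optimum (a stand-in for a real MaxSAT solver).
best = None
for bits in product([False, True], repeat=n_vars):
    if all(holds(c, bits) for c in hard):
        cost = sum(w for w, c in soft if not holds(c, bits))
        best = cost if best is None else min(best, cost)

print("optimal cost:", best)  # 0: event 0 -> slot 0, event 1 -> slot 1
```

A real encoding would of course hand the hard and weighted soft clauses to a MaxSAT solver rather than enumerate assignments; the point is only the shape of the objective.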

Learning Optimal Bounded Treewidth Bayesian Networks via Maximum Satisfiability

ABSTRACT. Bayesian network structure learning is the well-known computationally hard problem of finding a directed acyclic graph structure that optimally describes given data. A learned structure can then be used for probabilistic inference. While exact inference in Bayesian networks is in general NP-hard, it is tractable in networks with low treewidth. This provides good motivation for developing algorithms for the NP-hard problem of learning optimal bounded treewidth Bayesian networks (BTW-BNSL). In this work, we develop a novel score-based approach to BTW-BNSL, based on casting BTW-BNSL as weighted partial maximum satisfiability. We demonstrate empirically that the approach scales notably better than a recent exact dynamic programming algorithm for BTW-BNSL.

Submitted to LaSh’14 for presentation only.

Conference version of this paper appears as Jeremias Berg, Matti Järvisalo, and Brandon Malone. Learning Optimal Bounded Treewidth Bayesian Networks via Maximum Satisfiability. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS 2014), volume 33 of JMLR Workshop and Conference Proceedings, pages 86-95. JMLR, 2014.

Depart for Workshop Lunch

ABSTRACT. The LaSh workshop lunch is included in the registration fee.  Contact the workshop organizers if you are not registered but wish to join the lunch.

13:00-14:30 Lunch Break
14:30-16:10 Session 96AQ: LaSh afternoon 1: Invited Talks - Lazy Grounding
Location: MB, Aufbaulabor
Laziness is next to Godliness
SPEAKER: Peter Stuckey

ABSTRACT. Solving combinatorial optimization problems is hard. There are many techniques which we need to combine to successfully solve these problems, including inference, search, relaxation, and decomposition. Any method to solve combinatorial optimization problems must trade off how much time to spend in search and how much time to spend on methods to reduce the search. In this talk I will propose that we should spend the least amount of time possible on methods to reduce search, until there is evidence that it will be useful. I will demonstrate how this "laziness" principle can be highly effective in solving combinatorial optimization problems.

Lazy Model Expansion: Interleaving Grounding with Search
SPEAKER: Bart Bogaerts

ABSTRACT. Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like Answer Set Programming (ASP), FO(·) and the CSP modeling language Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory.

An important bottleneck is the blowup of the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified.

The theoretical framework captures existing approaches for tackling the grounding bottleneck such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.


16:00-16:30 Coffee Break
16:30-18:00 Session 99AP: LaSh afternoon 2: Invited Talks - Answer Set Programming
Location: MB, Aufbaulabor
The D-FLAT System for Dynamic Programming on Tree Decompositions

ABSTRACT. Complex reasoning problems over large amounts of data pose a great challenge for computer science. To overcome the obstacle of high computational complexity, exploiting structure by means of tree decompositions has proved to be effective in many cases. However, the implementation of suitable efficient algorithms is often tedious. D-FLAT is a software system that combines the logic programming language Answer Set Programming with problem solving on tree decompositions and can serve as a rapid prototyping tool for such algorithms. Since we initially proposed D-FLAT, we have made major changes to the system, improving its range of applicability and its usability. In this paper, we present the system resulting from these efforts.

Cross-Translating Answer Set Programs
SPEAKER: Tomi Janhunen

ABSTRACT. Answer set programming (ASP) is a declarative programming paradigm where computation takes place in two phases. The rules of a logic program are first instantiated, after which the answer sets of the resulting ground program are computed. There are two mainstream approaches to the search for answer sets: either by using native answer set solvers or by translating ground rules into other kinds of constraints and using other off-the-shelf solver technology to perform the search. Over the years, we have developed a number of translations from ASP to Boolean satisfiability (SAT), difference logic, bit-vector logic, and mixed integer programming. If a target formalism does not support extended rule types, they have to be normalized before translation. Our latest translation of ASP into SAT modulo acyclicity enables a cross-compilation architecture where the back-end formalism is determined by the last translation step generating Clark's completion extended by acyclicity constraints. In this invited talk, I describe the resulting cross-translation framework for ASP and discuss lessons learned from the ten-year development of the framework.
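As a small illustration of the Clark's completion step mentioned above (the example program and the supported-model check are my own sketch, not the talk's tool chain): completion makes each atom equivalent to the disjunction of its rule bodies, and an atom with no rule becomes false. SAT modulo acyclicity additionally rules out cyclic support, which this toy example does not need.

```python
# Ground normal program, head -> list of bodies; a body is a pair
# (positive atoms, negated atoms):
#   p :- q.        q :- not r.        (no rule for r)
program = {
    "p": [(["q"], [])],
    "q": [([], ["r"])],
    "r": [],
}

def completion_holds(model, program):
    """True iff the model satisfies Clark's completion: every atom's
    truth value equals the truth value of 'some body is true'."""
    def body_true(pos, neg):
        return all(model[a] for a in pos) and not any(model[a] for a in neg)
    return all(model[h] == any(body_true(p, n) for p, n in bodies)
               for h, bodies in program.items())

m = {"p": True, "q": True, "r": False}
print(completion_holds(m, program))  # prints True
```

Here {p, q} is the unique answer set, and it is also the unique model of the completion; in general the two coincide only for programs without cyclic positive dependencies, which is exactly the gap the acyclicity constraints close.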