

08:45-09:05 Session 3

CP 2023 Opening

09:05-10:10 Session 4

Invited Talk 1, Great Hall (#1022) (Chair: Roland Yap)

10:10-10:40 Coffee Break

Great Hall Lower Gallery (#1022K)

10:40-12:10 Session 5A

Modelling 1, Great Hall (#1022) (Chair: Helmut Simonis)

Simplifying Step-wise Explanation Sequences
PRESENTER: Ignace Bleukx

ABSTRACT. Explaining constraint programs is useful to debug an unsatisfiable set of constraints, to understand why a given solution is optimal, or to understand the values in a unique solution. A recently proposed framework for step-wise explanation of constraint programs works well for explaining unique solutions step by step. It can also be used to explain step by step why a model is unsatisfiable, but this may create redundant steps and introduce superfluous information in explanation sequences. This paper proposes methods to simplify any step-wise explanation sequence, generating simple steps that together form a short, interpretable sequence. We propose an algorithm to greedily construct an initial sequence and two filtering algorithms that eliminate redundant steps and unnecessarily complex parts of explanation sequences. Experiments on diverse benchmark instances show that our techniques can significantly simplify step-wise explanation sequences.

Addressing problem drift in the UNHCR fund allocation problem

ABSTRACT. Optimisation models are concise mathematical representations of real-world problems, usually developed by modelling experts in consultation with domain experts. Typically, domain experts are only indirectly involved in the problem modelling process, providing information and feedback, and thus perceive the deployed model as a black box. Unfortunately, real-world problems “drift” over time: changes in the input data parameters and/or requirements cause the developed system to fail. Problem drift requires modelling experts to revisit and update deployed models. This paper presents a model we developed for the United Nations High Commissioner for Refugees (UNHCR) to help them allocate funds to different crises. We describe the initial model, which was successfully deployed, and the challenges due to problem drift that occurred over the following years. We use this case study to explore techniques for mitigating problem drift by including domain experts in the modelling process via techniques such as domain-specific languages.

Searching for smallest universal graphs and tournaments with SAT
PRESENTER: Stefan Szeider

ABSTRACT. A graph is induced k-universal if it contains all graphs of order k as an induced subgraph. For over half a century, the question of determining k-universal graphs with the smallest number of vertices has been studied. A related question asks for a smallest k-universal tournament containing all tournaments of order k.

This paper proposes and compares SAT-based methods for answering these questions for small values of k. Our methods scale to values for which it is not feasible to enumerate all candidate graphs up to isomorphism and check for each graph whether it is universal. Therefore we need to integrate isomorph-free generation and universality check. With our methods we can show, for instance, that an induced 7-universal graph has more than 16 vertices, although the number of all connected graphs on 16 vertices, modulo isomorphism, is a number with 22 digits. Our methods include static and dynamic symmetry breaking and lazy encodings that employ external subgraph isomorphism testing.
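The brute-force baseline that the paper's SAT-based methods avoid can be made concrete for the smallest interesting case. For k = 3, the isomorphism class of a 3-vertex graph is determined by its edge count, so a graph is induced 3-universal exactly when its 3-vertex induced subgraphs realise all four edge counts. The sketch below is illustrative only (not the authors' method, which must scale far beyond such enumeration):

```python
from itertools import combinations

def induced_3_universal(n, edges):
    """Brute-force check: does the graph on vertices 0..n-1 contain every
    3-vertex graph (up to isomorphism) as an induced subgraph?
    For order 3 the isomorphism class is determined by the edge count,
    so it suffices to see induced edge counts 0, 1, 2 and 3."""
    e = {frozenset(p) for p in edges}
    seen = set()
    for trio in combinations(range(n), 3):
        seen.add(sum(frozenset(p) in e for p in combinations(trio, 2)))
    return seen == {0, 1, 2, 3}

# A path on {0,1,2} plus a triangle on {3,4,5} realises all four classes:
induced_3_universal(6, [(0, 1), (1, 2), (3, 4), (4, 5), (3, 5)])  # True
# A triangle plus two isolated vertices never induces a 2-edge subgraph:
induced_3_universal(5, [(0, 1), (1, 2), (0, 2)])  # False
```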

10:40-12:10 Session 5B

Scheduling, East Common Room (#1034) (Chair: Chris Beck)

Partially Preemptive Multi Skill/Mode Resource-constrained Project Scheduling with Generalized Precedence Relations and Calendars

ABSTRACT. Multi-skill resource-constrained project scheduling problems (MS-RCPSP) have been the object of study for many years. Preemption is also an important feature of real-life scheduling models. However, very little research has investigated MS-RCPSPs with preemption, and even less has moved beyond academic benchmarks to real problem solving. In this paper we present a solution to these problems based on a hybrid method derived from large neighborhood search, incorporating constraint programming components tailored to complex scheduling constraints. The method is implemented in a new open-source Python library that makes it easy to reuse existing modelling languages and solvers. We evaluate the method on an industrial case study from aircraft manufacturing with additional complicating constraints, such as generalized precedence relations, resource calendars and partial preemption, on which the standard CP Optimizer solver is unable to provide solutions in reasonable time. The method also finds new best solutions on standard multi-skill project scheduling instances, performing better than a reference method from the literature.

An Efficient Constraint Programming Approach for Scheduling Interruptible Tasks Requiring Disjoint Disjunctive Resources

ABSTRACT. Constraint Programming has been widely, and very successfully, applied to scheduling problems.

However, the focus has been on uninterruptible tasks, and preemptive scheduling problems are typically harder for existing constraint solvers. Indeed, one usually needs to represent all potential task interruptions thus introducing many variables and symmetrical or dominated choices.

In this paper, building on mostly known results, we observe that a large class of preemptive disjunctive scheduling problems does not require explicitly modelling task interruptions. We then introduce a new constraint programming approach for this class of problems that significantly outperforms state-of-the-art dedicated approaches in our experimental results.

Horizontally Elastic Edge Finder Rule for Cumulative Constraint Based on Slack and Density

ABSTRACT. In this paper, we propose an enhancement of the filtering power of the edge finding rule based on the Profile and TimeTable data structures. The minimal slack and the maximum density are used to select potential task intervals for the edge finding rule. The strong detection rule of Fetgo Betmbe et al. is then applied to those intervals, which results in a new filtering rule named Slack-Density Horizontally Elastic Edge Finder. The new rule subsumes the edge finding rule and is incomparable to the Gingras and Quimper rule and the TimeTable edge finding rule of Vilím. A two-phase filtering algorithm, each phase of complexity O(n^2), is proposed for the new rule. Improvements based on the TimeTable are obtained by considering the fixed parts of external tasks during the horizontally elastic scheduling of a set of tasks. Experimental results on well-known benchmark suites of Resource-Constrained Project Scheduling Problems show that the proposed algorithms are competitive with state-of-the-art algorithms in terms of running time and search tree reduction.

12:10-13:30 Lunch Break

Great Hall Lower Gallery (#1022K)

13:30-14:30 Session 6A

Tutorial 1, Great Hall (#1022) (Chair: Gilles Pesant)

  • Vijay Ganesh (University of Waterloo), Machine Learning for Solvers
13:30-14:30 Session 6B

Search 1, East Common Room (#1034) (Chair: Neng-Fa Zhou)

Improving Local Search for Pseudo Boolean Optimization by Deep Optimization

ABSTRACT. Pseudo-Boolean optimization (PBO) is widely used to model combinatorial optimization problems, especially in real-world applications. Despite its significant importance in both theory and applications, there is little work on using local search to solve PBO. This paper develops a novel local search framework for PBO based on three main ideas. First, we design a two-level selection strategy to evaluate all candidate variables. Second, we propose a novel deep optimization strategy to perturb parts of the search space. Third, a sampling-based flipping method helps the algorithm escape local optima. Experimental results show that the proposed algorithms outperform three state-of-the-art PBO algorithms on most instances.

Towards More Efficient Local Search for Pseudo-Boolean Optimization

ABSTRACT. Pseudo-Boolean (PB) constraints are highly expressive, and many combinatorial optimization problems can be modeled using pseudo-Boolean optimization (PBO). Stochastic local search (SLS) is recognized as a powerful paradigm for solving combinatorial optimization problems, but the development of SLS for PBO is still in its infancy. In this paper, we develop an effective SLS algorithm for solving PBO, dubbed NuPBO, which introduces a novel scoring function for PB constraints and a new weighting scheme. We conduct experiments on a broad range of six public benchmarks, including three real-world benchmarks, a benchmark from the PB competition, an integer linear programming optimization benchmark, and a crafted combinatorial benchmark, comparing NuPBO against five state-of-the-art competitors: the recently proposed SLS PBO solver LS-PBO, two complete PB solvers, PBO-IHS and HYBRID, and two mixed integer programming (MIP) solvers, Gurobi and SCIP. NuPBO performs best on the three real-world benchmarks. On the other three benchmarks, it shows competitive performance against the state-of-the-art competitors and significantly outperforms LS-PBO, indicating that NuPBO greatly advances the state of the art in SLS for solving PBO.
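As shared background for the two abstracts above: a pseudo-Boolean constraint is a linear inequality over 0/1 variables, and a local-search move flips one variable and re-scores the constraints. The sketch below is generic and illustrative; the scoring functions and weighting schemes of the actual solvers are considerably more elaborate:

```python
# A pseudo-Boolean (PB) constraint: sum(c_i * x_i) >= b over 0/1 variables,
# represented as (coefficient dict, bound). Generic local-search primitives.

def violation(constraint, assignment):
    """Shortfall of sum(c*x) >= b under the assignment (0 if satisfied)."""
    coeffs, bound = constraint
    lhs = sum(c * assignment[v] for v, c in coeffs.items())
    return max(0, bound - lhs)

def flip_score(constraints, assignment, var):
    """Total decrease in violation obtained by flipping `var` (higher is better)."""
    before = sum(violation(c, assignment) for c in constraints)
    assignment[var] ^= 1
    after = sum(violation(c, assignment) for c in constraints)
    assignment[var] ^= 1  # undo the tentative flip
    return before - after

# Example: 2x + 3y >= 3 and y + z >= 1, all variables currently 0.
cons = [({'x': 2, 'y': 3}, 3), ({'y': 1, 'z': 1}, 1)]
a = {'x': 0, 'y': 0, 'z': 0}
flip_score(cons, a, 'y')  # 4: flipping y satisfies both constraints
flip_score(cons, a, 'x')  # 2: flipping x only reduces the first shortfall
```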

14:30-15:30 Session 7A

Decision Diagrams 1, Great Hall (#1022) (Chair: Jean-Charles Regin)

Boosting Decision Diagram-Based Branch-and-Bound by Pre-Solving with Aggregate Dynamic Programming

ABSTRACT. Discrete optimization problems expressible as dynamic programs can be solved by branch-and-bound with decision diagrams. This approach dynamically compiles bounded-width decision diagrams to derive both lower and upper bounds on unexplored parts of the search space, until they are all enumerated or discarded. Assuming a minimization problem, relaxed decision diagrams provide lower bounds through state merging while restricted decision diagrams obtain upper bounds by excluding states to limit their size. As the selection of states to merge or delete is done locally, it is very myopic to the global problem structure. In this paper, we propose a novel way to proceed that is based on pre-solving a so-called aggregate version of the problem with a limited number of states. The compiled decision diagram of this aggregate problem is tractable and can fit in memory. It can then be exploited by the original branch-and-bound to generate additional pruning and guide the compilation of restricted decision diagrams towards good solutions. The results of the numerical study we conducted on three combinatorial optimization problems show a clear improvement in the performance of DD-based solvers when blended with the proposed techniques. These results also suggest an approach where the aggregate dynamic programming model could be used in place of the relaxed decision diagrams altogether.
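The restriction mechanism described above (dropping states to bound the width of a layer, which for maximization yields a feasible solution and hence a primal bound) can be sketched for 0/1 knapsack. This is an illustrative toy, not the paper's solver:

```python
# Restricted decision diagram for 0/1 knapsack (maximization), as
# layer-by-layer dynamic programming with a width limit: when a layer
# exceeds max_width states, the least promising states are dropped,
# so the result is a feasible solution value (a primal bound).

def restricted_dd_knapsack(items, capacity, max_width):
    """items: list of (weight, value) pairs; returns the best value found."""
    layer = {0: 0}  # state: used weight -> best value reaching it
    for weight, value in items:
        nxt = {}
        for used, val in layer.items():
            # transition 1: skip the item
            if val > nxt.get(used, -1):
                nxt[used] = val
            # transition 2: take the item, if it fits
            if used + weight <= capacity and val + value > nxt.get(used + weight, -1):
                nxt[used + weight] = val + value
        # restriction: keep only the max_width highest-value states
        layer = dict(sorted(nxt.items(), key=lambda s: -s[1])[:max_width])
    return max(layer.values())

restricted_dd_knapsack([(2, 3), (3, 4), (4, 5)], 5, 8)  # 7 (exact: width never exceeded)
```

With a large enough width this is exact dynamic programming; tightening `max_width` trades bound quality for memory, which is exactly the dial that restricted diagrams expose.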

SAT-Based Learning of Compact Binary Decision Diagrams for Classification

ABSTRACT. Decision trees are popular classifiers due to their interpretability and performance. However, the number of splits in decision trees grows exponentially with tree depth incurring computational cost and increased data fragmentation, hindering interpretability, and restricting their applicability to memory-constrained hardware. In contrast, binary decision diagrams (BDD) utilize a linear number of splits. Recent work uses BDDs as compact and accurate classifiers, but only focuses on binary datasets and does not explicitly optimize diagram size. We present a SAT encoding for learning accurate and compact multi-terminal BDDs over numerical features. We learn a tree BDD and model the size of the reduced diagram as a secondary objective that can be optimized in a one- or two-stage optimization scheme. Alternatively, we directly learn diagrams via multi-dimensional splits. Our encoding outperforms the baselines in accuracy and compactness, illustrating the size-accuracy trade-off and highlighting that our compact decision-diagram classifiers can enhance generalization.

14:30-15:30 Session 7B

Symmetries, East Common Room (#1034) (Chair: Jimmy Lee)

Symmetries for cube-and-conquer in finite model finding

ABSTRACT. The cube-and-conquer paradigm enables massive parallelization of SAT solvers, which has proven to be crucial in solving highly combinatorial problems. In this paper, we apply the paradigm in the context of finite model finding, where we show that isomorphic cubes can be discarded since they lead to isomorphic models. However, we are faced with the complication that a well-known technique, the Least Number Heuristic (LNH), already exists in finite model finders to effectively prune (some) isomorphic models from the search. Therefore, it needs to be shown that isomorphic cubes still can be discarded when the LNH is used. The presented ideas are incorporated into the finite model finder Mace4, where we demonstrate significant improvements in model enumeration.

Using Canonical Codes to Efficiently Solve the Benzenoid Generation Problem with Constraint Programming

ABSTRACT. The Benzenoid Generation Problem (BGP) aims at generating all benzenoid molecules that satisfy some given properties. This problem has important applications in chemistry, and Carissan et al. (2021) have shown that Constraint Programming (CP) is well suited for modelling it because properties defined by chemists are easy to express by means of constraints. Benzenoids are described by hexagon graphs, and a key point for an efficient enumeration of these graphs is invariance to rotations and symmetries. In this paper, we introduce canonical codes that uniquely characterise hexagon graphs while being invariant to rotations and symmetries. We show how to define these codes by means of constraints. We also introduce a global constraint for ensuring that codes are canonical, and a global constraint for ensuring that a pattern is included in a code. We experimentally compare our new CP model with the CP-based approach of Carissan et al. (2021), and we show that it has better scale-up properties.
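Generically, a canonical code maps every member of an isomorphism class to the same string, so two structures are isomorphic iff their codes are equal. The brute-force version below (lexicographically smallest adjacency bit string over all vertex orderings; not the paper's hexagon-graph codes) shows the idea, and also why dedicated codes are needed: it enumerates all n! orderings.

```python
from itertools import combinations, permutations

def canonical_code(n, edges):
    """Lexicographically smallest upper-triangle adjacency bit string over
    all vertex relabellings: a brute-force canonical form (tiny graphs only)."""
    e = {frozenset(p) for p in edges}
    best = None
    for perm in permutations(range(n)):
        bits = ''.join(
            '1' if frozenset((perm[i], perm[j])) in e else '0'
            for i, j in combinations(range(n), 2))
        if best is None or bits < best:
            best = bits
    return best

# Two relabellings of the 3-vertex path get the same code;
# the triangle gets a different one.
canonical_code(3, [(0, 1), (1, 2)])          # '011'
canonical_code(3, [(2, 0), (0, 1)])          # '011'
canonical_code(3, [(0, 1), (1, 2), (0, 2)])  # '111'
```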

15:30-16:00Coffee Break

Great Hall Lower Gallery (#1022K)

16:00-17:00 Session 8A

Applications 1, Great Hall (#1022) (Chair: Mats Carlsson)

Constraint Programming with External Worst-Case Traversal Time Analysis

ABSTRACT. The allocation of software functions to processors under compute-capacity and network-link constraints is an important optimization problem in the field of embedded distributed systems. We present a hybrid approach to the allocation problem that combines a constraint solver with a worst-case traversal time (WCTT) analysis verifying the network timing constraints. The WCTT analysis is implemented as an industrial black-box program, which makes tight integration with constraint solving challenging. We contribute a new multi-objective constraint solving algorithm for integrating external under-approximating functions, such as the WCTT analysis, with constraint solving, and we prove its correctness. We apply this new algorithm to the allocation problem in the context of automotive service-oriented architectures based on Ethernet networks, and provide a new dataset of realistic instances to evaluate our approach.

A CP Approach for the Liner Shipping Network Design Problem

ABSTRACT. The liner shipping network design problem consists in determining, for a shipowner, which maritime lines (rotations serving a set of ports) to open, and how to assign container ships of suitable sizes to those lines so that all container flows can be carried.

In this paper, we propose a constraint programming model of this problem. We then present a preliminary study of solving it with a state-of-the-art solver, namely the OR-Tools CP-SAT solver.

16:00-17:00 Session 8B

Constraints 1, East Common Room (#1034) (Chair: Stefan Szeider)

Binary Constraint Trees and Structured Decomposability

ABSTRACT. A binary constraint tree (BCT; Wang and Yap 2022) is a normalized binary CSP whose constraint graph is a tree. A BCT constraint is a constraint represented with a BCT in which some of the variables may be hidden (i.e., existentially quantified and used only for internal representation). Structured decomposable negation normal forms (SDNNF) were introduced by Pipatsrisawat and Darwiche (2008) as a restriction of decomposable negation normal forms (DNNF). Both DNNFs and SDNNFs have been studied in the area of knowledge compilation. In this paper we show that these two notions are polynomially equivalent: a BCT constraint can be represented with an SDNNF of polynomial size and, conversely, a constraint that can be represented with an SDNNF can be represented as a BCT constraint of polynomial size. This generalizes the result of Wang and Yap (2022) showing that a multivalued decision diagram (MDD) can be represented with a BCT. Moreover, our result provides a full characterization of binary constraint trees using a language that is well studied in the area of knowledge compilation. In addition, we show that if a binary CSP on n variables with domain sizes bounded by d has a constraint graph of treewidth k, it has an encoding with a BCT on O(n) variables with domains of size O(d^{k+1}). This allows us to compile any binary CSP to an SDNNF of size parameterized by d and k.

Distribution Optimization in Constraint Programming

ABSTRACT. Stochastic Constraint Programming introduces stochastic variables following a probability distribution to model uncertainty. In the classical setting, probability distributions are given and constant. We propose a framework in which random variables are defined by a set of possible distributions, of which exactly one should be selected. A solution is obtained when all variable distributions are assigned and all decision variables are assigned too. In such a setting, a constraint on random variables limits the possible distributions its random variables may take. We generalize the notion of chance as the probability of satisfaction of a constraint given the variable distributions, which we call probabilization. Probabilization can be seen as a generalization of reification in a random setting. We define the minimal arithmetic needed to work with stochastic variables having a variable distribution. Using the introduced representation, our framework can in theory save an exponential number of decisions, and it represents problems that were previously not representable with finite integer domains. Finally, we model and solve two industrial problems that require this extension – virtual network configuration and assignment of chemical deliveries – and show improvements in the quality of solutions and in speed.
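For a finite distribution, the probabilization of a constraint reduces to summing the mass of the values that satisfy it. A minimal sketch with hypothetical names, not the paper's framework:

```python
# Probabilization sketch: given a chosen finite distribution for a
# stochastic variable, the probability that a constraint holds is the
# total mass of the satisfying values. Names here are illustrative.

def probabilization(distribution, constraint):
    """distribution: dict mapping value -> probability (summing to 1);
    constraint: predicate on values. Returns P(constraint holds)."""
    return sum(p for value, p in distribution.items() if constraint(value))

# If the selected distribution for x is {1: 0.25, 2: 0.5, 3: 0.25},
# the constraint x >= 2 holds with probability 0.75.
probabilization({1: 0.25, 2: 0.5, 3: 0.25}, lambda v: v >= 2)  # 0.75
```

A distribution-optimization model would then choose, among a variable's candidate distributions, one under which such probability requirements are met.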


CP 2023 Welcome Reception, Great Hall (#1022)