APMOD CONFERENCE 2014: INTERNATIONAL CONFERENCE ON APPLIED MATHEMATICAL OPTIMIZATION AND MODELLING 2014

PROGRAM FOR WEDNESDAY, APRIL 9TH

08:30-08:40 Session 1: Opening Session
08:40-09:40 Session 2: Plenary talk
08:40
Robust Optimisation and its Guarantees
SPEAKER: Berç Rustem
ABSTRACT.

Robust optimization provides solutions to decision making under uncertainty. We introduce the problem by considering the formulation of robust policy in the presence of rival models purporting to represent the underlying economic system. We then discuss a collection of models for robust decision making in finance, starting with the robust portfolio optimization problem with uncertain return mean and second moments. We review the robust equal risk contribution and the robust omega ratio problems. Further performance guarantees can be injected by integrating options within a robust framework. This can also be applied to currency portfolios, employing additional constraints to safeguard against arbitrage. We also discuss extensions to multi-stage decision models. The basic paradigm that ensures robustness is minimax: the determination of the optimal decision in view of the worst-case scenario. Most models can be solved as straightforward mathematical programming problems by dualising the inner optimization problem. Finally, we consider robust option portfolios, for minimizing the maximum hedge error, that require specialized minimax algorithms.
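As a minimal, hypothetical sketch of the minimax paradigm (not taken from the talk): for a linear expression under box uncertainty, dualising the inner worst-case maximisation yields a simple closed form, which the snippet below checks against brute-force vertex enumeration.

```python
from itertools import product

def worst_case_closed_form(a_bar, delta, x):
    """Worst-case (maximum) value of a.x over the box |a_i - a_bar_i| <= delta_i.
    Dualising the inner maximisation gives the closed form a_bar.x + delta.|x|."""
    return sum(ab * xi for ab, xi in zip(a_bar, x)) + \
           sum(d * abs(xi) for d, xi in zip(delta, x))

def worst_case_enumeration(a_bar, delta, x):
    """Brute force: the maximum of a linear function over a box is attained
    at one of the 2^n vertices."""
    best = float("-inf")
    for signs in product((-1, 1), repeat=len(x)):
        a = [ab + s * d for ab, s, d in zip(a_bar, signs, delta)]
        best = max(best, sum(ai * xi for ai, xi in zip(a, x)))
    return best

a_bar, delta, x = [1.0, -2.0, 0.5], [0.3, 0.1, 0.2], [2.0, -1.0, 4.0]
assert abs(worst_case_closed_form(a_bar, delta, x)
           - worst_case_enumeration(a_bar, delta, x)) < 1e-12
```

The same dualisation turns a robust linear constraint into an ordinary convex constraint, which is why many robust models reduce to standard mathematical programs.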

09:45-11:00 Session 3A: Handling Uncertain Data in Decision Support
09:45
Robust optimization in Xpress Mosel
ABSTRACT.

We present the new robust optimization modelling module for FICO Xpress Mosel. Focusing on the design and the decisions behind the language syntax through example models, we also explain the current capabilities for reformulation to robust counterparts and some algorithmic details.

Submitted to the invited session 'Handling uncertain data in decision support'.

10:10
Applying oracles of on-demand accuracy in two-stage stochastic programming - a computational study
ABSTRACT.

Traditionally, two variants of the L-shaped method based on Benders' decomposition principle are used to solve two-stage stochastic programming problems: the single-cut and the multi-cut version. The concept of an oracle with on-demand accuracy was originally proposed in the context of bundle methods for unconstrained convex optimization to provide approximate function data and subgradients. In this paper, we show how a special form of this concept can be used to devise a variant of the L-shaped method that integrates the advantages of the traditional variants while avoiding their disadvantages. On a set of 104 test problems, we compare and analyze parallel implementations of regularized and unregularized versions of the algorithms. The results indicate that significant speed-ups in computation time can be achieved by applying the concept of on-demand accuracy.
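To illustrate the cut machinery underlying the single-cut and multi-cut L-shaped variants, here is a hypothetical toy example (a one-dimensional first stage with simple-recourse scenario subproblems, not the paper's test set): the multi-cut version keeps one optimality cut per scenario, while the single-cut version aggregates them into one cut on the expected recourse.

```python
def Q(x, q, d):
    """Scenario recourse value: penalty q per unit of unmet demand d."""
    return q * max(0.0, d - x)

def cut(x0, q, d):
    """Optimality cut (value, subgradient) supporting Q(., q, d) at x0."""
    g = -q if x0 < d else 0.0
    return Q(x0, q, d), g

scenarios = [(2.0, 5.0), (3.0, 8.0), (1.0, 3.0)]  # (q_s, d_s), equal probability
x0 = 4.0

# Multi-cut: one cut per scenario; single-cut: their (probability-weighted) average.
multi = [cut(x0, q, d) for q, d in scenarios]
single_val = sum(v for v, _ in multi) / len(multi)
single_grad = sum(g for _, g in multi) / len(multi)

for x in [0.0, 2.5, 4.0, 6.0, 10.0]:
    expected_Q = sum(Q(x, q, d) for q, d in scenarios) / len(scenarios)
    # each per-scenario cut under-estimates its own Q_s ...
    for (v, g), (q, d) in zip(multi, scenarios):
        assert v + g * (x - x0) <= Q(x, q, d) + 1e-12
    # ... and the aggregated cut under-estimates the expected recourse
    assert single_val + single_grad * (x - x0) <= expected_Q + 1e-12
```

The trade-off the paper targets: multi-cut master problems carry more information per iteration but grow larger, while single-cut masters stay small but converge in more iterations.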

10:35
Computational aspects of risk-averse optimization in two-stage stochastic models
SPEAKER: Csaba Fabian
ABSTRACT.

We generalize the on-demand accuracy approach of Oliveira and Sagastizabal to constrained convex problems, adapting the constrained level method of Lemarechal, Nemirovski and Nesterov to this framework. The new method is then applied to two-stage risk-averse problems. In this talk, we consider the CVaR-constrained model of Ahmed, and the stochastic ordering-constrained model of Dentcheva and Martinez. (The latter model is re-formulated using an appropriate constraint function.) We present a computational study on two-stage stochastic programming problems with a CVaR constraint for the recourse.

09:45-11:00 Session 3B: Search Games
09:45
Submodular search games
ABSTRACT.

Search games are usually defined on graphs or metric spaces. They have a geometric flavour. In this talk I will consider search games from an operational point of view, by considering the cost of the search. This leads to a type of convex game, which generalizes the expanding search games that have recently been introduced by Alpern and Lidbetter.

This is joint work with David Ramsey and Ken Kikuta.

10:10
Spatial Dispersion of Simple Agents over a Network
ABSTRACT.

For various possible reasons (including animal foraging, guarding a museum or search for an object) a team of agents may wish to disperse themselves over the nodes of a network. Alpern and Reyniers (2002) considered how n non-communicating agents can most quickly locate themselves evenly over a set of n locations. The current work, joint with Steve Alpern, adds a network structure to the locations (nodes), in that agents can only stay still or move to an adjacent location in each period. We assume they stay still with a given probability (which may depend on the population of their current node) and otherwise move randomly. We also consider the use of territoriality, where an agent who has been at his current node the longest comes to ‘own’ it and stays on it forever. The aim in all the problems we consider is to minimize the expected time to achieve full dispersal.
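A simplified variant of such a dispersion process (uniform stay probability, no territoriality — an illustrative assumption, not the authors' exact model) can be simulated in a few lines:

```python
import random

def dispersal_time(n, p_stay=0.5, rng=random, max_steps=100000):
    """Simulate n simple agents on a cycle of n nodes.  Each period an agent
    stays put with probability p_stay and otherwise moves to a uniformly
    random neighbour.  Returns the number of periods until every node
    holds exactly one agent (full dispersal)."""
    pos = [0] * n                      # all agents start on the same node
    for t in range(1, max_steps + 1):
        pos = [p if rng.random() < p_stay
               else (p + rng.choice((-1, 1))) % n
               for p in pos]
        if len(set(pos)) == n:         # one agent per node
            return t
    raise RuntimeError("no dispersal within max_steps")

random.seed(0)
times = [dispersal_time(4) for _ in range(200)]
mean_time = sum(times) / len(times)   # Monte Carlo estimate of expected dispersal time
assert all(t >= 2 for t in times)     # 4 agents on one node cannot disperse in 1 step
```

Monte Carlo estimates of the expected dispersal time like this one are a natural baseline against which smarter stay-probability or territoriality rules can be compared.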

10:35
Optimal search for a small (or well hidden) object
ABSTRACT.

In traditional models of search games, an immobile Hider picks a point in some search space and a Searcher picks a constant speed trajectory in the space with the aim of minimising the expected time to reach the Hider. However, in practice a Searcher's ability to detect a Hider may depend on the speed at which he is travelling. For example, when searching for a contact lens there are two possible modes of travel: a fast walking speed and a slow searching speed. Equally, an explosives expert searching for an IED may be able to move from place to place quickly in a vehicle, but in order to detect the IEDs he must get out of his vehicle and move at a slower pace. Hence we adapt the traditional model to allow the Searcher to move at either a slow speed at which he is able to detect the Hider, or a fast speed at which he cannot. We view this as a zero sum game and show that the solution of the game is complicated even if it is played on a single arc. We give the solution to this, and to the game played on trees and other networks. We also consider what happens if the Searcher is able to detect the Hider with a small probability when he is travelling at the fast speed.

09:45-11:00 Session 3C: Combinatorial Optimization I
09:45
Workforce Allocation for IT Services using Stable Matching
ABSTRACT.

Workforce allocation to project requirements in an IT service is an important task. Improper allocation results in poor on-job performance, an increase in attrition rate, and financial loss to the projects. The general methodology followed by managers for workforce allocation is either round-robin assignment (greedy heuristics) or a cost optimization assignment model. In both of these approaches one can find unstable pairs: pairs that are unhappy with their current allocation, where the overall solution could be improved by a meager increase in the cost of allocation. We propose a technique based on the theory of stable matching to improve this allocation process.

In this paper, we address a specific trainee allocation problem faced by an IT service organization. The problem is as follows: We have n trainees and m projects. For each trainee, a few parameters are known, namely skill, training score, and three location choices. For each project, specifications such as location, skill requirements, type of project, and the number of people required are known. The problem is to allocate each trainee to a single project requirement as per his/her preference, such that we obtain a good allocation.

We design a utility function to measure matching satisfaction and to generate the preference lists of employees and projects. The utility function uses the empirical data distribution and managerial insights for generating preferences. We compare the solutions generated by different allocation methods using these preference lists. Our findings are based on experimentation on different test cases, comparing the average satisfaction (in terms of preference matching) and the allocation cost. The findings are as follows: allocation strategies such as greedy heuristics and assignment cost optimization produce unstable pairs; stable matching performs better in terms of average trainee and project satisfaction; and the percentage increase in average satisfaction achieved by stable matching comes at a lower percentage increase in the cost of allocation.
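The stable matching machinery referred to above is, in its simplest one-to-one form, the Gale-Shapley deferred-acceptance algorithm. A minimal sketch (with hypothetical trainees t1, t2 and projects p1, p2, not the paper's data) that also verifies the no-blocking-pair property:

```python
def gale_shapley(trainee_prefs, project_prefs):
    """Deferred acceptance: trainees propose, projects (capacity 1 here)
    keep their best proposer so far.  Returns a stable trainee->project map."""
    free = list(trainee_prefs)                   # trainees still unmatched
    next_choice = {t: 0 for t in trainee_prefs}  # index of next project to try
    engaged = {}                                 # project -> trainee
    rank = {p: {t: i for i, t in enumerate(prefs)}
            for p, prefs in project_prefs.items()}
    while free:
        t = free.pop()
        p = trainee_prefs[t][next_choice[t]]
        next_choice[t] += 1
        if p not in engaged:
            engaged[p] = t
        elif rank[p][t] < rank[p][engaged[p]]:   # project prefers new proposer
            free.append(engaged[p])
            engaged[p] = t
        else:
            free.append(t)
    return {t: p for p, t in engaged.items()}

trainee_prefs = {"t1": ["p1", "p2"], "t2": ["p1", "p2"]}
project_prefs = {"p1": ["t2", "t1"], "p2": ["t2", "t1"]}
match = gale_shapley(trainee_prefs, project_prefs)

# Stability check: no trainee and project both prefer each other
# to their current partners (no blocking pair).
for t, prefs in trainee_prefs.items():
    for p in prefs:
        if prefs.index(p) < prefs.index(match[t]):        # t prefers p
            holder = [u for u, q in match.items() if q == p][0]
            assert project_prefs[p].index(holder) < project_prefs[p].index(t)
```

Capacities greater than one (several trainees per project), as in the paper's setting, are a standard extension of the same deferred-acceptance loop.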

10:10
Mixed Integer Programming for the Workforce Scheduling and Routing Problem
ABSTRACT.

We propose a mixed integer programming (MIP) approach to tackle a workforce scheduling and routing problem. This problem arises in scenarios where the workforce has to travel to different locations in order to perform the assigned tasks (e.g. home care, technicians, cleaners, security patrols). Hence, this problem integrates elements from both workforce scheduling and vehicle routing. In this work we focus on a home care scenario with real-world data provided by our industrial partner. For this specific scenario, the solutions developed by human planners often result in a considerable number of unassigned tasks. This means additional costs if those unassigned tasks are to be covered by hiring additional workforce. Hence, our focus is to minimise the number of unassigned tasks in addition to minimising the total distance travelled by the whole workforce. The MIP model is tackled using IBM’s CPLEX solver and, as expected, finding a solution is a challenge in terms of computational time, particularly considering the problem size of our real-world scenarios. Therefore, we split a problem instance into multiple sub-problems by clustering tasks and then solving the sub-problems individually. The solutions obtained for the full-size scenarios are of course guaranteed to meet all the established constraints (e.g. workforce qualifications and geographical working restrictions), which is hardly the case in solutions produced by the human planners. We present a computational study that helps to better understand the difficulty of this integrated scheduling and routing problem and assess the suitability of tackling the problem with a contemporary MIP solver such as CPLEX. This work is helping us to determine an effective way to integrate clustering algorithms and heuristics with the MIP solver in order to solve real-world size problems in practical time and robustly, producing solutions that can be implemented in reality by our industrial partner.

09:45-11:00 Session 3D: Supporting Advanced Modeling Constructs in Algebraic Modeling Languages
09:45
Analyzing Structured Optimization Models with Automatic Transformations
SPEAKER: John Siirola
ABSTRACT.

Computational tools for modeling mathematical programs are widely used within both academia and industry. Available commercial and open-source modeling packages support generic modeling by separating modeling constructs from instance data through concepts like sets, parameters, and parameterized constraints. However, their model representations are limited to constructs that directly correspond to established solver inputs. In general, this implies that mathematical programs must be expressed as either linear or nonlinear algebraic models; that is, a list of variables, an algebraic objective expression, and a list of algebraic constraints. Modelers must then explicitly convert non-algebraic constructs like switching decisions in disjunctive programs into algebraic relaxations (e.g., relaxing constraints implied by the decision with Big-M terms). This approach can obfuscate or eliminate the original model structure, and it forces the modeler to sacrifice abstraction by injecting decisions related to how to solve a model into the model representation. In this presentation, we demonstrate how high-level non-algebraic modeling constructs can be coupled with automated model transformations to improve model clarity and abstraction. This coupling provides a more flexible workflow where the modeler can explicitly apply transformations that link the structured model to a particular solver. Additionally, this modeling approach simplifies the model specification by eliminating explicit modeling of reformulation decisions. We will highlight two key non-algebraic constructs: disjunctive programming and complementarity constraints. We present transformations that rely on direct transcription to the Big-M or convex hull relaxations, as well as heuristic procedures that leverage both relaxations. Similarly, we show how to reformulate complementarity constraints as smooth nonlinear expressions or as disjunctive expressions to which the disjunctive transformations can subsequently be applied.
These constructs and transformations are available in the open source Coopr optimization project, which includes the Pyomo modeling library.
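To make the Big-M idea concrete, here is a small illustrative example (a made-up one-dimensional disjunction, not from the presentation): with a binary indicator the encoding is exact, but its LP relaxation admits points excluded by the disjunction, which is what motivates the tighter convex-hull reformulation.

```python
def bigM_feasible(x, y, M=10.0):
    """Big-M encoding of the disjunction (x <= 2) OR (x >= 8) on x in [0, 10]
    with indicator y: y = 0 selects the first disjunct, y = 1 the second."""
    return (0.0 <= x <= 10.0
            and x <= 2.0 + M * y            # binding when y = 0
            and x >= 8.0 - M * (1.0 - y))   # binding when y = 1

# With binary y the encoding matches the disjunction exactly ...
for x in [0.0, 1.5, 2.0, 5.0, 7.9, 8.0, 10.0]:
    mip_ok = any(bigM_feasible(x, y) for y in (0.0, 1.0))
    assert mip_ok == (x <= 2.0 or x >= 8.0)

# ... but relaxing y to [0, 1] lets excluded points through (e.g. x = 5 with
# y = 0.5), which is why tighter convex-hull reformulations are of interest.
assert bigM_feasible(5.0, 0.5)
```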

10:10
Toward Supporting Bi-Level Programming in an Algebraic Modeling Language
SPEAKER: William Hart
ABSTRACT.

A bilevel program is a mathematical program in which a subset of decision variables is constrained to take values associated with an optimal solution of a distinct, “lower” level mathematical program. Bilevel programs are widespread in practice, and they arise in applications ranging from graph analysis to sensor and facility location. However, no algebraic modeling language (AML) natively supports bilevel programming, which forces users to directly implement intricate model transformations by hand.

In this presentation, we describe recent developments to support bilevel programming in an AML, specifically in the context of the Pyomo AML and the Coopr open-source optimization project. We focus on the class of bilevel programs that features two sequential decision makers (upper-level and lower-level) and information transparency. The upper-level program is a mixed-integer linear program and the lower-level program is a linear program parameterized by a subset of the upper-level program’s binary variables. We describe modeling extensions that support the expression of bilevel programs in Pyomo, and transformations that automatically construct an equivalent single-level mixed-integer linear program. These transformations can be applied automatically to allow for direct solution by general solvers. The transformations are derived using duality and linearization/reformulation via disjunctive constraints. We demonstrate the use of these new constructs and tools using specific motivating problems from real-world application domains: shortest-path network interdiction and water sensor placement.
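As a toy illustration of collapsing a bilevel program into a single level (the numbers are invented; the lower level here is simple enough to replace by its closed-form optimal response, whereas realistic instances need the duality and disjunctive reformulations described in the abstract):

```python
def lower_level(x):
    """Lower level: min y  s.t.  y >= 5 - x, y >= 0.
    For this tiny LP the optimal response has a closed form."""
    return max(0.0, 5.0 - x)

def upper_objective(x):
    """Upper level pays x plus twice the follower's optimal response."""
    return x + 2.0 * lower_level(x)

# Substituting the lower-level optimum turns the bilevel program into a
# single-level problem, solved here by simple enumeration over a grid.
grid = [k / 10.0 for k in range(0, 101)]
x_star = min(grid, key=upper_objective)
assert abs(x_star - 5.0) < 1e-9            # leader stops exactly covering the gap
assert abs(upper_objective(x_star) - 5.0) < 1e-9
```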

10:35
Specification and Automatic Discretization of ODE and DAE Systems in an AML
ABSTRACT.

Dynamic optimization problems that include differential equations as constraints are typically solved by first discretizing the model and then solving the resulting nonlinear optimization problem. Several strategies exist for discretizing such dynamic systems, including multiple shooting and collocation over finite elements. However, implementing these discretization strategies is usually done by hand, which is both a time consuming and error prone process.

We begin this talk by introducing new modeling constructs to support the specification of ODE and DAE systems within the Pyomo algebraic modeling language. While simple, these constructs significantly expand the scope of optimization problems that can be addressed by typical AMLs, e.g., GAMS and AMPL. We then discuss mechanisms to perform automatic discretization of the resulting models, using flexible and configurable discretization transformations.

Given these new capabilities, we examine two illustrative dynamic optimization problems. The first problem involves parameter estimation for an infectious disease propagation model. We use this example to demonstrate the relative cost of automatic versus manual discretization, and to demonstrate validity of the proposed approach. The second problem involves operation of a natural gas transportation network. We use this example to illustrate more advanced discretization schemes and challenges, and show how the availability of ODE/DAE constructs in an AML enables expression and solution of stochastic dynamic optimization problems, using the PySP library for stochastic programming.
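The core of such a discretization transformation can be illustrated on a toy ODE (an invented example, not one of the talk's case studies): implicit (backward) Euler replaces dy/dt = -y by one algebraic constraint per finite element, exactly the kind of constraint an automatic transformation would emit for the optimizer.

```python
import math

def backward_euler(y0, rate, h, steps):
    """Discretise dy/dt = -rate * y with the implicit (backward) Euler scheme.
    Each step enforces the algebraic constraint
        y_next = y + h * (-rate * y_next),
    which rearranges to y_next = y / (1 + rate * h)."""
    ys = [y0]
    for _ in range(steps):
        ys.append(ys[-1] / (1.0 + rate * h))
    return ys

h, steps = 0.001, 1000                # integrate to t = 1
ys = backward_euler(1.0, 1.0, h, steps)
# First-order scheme: error versus the analytic solution exp(-t) is O(h).
assert abs(ys[-1] - math.exp(-1.0)) < 1e-3
```

Collocation over finite elements, mentioned in the abstract, follows the same pattern but emits several higher-order algebraic constraints per element.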

11:30-12:45 Session 4A: Uncertainty in Logistics and Transportation
11:30
Joint Revenue Management and Vehicle Routing in Home-Delivery Operations
SPEAKER: Arne Strauss
ABSTRACT.

Attended home delivery services face the challenge of providing narrow delivery time slots to ensure customer satisfaction, whilst keeping the significant delivery cost under control. To that end, the firm can try to influence customers when they are booking their delivery time slot so as to steer them towards choosing slots that are expected to result in cost-effective schedules. We estimate a multinomial logit customer choice model from historic booking data and demonstrate that this can be calibrated well on a genuine e-grocer data set. We propose dynamic pricing policies based on this choice model to determine which and how much incentive (discount or charge) to offer for which time slot at the time a customer intends to make a booking. A crucial role in these dynamic pricing problems is played by the delivery cost, which is also estimated dynamically. We show in a simulation study based on real data that anticipating the likely future delivery cost of an additional order in a given location can lead to significantly increased profit as compared to current industry practice.

11:55
New models and methods for the multi-path Travelling Salesman Problem with stochastic travel costs
SPEAKER: Luca Gobbato
ABSTRACT.

City Logistics and the concept of the Smart City are bringing researchers towards the definition of new transportation and supply-chain applications, integrated in the urban context, which, in particular, incorporate information about the uncertainty related to the transportation system. A recently introduced problem, specifically designed for this context, is the multi-path Travelling Salesman Problem with stochastic travel costs (mpTSPs). Given any pair of nodes, mpTSPs considers a set of paths between the two nodes. Each path is characterized by a random travel time, which represents the travel time oscillation due to path congestion or the presence of different power trains for hybrid vehicles. In this paper we propose a two-stage stochastic programming formulation where tour design makes up the first stage, while a series of recourse decisions is made in the second stage to select the best paths to use, according to the observed travel time costs, with the objective of minimizing the total cost due to path congestion. To solve this formulation we propose a parallel heuristic method inspired by the Progressive Hedging algorithm [R.T. Rockafellar, R.J.-B. Wets, Scenarios and policy aggregation in optimization under uncertainty, Mathematics of Operations Research, 1991, pp. 119-147]. It applies a scenario decomposition technique to separate the stochastic problem according to the possible scenarios of random events. Our methodology also takes advantage of strategies aimed at penalizing non-consensus amongst scenario sub-problems and at accelerating its convergence.
New instances representing a medium-sized city derived from the speed sensor network of the city of Turin and a flow-based representation of the problem are introduced in order to numerically qualify the solution method and to examine the impact of the stochastic travel time costs on the problem solution, showing the benefits of the proposed methodology in both solution quality and computational effort when compared to commercial solver based methods.
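The Progressive Hedging scheme mentioned above can be sketched on a deliberately tiny problem (quadratic scenario costs with closed-form subproblem minimisers — an illustrative assumption, not the mpTSPs implementation): multipliers and a proximal penalty drive the independently solved scenario solutions to consensus.

```python
def progressive_hedging(targets, rho=1.0, iters=200):
    """Progressive Hedging on the toy problem
        min E_s [ 0.5 * (x - a_s)^2 ]   with equiprobable scenarios a_s.
    Each scenario subproblem
        min 0.5*(x - a_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
    has the closed-form minimiser (a_s - w_s + rho*xbar) / (1 + rho)."""
    n = len(targets)
    w = [0.0] * n
    xs = list(targets)                     # first pass: scenarios solved independently
    for _ in range(iters):
        xbar = sum(xs) / n                 # consensus (implementable) solution
        w = [wi + rho * (xi - xbar) for wi, xi in zip(w, xs)]   # multiplier update
        xs = [(a - wi + rho * xbar) / (1.0 + rho)               # subproblem solves
              for a, wi in zip(targets, w)]
    return xs

targets = [1.0, 4.0, 10.0]
xs = progressive_hedging(targets)
consensus = sum(targets) / len(targets)    # true optimum: the scenario mean
assert all(abs(x - consensus) < 1e-6 for x in xs)
```

The non-consensus penalty rho and the multiplier updates are the levers that the paper's acceleration strategies tune; the scenario solves are embarrassingly parallel.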

12:20
Bounds for Stochastic Multistage Transportation Problems
ABSTRACT.

In this talk we evaluate approximation methods for stochastic transportation problems based on different levels of information gained from deterministic solutions, sub-problem solutions, and rolling-horizon approaches. The idea is that when the solution of a large multistage stochastic program is too demanding, one may solve simpler problems to obtain lower and upper bounds, and proceed to tighten these bounds if the gap between them is large. Structural similarities and differences between the stochastic solution and its deterministic counterpart are also analyzed. It turns out that a large VSS does not necessarily imply that the deterministic solution is useless in the stochastic setting. Measures of the structure of the deterministic solution, which generalize the loss using the skeleton solution by taking into account the information on reduced costs associated with the variables at zero in the expected value solution, are introduced, and basic inequalities relating them to the standard VSS are presented. The described relationships are illustrated on real-case supply transportation problems provided by Italcementi. The stochastic parameters are the demand of the customers and the production capacity of the production plants. Shipments are performed by capacitated vehicles, which have to be booked in advance, before the realization of the uncertainty. Once the production capacity and the demand are revealed, there is an option to cancel some of the booked vehicles against a cancellation fee; if the quantity shipped from the suppliers using the booked vehicles is not enough to satisfy the demand, the residual quantity is purchased from an external company. The problem is to determine the number of vehicles to book in order to minimize the total cost. Bounds are then tested on multistage stochastic mixed integer linear programming models formulated to solve the problems considered.

11:30-12:45 Session 4B: Portfolio Management
11:30
Inverse Portfolio Problem with Mean-Deviation Model
ABSTRACT.

A Markowitz-type portfolio selection problem is to minimize a deviation measure of the portfolio rate of return subject to constraints on the portfolio budget and on the desired expected return. In this context, the inverse portfolio problem is finding a deviation measure by observing the optimal mean-deviation portfolio that an investor holds. Necessary and sufficient conditions for the existence of such a deviation measure are established. It is shown that if the deviation measure exists, it can be chosen in the form of a mixed CVaR-deviation, and in the case of n risky assets available for investment (to form a portfolio), it is determined by a combination of n+1 CVaR-deviations. In the latter case, an algorithm for constructing the deviation measure is presented, and if the number of CVaR-deviations is constrained, an approximate mixed CVaR-deviation is offered as well. The solution of the inverse portfolio problem may not be unique, and the investor can opt for the most conservative one, which has a simple closed-form representation.

11:55
On the Efficient Hedging in Incomplete Markets
SPEAKER: Hirbod Assa
ABSTRACT.

In this note, we will make a quantitative study of the risk-price trade-off and find a necessary and sufficient condition for a market in which superhedging strategy cannot be improved on by a more efficient strategy for some contingent claims. Remarkably, we find that in a market without Good Deals such contingent claims always exist. We will also show that even in a complete market, the advantages of using the efficient strategy are marginal.

12:20
Risk Management of Electricity Portfolios Using Hedging Strategies
ABSTRACT.

After deregulation in electricity markets, contract planning has become one of the major risk management tools for both electricity generation and distribution companies. Participants in the electricity market are exposed to market risk due to the special characteristics of electricity prices. Since electricity cannot be stored efficiently, spikes are common, resulting in extremely high or negative price levels and a highly volatile structure compared to other commodities or assets in financial markets. High levels of volatility encourage companies to use derivative contracts for risk management. In this paper we analyze the dynamics of spot prices for the electricity market in the UK and compare the forecasting power of different models, such as econometric time series models and stochastic models with regime-switching characteristics. Using these models we price the derivative contracts. We then suggest and solve alternative portfolio optimization problems with different risk metrics. As a result we compare the expected payoff and risk structure of different strategies for an electricity company with a predefined demand structure.

11:30-12:45 Session 4C: Game Theory
11:30
Generalized minimum spanning tree game
SPEAKER: Phuoc Le
ABSTRACT.

The minimum spanning tree game is a special class of cooperative games defined on a graph with a set of vertices and a set of edges, where each player owns a vertex. Solutions of the game represent ways to distribute the total cost of the minimum spanning tree among all the players. When the graph is partitioned into clusters, the generalized minimum spanning tree problem is to determine a minimum-cost tree including exactly one vertex from each cluster. This paper introduces a new class of cooperative games called the generalized minimum spanning tree game and proposes a constraint generation algorithm to calculate a stable payoff distribution. The paper also presents some properties of this game and computational results with the proposed algorithm.

11:55
Bounding the price of anarchy for congestion games with stochastic demand
SPEAKER: Chenlan Wang
ABSTRACT.

Congestion games are noncooperative games in which players compete for finite resources, and the utility of a player depends only on the number of players sharing the same resource. The price of anarchy is a measure of system degradation due to travellers' selfish behaviour, comparing two classic assignments, i.e., the user equilibrium and the system optimum. At user equilibrium, each traveller maximizes his or her individual utility selfishly, at a cost to overall system efficiency. At system optimum, all the network resources are organized and assigned in the most efficient way by an ideal central controller. The study of the price of anarchy addresses how closely the user equilibrium approximates the system optimum, and how much can be improved by coordination from a system perspective. We extend these notions by considering day-to-day variation of demand and study the price of anarchy in our stochastic models. Without any restriction on the cost functions, the price of anarchy is unbounded. We prove upper bounds on the price of anarchy for affine cost functions and for more general polynomial cost functions. The tightness of all upper bounds will also be presented in our study.
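The classic affine-cost example behind the well-known 4/3 bound (Pigou's network — illustrative, not the stochastic model of the talk) can be checked numerically:

```python
def total_cost(x):
    """Two parallel links routing one unit of demand: link 1 has per-unit
    cost c1(x) = x, link 2 has constant cost c2(x) = 1 (Pigou's example)."""
    return x * x + (1.0 - x) * 1.0

# User equilibrium: link 1 is never worse for an individual (c1 <= 1),
# so all traffic selfishly takes it.
ue_cost = total_cost(1.0)

# System optimum: minimise total cost over the split (grid search).
so_cost = min(total_cost(k / 1000.0) for k in range(1001))

price_of_anarchy = ue_cost / so_cost
assert abs(price_of_anarchy - 4.0 / 3.0) < 1e-3   # the tight 4/3 bound for affine costs
```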

11:30-12:45 Session 4D: Numerical Algorithms I
11:30
Hybrid adaptive blind equalization algorithm with new cost function
ABSTRACT.

Constant modulus algorithm (CMA) and its extensions, like the multimodulus algorithm (MMA) and the extended constant modulus algorithm (ECMA), have been proposed to combat intersymbol interference (ISI) in digital communications systems transmitting signals with constant modulus, such as phase shift keying (PSK) modulated signals. However, for quadrature amplitude modulated (QAM) signals suited for high data-rate communications, these algorithms have shown their limits. Hybrid blind (unsupervised) equalization algorithms consist of adding a penalty term to a standard criterion, which results in enhanced performance for communications systems transmitting QAM signals. The penalty term is introduced to take account of the non-constant nature of the modulus and hence forces the equalizer output to match the constellation points; it is referred to as the constellation matching error (CME). We have recently proposed a new CME, whose convergence we analyse in this paper. Through this analysis, we show that it confers substantial improvements on the resulting penalized criterion and makes it outperform standard and other hybrid algorithms.
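A stripped-down illustration of CMA-style blind adaptation (one real tap, noise-free BPSK, so the constant-modulus criterion alone suffices — a hypothetical setup, not the proposed hybrid CME algorithm): the equaliser tap adapts by stochastic gradient descent on E[(y^2 - R2)^2] without any training symbols.

```python
import random

def cma_equalize(channel_gain, mu=0.01, iters=2000, w0=0.3, rng=random):
    """One-tap CMA blind equaliser for BPSK symbols s in {-1, +1} sent
    through a flat channel x = channel_gain * s.  Minimises the CMA cost
    E[(y^2 - R2)^2] with dispersion constant R2 = 1 by stochastic gradient."""
    w = w0
    for _ in range(iters):
        s = rng.choice((-1.0, 1.0))        # unknown transmitted symbol
        x = channel_gain * s               # received sample (noise-free)
        y = w * x                          # equaliser output
        w -= mu * (y * y - 1.0) * y * x    # stochastic CMA gradient step
    return w

random.seed(42)
w = cma_equalize(2.0)
# Blind adaptation converges (up to the usual sign ambiguity) to the
# channel inverse, restoring unit output modulus.
assert abs(abs(w * 2.0) - 1.0) < 1e-3
```

For QAM, where |s| is not constant, this criterion alone mis-steers the tap, which is precisely the gap the hybrid CME penalty term is meant to close.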

11:55
Convergence of truncated stochastic algorithms with an extra control
SPEAKER: Ramin Okhrati
ABSTRACT.

Many finance and risk management problems are modeled through solutions of optimization problems. While these problems are theoretically well developed, their numerical solution has received less attention. Stochastic algorithms can be useful tools in developing numerical solutions for these problems. We design a stochastic approximation algorithm based on the Robbins-Monro algorithm. At every step we have both a random truncation and a control on the approximating sequence. We prove almost sure convergence under improved local hypotheses which are easily verifiable.
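A hypothetical sketch of a Robbins-Monro iteration with random truncations (the reset-and-enlarge control below is one standard choice; the paper's hypotheses and extra control are more general):

```python
import random

def truncated_robbins_monro(theta, sigma=1.0, n=100000, rng=random):
    """Robbins-Monro iteration for the root of h(x) = x - theta, observed
    with additive Gaussian noise.  Random truncation: whenever the iterate
    escapes the current compact set [-K, K], it is reset to 0 and the set
    is enlarged -- keeping the sequence bounded without knowing theta."""
    x, K = 0.0, 1.0
    for k in range(1, n + 1):
        noisy_h = (x - theta) + rng.gauss(0.0, sigma)
        x = x - noisy_h / (k + 1)          # step sizes a_k = 1/(k+1)
        if abs(x) > K:                     # truncation with an extra control
            x, K = 0.0, 2.0 * K
    return x

random.seed(7)
estimate = truncated_robbins_monro(theta=3.0)
assert abs(estimate - 3.0) < 0.05          # converges to the root despite resets
```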

12:20
Solving nonlinear second order conic optimization problems
SPEAKER: Alain Zemkoho
ABSTRACT.

We consider a second order conic optimization problem with twice continuously differentiable functions. Various solution approaches are discussed, with special attention on augmented Lagrangian-type methods. In particular, a Lowner-Log-Sigmoid transformation is used to deal with the conic constraint. This allows the preservation of some important properties of the initial problem, including the order of differentiability. Newton and quasi-Newton methods are then applied to solve the resulting unconstrained optimization problem. In this talk, we discuss the features of each method and present some preliminary experiments.

14:15-15:30 Session 5A: Optimization in Industry
14:15
A Branch-and-Price Approach for Energy-efficient Deployment of Multi-tier Services in Cloud Data Centres
ABSTRACT.

Providers of cloud software services are concerned about being cost- and energy-efficient in their service delivery, while providing a quality of service in accordance with the end-users' requirements. We present a branch-and-price approach for a multi-tier service deployment problem where various types of virtual machines, with different functionality, are mapped to physical nodes while considering performance requirements and the energy costs of the service provider. In order to satisfy the performance requirements the provider might need to deploy several, identical virtual machines of a given type.

The columns in the master problem represent physical nodes packed with virtual machines, and the columns are generated in a pricing problem which is at the moment solved as an MIP. The structure of the master problem complicates the branching since the column variables are general integer variables and not restricted to be binary. Preliminary results show that a branch-and-price approach provides better solutions more quickly than a direct formulation.

14:40
Which information is important in decision making under uncertainty? A case study from maritime transportation
ABSTRACT.

Using stochastic programming goes hand in hand with modeling the uncertainty affecting the problem. We present an analysis which can help in understanding which properties of the uncertainty are most important to capture in a given model for decisions under uncertainty. The analysis is suitable for general stochastic programs, and especially for inherently multistage ones, where using incorrect models of uncertainty can lead to repeating poor decisions. Such an analysis, performed before data collection, can indicate which information should be primarily sought, and which is not critical for the final decision. We apply the analysis to a real instance of the maritime fleet renewal problem. Results show that, in our case study, some properties of the stochastic phenomena have very little relevance from a decision making point of view.

15:05
Optimisation models for salmon farming
ABSTRACT.

In the last decade the salmon farming industry has expanded rapidly and moved towards consolidation; thus the complexity of planning has increased and the need for better planning tools has arisen. We present a stochastic optimization model for integrated planning of the deployment of smolt and the harvesting of salmon. This is a tactical planning problem in an environment where several production parameters are uncertain. The most important sources of uncertainty in seawater production are growth, price and mortality. In addition we present a stochastic linear programming model for optimizing the smolt production.

14:15-15:30 Session 5B: Combinatorial Optimization II
14:15
Order Acceptance and Scheduling with and without due dates: A Review and New Results
ABSTRACT.

We provide a review of the existing order acceptance and scheduling literature along due-date and non-due-date related dimensions. We introduce a unified three-field representation scheme for classifying the existing problems into classes and, for each class, give an overview of computational tractability together with solution algorithms for the problems studied in the literature. We also provide details on the tractability of some open problems not yet addressed in the literature. We extend our research to problems arising in systems with parallel machine configurations, especially client-server systems, where the server has to process different client requests under limited capacity with the objective of maximizing profit based on weighted completion time; weighted completion time represents the significance of the service delivered to the clients by the provider. We present a Mixed Integer Linear Programming formulation for this problem and show that the problem is NP-hard. A Branch and Bound algorithm is presented to solve the problem, and its effectiveness is compared against the computational time required by the CPLEX solver in a computational study. We find that CPLEX solves the parallel machine problem only for instances with up to 5 machines and 20 customer orders, whereas the Branch and Bound scales to 5 machines with 60 customer orders. We also identify several problem areas and issues for future research.
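
To illustrate the objective of order acceptance with weighted completion time, the following toy sketch brute-forces a single-machine instance (not the paper's parallel-machine branch-and-bound; order data and numbers are invented). Each accepted subset is sequenced by the classical WSPT rule, which minimizes total weighted completion time on one machine:

```python
from itertools import combinations

def best_acceptance(orders):
    """Enumerate accepted subsets; sequence each by WSPT (weight/processing
    time, descending), which is optimal for weighted completion time on a
    single machine; profit = revenue minus weighted completion times."""
    best = (0.0, ())
    for r in range(len(orders) + 1):
        for subset in combinations(orders, r):
            seq = sorted(subset, key=lambda o: o["w"] / o["p"], reverse=True)
            t, profit = 0.0, 0.0
            for o in seq:
                t += o["p"]
                profit += o["rev"] - o["w"] * t
            if profit > best[0]:
                best = (profit, tuple(o["id"] for o in seq))
    return best

# Hypothetical orders: processing time p, weight w, revenue rev.
orders = [
    {"id": "A", "p": 2, "w": 1, "rev": 10},
    {"id": "B", "p": 4, "w": 3, "rev": 20},
    {"id": "C", "p": 1, "w": 5, "rev": 4},
]
profit, seq = best_acceptance(orders)  # rejecting order C is optimal here
```

Enumeration is only feasible for tiny instances, which is exactly why the abstract resorts to an MILP and a Branch and Bound algorithm.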

14:40
Multidimensional tropical optimization problems with applications to job scheduling
ABSTRACT.

We consider optimization problems that are formulated and solved in the tropical mathematics setting. The problems are to minimize or maximize functions defined on vectors of finite-dimensional semimodules over idempotent semifields, subject to linear inequality and equality constraints. The objective functions can be linear or take the form of non-linear functions calculated using a conjugate transposition of vectors. We give an overview of known problems and briefly discuss available solution methods. Furthermore, we present recent results on certain new problems for which direct, explicit solutions are obtained in a compact vector form.

We apply the obtained results to solve scheduling problems for a set of jobs operating under various precedence relations in the form of start-start, start-finish, early-start, late-finish and other temporal constraints. The problems are formulated to find optimal schedules according to certain optimality criteria, which involve the minimization of the maximum deviation of job completion times from given due dates, the minimization and maximization of the maximum deviation time between job completion times, and the minimization of the maximum job flow (processing) time.
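
The kind of algebra involved can be illustrated with a minimal max-plus sketch (an invented three-job start-start precedence example, not one of the paper's problems). In the max-plus semiring, addition is max and multiplication is +, so precedence constraints become a matrix-vector product:

```python
import math

NEG = -math.inf  # the "zero" of the max-plus semiring

def mp_matvec(A, x):
    """Max-plus matrix-vector product: y_i = max_j (A[i][j] + x[j])."""
    return [max(a + b for a, b in zip(row, x)) for row in A]

# Start-start constraints x_i >= A[i][j] + x_j; with zero diagonal, iterating
# x <- A (x) x from x = 0 reaches the earliest feasible start times for an
# acyclic precedence structure in at most n steps.
A = [
    [0, NEG, NEG],   # job 0 has no predecessor
    [2, 0, NEG],     # job 1 starts at least 2 after job 0 starts
    [NEG, 3, 0],     # job 2 starts at least 3 after job 1 starts
]
x = [0, 0, 0]
for _ in range(len(A)):
    x = mp_matvec(A, x)   # converges to the earliest start times
```

The abstract's results go much further, giving explicit closed-form solutions in vector form rather than such fixed-point iteration.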

15:05
Combining Levels of Crew Aggregation in Network Flow Based Airline Crew Scheduling Models
ABSTRACT.

Due to its complexity, the airline crew scheduling problem is typically solved in two steps: In crew pairing optimization, flights are combined into pairings (legal sequences of daily flight duties) starting and ending at the same crew domicile. In the subsequent crew rostering step, these anonymous pairings are assigned to concrete crew members along with rest periods and standby duties. A major advantage of this decomposition is that the highly combinatorial problem of identifying an optimal set of crew pairings is solved at the home base level of crew aggregation, which avoids the replication of multiple similar (and thus symmetric) model layers for each crew member. A possible drawback, however, is that important aspects such as crew members with multiple qualifications, time-dependent availability of crew members due to preassignments, as well as weekly and monthly rest requirements are typically not accounted for in the pairing optimization step. In this paper, we discuss modelling techniques to efficiently combine different levels of crew aggregation in an integrated network flow based model for the crew pairing chain problem proposed by Mellouli. These techniques are used to deal with crew groups with hierarchically related qualifications and to anticipate crew rostering on the level of individual crew members within the crew pairing chain optimization. An important feature of the modelling techniques is the delegation of problem aspects exhibiting a high combinatorial complexity to model layers with a high level of crew aggregation, while keeping track of crew-related aspects and rules in model layers with more detailed crew information. Computational experiments with real-world data sets from a medium-sized German airline show that the resulting models are efficiently solvable using standard solvers enhanced by a problem-specific fixing technique and yield improvements with regard to the results of the overall planning process.

14:15-15:30 Session 5C: Student Best Paper Competition
14:15
Stochastic Local Search versus Genetic Algorithm for Feature Selection
ABSTRACT.

Feature selection is an important process in data classification: it eliminates redundant attributes and enhances classification accuracy by keeping only the relevant attributes. In this paper, we study and compare two metaheuristics for feature selection in data classification, a stochastic local search and a genetic algorithm. Both methods select an optimal set of attributes that is then passed to a support vector machine based classifier, which is used to classify the considered data. The two methods are tested on benchmark datasets of various sizes from the UCI Machine Learning Repository in order to measure their performance. The results are encouraging and demonstrate the benefits of the proposed approaches in data classification.
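
The stochastic local search side of such a comparison can be sketched as a bit-flip hill climb over feature masks. To stay self-contained, the sketch below scores masks with a synthetic fitness function in place of the SVM classifier; the "relevant" feature indices and all numbers are invented:

```python
import random

def fitness(mask, relevant=frozenset({0, 2, 5})):
    """Toy stand-in for classifier accuracy: reward relevant features,
    penalise redundant ones (a real study would cross-validate an SVM here)."""
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & relevant) - 0.2 * len(chosen - relevant)

def sls(n=8, iters=500, seed=1):
    """Stochastic local search: flip one random feature bit per iteration,
    keep the move if it does not worsen the fitness."""
    rng = random.Random(seed)
    cur = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        i = rng.randrange(n)
        cand = cur[:]
        cand[i] ^= 1
        if fitness(cand) >= fitness(cur):
            cur = cand
    return cur
```

A genetic algorithm would instead evolve a population of such masks with crossover and mutation; the comparison in the abstract pits these two search strategies against each other on real benchmark data.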

14:40
Adjustable robust optimization with decision rules based on inexact revealed data
ABSTRACT.

Adjustable robust optimization (ARO) is a technique to solve dynamic (multistage) optimization problems. In ARO, the decision in each stage is a function of the information accumulated from the previous periods on the values of the uncertain parameters. This information, however, is often inaccurate; there is much evidence in the information management literature that even in our Big Data era the data quality is often poor. Reliance on the data "as is" may then lead to poor performance of ARO, or in fact of any "data-driven" method. In this paper, we remedy this weakness of ARO by introducing a methodology that treats past data itself as an uncertain parameter. We show that algorithmic tractability of the robust counterparts associated with this extension of ARO is still maintained. The benefit of the new approach is demonstrated by a production-inventory application.

15:05
Local Search methods for the Minimum Interference Frequency Assignment Problem
ABSTRACT.

This paper studies three local search metaheuristics for the Minimum Interference Frequency Assignment Problem (MI-FAP) in GSM (Global System for Mobile communication) networks. The MI-FAP is the problem of assigning a small number of frequencies to a large number of transceivers so as to minimize the interference level; it is known to be NP-hard and plays an important role in GSM network planning. We first study two local search based methods for MI-FAP: a Variable Neighborhood Search (VNS) and a Stochastic Local Search (SLS). We then propose combining these two methods in a new method called VNS-SLS, in which the SLS method is incorporated into the VNS process as a local search step with the objective of enhancing the solution quality and the performance of the VNS process. The three methods, SLS, VNS and VNS-SLS, are implemented and tested on benchmarks to measure their performance. The experimental study shows that VNS-SLS finds good results for MI-FAP compared to both VNS and SLS, which demonstrates the effectiveness of SLS as a local search routine within the VNS process.
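
The VNS-SLS scheme can be sketched roughly as follows: shake the incumbent with k random changes, run SLS as the local-search step, and adapt k. The conflict-count objective and the tiny three-transceiver instance below are illustrative, not the paper's benchmarks or implementation:

```python
import random

def interference(assign, edges):
    """Count pairs of neighbouring transceivers sharing a frequency."""
    return sum(assign[u] == assign[v] for u, v in edges)

def sls(assign, edges, freqs, rng, steps=50):
    """Stochastic local search: greedily re-assign one random transceiver."""
    for _ in range(steps):
        v = rng.randrange(len(assign))
        assign[v] = min(freqs, key=lambda f: interference(
            assign[:v] + [f] + assign[v + 1:], edges))
    return assign

def vns_sls(n, edges, freqs, kmax=3, iters=20, seed=0):
    rng = random.Random(seed)
    cur = sls([rng.choice(freqs) for _ in range(n)], edges, freqs, rng)
    k = 1
    for _ in range(iters):
        cand = cur[:]
        for _ in range(k):                        # shake: k random perturbations
            cand[rng.randrange(n)] = rng.choice(freqs)
        cand = sls(cand, edges, freqs, rng)       # SLS as the local-search step
        if interference(cand, edges) < interference(cur, edges):
            cur, k = cand, 1                      # improvement: smallest neighbourhood
        else:
            k = k % kmax + 1                      # otherwise: next neighbourhood size
    return cur

edges = [(0, 1), (1, 2), (0, 2)]                  # a triangle of interfering transceivers
plan = vns_sls(3, edges, freqs=[1, 2, 3])
```

On this toy triangle, three frequencies suffice for an interference-free assignment, which the hybrid loop finds quickly.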

14:15-15:30 Session 5D: Developments in Modeling Languages and Platforms
14:15
Modelling and Solving Generalized Graph-based Transport Problems.
ABSTRACT.

Anyone who wants to solve realistic transport problems is confronted with several different constraints and cost types. Nevertheless, many transport problems boil down to a set of many shipments (or commodities) that have to be transported from a node A to a node B in a graph. It would therefore be useful to have a metaheuristic that works at this level of generality while not ignoring a priori information about the graph and the constraints.

We have defined a graph-based modelling language that was initially developed for hub location and open vehicle routing problems but can also model mixed problems including some form of pre-haul or scheduling. It takes a shipment-driven view, i.e. all costs are derived from the routes of the shipments, and forbidden interactions between shipment routes are modelled by penalty costs so that all shipments can be routed individually. From a mathematical description we derived an XML structure and implemented a framework. On top of this framework, we defined and tuned a Simulated Annealing heuristic. The numerical results show that we can handle complicated cost functions much better than commercial MIP solvers.
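
A generic Simulated Annealing skeleton of the kind used here accepts worsening moves with probability exp(-delta/T) under a cooling schedule. The toy objective below, with invented numbers, mimics the penalty-cost idea: each shipment picks a route index, and two shipments sharing route 0 incur a "forbidden interaction" penalty:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.95,
                        steps=300, seed=0):
    """Generic SA: accept worse moves with probability exp(-delta/T)."""
    rng = random.Random(seed)
    x, t, best = x0, t0, x0
    for _ in range(steps):
        y = neighbour(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling                      # geometric cooling schedule
    return best

# Toy stand-in: route indices per shipment; sharing route 0 is penalised.
cost = lambda x: sum(x) + (100 if x.count(0) > 1 else 0)

def neighbour(x, rng):
    """Move one random shipment to a random route."""
    i = rng.randrange(len(x))
    y = list(x)
    y[i] = rng.randrange(3)
    return tuple(y)

best = simulated_annealing(cost, neighbour, (2, 2, 2))
```

The penalty term is what lets every shipment be routed individually while still discouraging forbidden interactions, mirroring the modelling choice described above.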

14:40
Developments in the AMPL Ecosystem
ABSTRACT.

We report new developments in the ecosystem of the AMPL modelling language, including language constructs and operational methodologies. AMPLDev SP edition is a fully featured Integrated Development Environment (IDE) for AMPL, with workspace management, editors with syntax highlighting, solution viewers and console support. It also includes Stochastic AMPL (SAMPL), an extended version of AMPL designed to support Stochastic Programming (SP) and Robust Optimisation (RO); formulation of RO models is greatly simplified by a subset of the extended syntax that SAMPL supports. SP models expressed in SAMPL are generated, at instance level, in the SMPS format and then solved using another component of the software suite, FortSP, a solver designed for Stochastic Programming based on Benders decomposition. The performance of the solver is enhanced through regularisation by the level method. FortSP has Stochastic Integer Programming capability and uses CPLEX, Gurobi or FortMP as an embedded solver. We give use case examples and discuss the benefits of the extended syntax and of the integrated environment. We are making the AMPLDev modelling system and the solvers CPLEX and FortSP more readily available to industrial users and the academic community through a cloud-based service, and we illustrate the usage and benefits of this approach.

15:05
Fair Division as an Applied Optimization Problem
SPEAKER: David Curran
ABSTRACT.

We present a method to fairly divide a collection of goods amongst a group of people. The method handles allocating mixtures of divisible and indivisible goods for two or more people. Our solution has been mainly applied to the creation of wills (the division of estates). It can also be applied to divorces, mergers and bankruptcies.

The process of turning this optimization method into an internet startup is then described. This includes competing for a place in and attending a startup incubator program, exploring market verticals, refining the user interface and user experience based on consumer trials, and dealing with patent issues.

We also present a solution to, and our experience with, the related 'dirty work' problem of dividing tasks or 'bads'. From customer interviews we found that the practical requirements of this problem are drastically different from how it is described in the literature on fair division.

The literature dealing with fair division uses theoretical valuations of goods and mathematical definitions of fairness. We describe how real people's valuations of goods and metrics of fairness differ from these in practice.

16:00-17:15 Session 6A: Stochastic Programming & Applications
16:00
A stochastic optimisation model for offshore wind turbine micro-siting
SPEAKER: Arne Klein
ABSTRACT.

The positioning of wind turbines within an offshore wind farm, also called turbine micro-siting, has a significant influence on the power production of the farm. The reason for this is the so-called wake effect, which is a decrease of the wind velocity and increase of turbulence behind a wind turbine. It causes a lower power production for turbines within the wake of another turbine and creates an optimisation problem for the placement of the wind turbines. This location problem can be approached by maximising the total power production, calculating the wind field within the farm with a wake model, and taking long-term wind estimates into account. In the current work we introduce an improved version of the frequently cited Jensen wake model, which is, in contrast to the original model, differentiable on the whole two-dimensional turbine location space. The optimisation problem can be formulated as a non-linear program with continuous turbine position variables for a fixed number of turbines, and be solved by a sequential quadratic programming method with multiple random starting positions.
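
For reference, the original (non-differentiable, top-hat) Jensen model expresses the fractional velocity deficit behind a turbine in terms of the thrust coefficient, a wake decay constant and the rotor diameter. The sketch below uses the standard formula with illustrative parameter values, not those of the paper, and does not include the differentiable modification introduced above:

```python
import math

def jensen_deficit(x, ct=0.8, k=0.05, d=100.0):
    """Fractional wind-speed deficit a distance x (m) directly downstream of a
    turbine in the top-hat Jensen model: rotor diameter d, thrust coefficient
    ct, wake decay constant k. Zero deficit at or upstream of the rotor."""
    if x <= 0:
        return 0.0
    return (1 - math.sqrt(1 - ct)) / (1 + 2 * k * x / d) ** 2

# The deficit decays with downstream distance, so a downstream turbine
# recovers part of the free-stream velocity.
u_inf = 12.0                                   # illustrative free-stream speed (m/s)
u_at_500m = u_inf * (1 - jensen_deficit(500.0))
```

Because the top-hat wake has a sharp lateral edge, the model is not differentiable in the full two-dimensional location space, which is exactly the limitation the improved version in this work addresses for gradient-based optimisation.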

Estimates of the expected wind over the lifetime of a wind farm are difficult to calculate, and usually include significant uncertainty. We thus set up a corresponding stochastic model to investigate the influence of uncertainty in the wind estimates on the resulting turbine layout and power production. Experimental results for a sample wind farm with different wind estimates and numbers of turbines are given. We compare the results of the deterministic model to those of the stochastic model with different levels of uncertainty in the wind estimates.

16:25
Scenario reduction methods for stochastic optimization: an experimental comparison
ABSTRACT.

To solve stochastic optimization problems using sample average approximation, choosing the right set of scenarios is key, since approximation quality highly depends on the quality of the selected scenarios. To apply sample average approximation, we have to draw a sample of realizations of those random variables that define the stochastic part of our optimization problem. Since computational complexity scales with the number of scenarios, we normally want to reduce the set of scenarios while retaining its approximation quality. While methods for drawing a reduced sample from a univariate distribution are well established, empirical evidence on the performance of scenario reduction methods for multivariate distributions is still scarce. The most widely used methods in this domain are quasi-Monte Carlo, moment matching, as well as methods based on minimizing probability metrics. This work presents results of a large-scale experimental study on the efficiency of different scenario reduction methods, which is meant to serve as guidance for practitioners of computational stochastic optimization. Most notably, the study provides empirical evidence for the ineffectiveness of methods based on minimizing probability metrics in higher dimensions.
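
One common probability-metric-based method, greedy forward selection, can be sketched as follows: repeatedly add the scenario that most reduces the total distance from every scenario to its nearest selected representative. The Euclidean distance and the tiny two-cluster scenario set are illustrative, not the study's data:

```python
def forward_selection(scenarios, m):
    """Greedily pick m representative scenarios minimising the summed
    distance from each scenario to its nearest selected one (a simple
    transportation-distance heuristic)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    total = lambda sel: sum(min(dist(s, r) for r in sel) for s in scenarios)
    selected, rest = [], list(scenarios)
    while len(selected) < m:
        best = min(rest, key=lambda c: total(selected + [c]))
        selected.append(best)
        rest.remove(best)
    return selected

# Two tight clusters of 2-d scenarios: a good 2-scenario reduction keeps
# one representative from each cluster.
scenarios = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.0, 5.1)]
kept = forward_selection(scenarios, 2)
```

In practice the selected scenarios would also be reweighted by the probability mass of the scenarios they represent; the study compares this family of methods against quasi-Monte Carlo and moment matching.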

16:50
The Stochastic Capacitated Branch Restructuring Problem
ABSTRACT.

In the aftermath of the Great Recession of the first decade of this century, we have witnessed an important increase in the number of bank mergers and acquisitions. One of the most important problems faced after a merger is the reduction of the branch network: eliminating redundant branches and adapting the capacity of the resulting network to accommodate the demand. This problem becomes even more complex when uncertainty in how the market will react to the restructuring is considered. In this work, we present a stochastic capacitated branch restructuring problem, formulated as a two-stage stochastic programming model with recourse. It takes into account the size of the closed branches, the existence of competitors, and the uncertainty in the demand’s response. We propose three alternative versions of our formulation that model different ways in which the stages of the restructuring may be carried out. The model’s performance is tested on 25 alternative settings designed on an extension of Swain’s network. The results show that banks may obtain important savings if the necessary changes in service capacity are carried out after information about the market’s reaction becomes available.

16:00-17:15 Session 6B: Algorithms for Integer Programming
16:00
BFC-SDC, an algorithm for solving multistage stochastic mixed 0-1 problems with a risk averse strategy
ABSTRACT.

We extend to the multistage case the two-stage risk averse measures recently introduced in Gollmer et al. (2008, 2011), so that in the formulation of the whole multistage stochastic mixed 0-1 optimization problem an objective function is maximized over a feasible region subject to first- and second-order integer-recourse Stochastic Dominance Constraints (SDC). In this work we present a multistage, time-consistent mixture of those two SDC measures; its advantages and the computational price to pay for it under plain use of MIP solvers are discussed. The SDC measure that we propose is defined by a user-driven set of profiles, each consisting of a threshold on the value of a function, a bound target for the shortfall probability and a bound target for the expected shortfall of the scenario to occur (both treated as soft constraints whose violations are appropriately penalized in the objective function), and a hard bound on the maximum shortfall. Given the dimensions of these large-scale problems, augmented by the new set of variables and constraints required by these two risk measures, it is unrealistic to solve the problem to optimality by plain use of MIP solvers; instead, decomposition algorithms should be used. We present an extension of our Branch-and-Fix Coordination algorithm, named BFC-SDC, where (besides some important refinements for branch-cutting purposes) special treatment is given to the cross scenario cluster constraints that appear in SDC risk measures. A computational experience is presented comparing the risk neutral approach and the tested risk averse strategies on a randomly generated set of instances. The performance of the new version of the BFC-MS algorithm versus the plain use of a state-of-the-art MIP solver is also reported.

16:25
An algorithm for solving multistage stochastic nonlinear mixed 0-1 problems
ABSTRACT.

An algorithm for solving nonlinear multistage mixed 0-1 problems under uncertainty is presented. The approach uses the twin node family concept within the algorithmic framework of the Branch and Fix Coordination (BFC) method. Important features of this approach (BFC-MSMIP) with respect to other stochastic integer approaches are that it addresses multistage environments with both continuous and binary variables in each stage, and that the objective function is nonlinear while the constraints are linear. It is very important to determine the stage up to which the nonanticipativity constraints are explicitly considered in the model; this information is used when the full model is divided into a scenario cluster partition with smaller problems. The algorithm is implemented in C++, and each nonlinear subproblem generated in the nodes of the trees associated with the method is solved via sequences of quadratic subproblems with the help of the CPLEX library. Numerical results are reported.

16:50
Heuristic for a bilevel railroad-infrastructure access-pricing problem
SPEAKER: Michal Kaut
ABSTRACT.

The Norwegian railroad network is used to its full capacity at peak times but is underutilized during the rest of the day, partly because there are no incentives for the operators to move to off-peak times. In co-operation with the Norwegian National Rail Administration (Jernbaneverket) and two train operators (Flytoget and Cargolink), we have developed a model that studies whether infrastructure utilization can be improved by introducing scarcity costs and subsidies.

Since the model has to include the operators' response to the costs, we get a bilevel problem with integer variables in both levels. To solve this model, we have developed a heuristic consisting of several different MIP models, solved in sequence.

In this talk, we will present the heuristic together with preliminary results.

16:00-17:15 Session 6C: Applications in Healthcare
16:00
Whom to Serve next: Scheduling Patients belonging to different categories at a Multi-facility Healthcare Diagnostic centre
SPEAKER: Varun Jain
ABSTRACT.

One of the key operational decisions faced by any health care diagnostic provider is whom to serve next, that is, which category of patient (for example emergency (EP), outpatient (OP), inpatient, or health check-up (HC)) should be served next at a particular facility. This is important because the revenues generated by the different categories differ, as does the priority with which they need service. The answer is complex owing to the limitations imposed by the patients' medical status, which we describe as their category, and by essential resources such as the availability of equipment, nurses, supporting staff and physicians. The current practice of expert-based scheduling often leads to long waiting times for patients and thus affects the revenue generated. Current research in this area weighs operational against economic objectives to serve the demand in a cost-effective manner; however, we find that this research is limited to the single-facility setting.

In this work, we model the random arrival of patients of different categories and priorities, some with a pre-defined sequence and service time, at multiple diagnostic centres with dedicated resources over a finite planning horizon, thus extending the scope of research to diagnostic centres with multiple facilities. We develop a mathematical model for sequential decision making under uncertainty using a Markov Decision Process (MDP), with the objective of maximising profit while incurring penalty costs for waiting patients, for patients rejected due to capacity limitations, and for unmet demand at the end of the day. We solve the problem using Dynamic Programming (DP) and improve the result with a neighbourhood search technique. We further perform computational experiments to compare these results with those obtained by six different heuristic decision rules for serving patients.
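
The flavour of such a DP can be sketched on a drastically simplified deterministic instance (one facility, two categories, invented revenues, waiting costs and end-of-horizon penalties; not the authors' model, which also handles random arrivals and multiple facilities):

```python
from functools import lru_cache

# Toy finite-horizon DP: in each slot serve one waiting emergency (EP) or
# outpatient (OP) patient, or nobody; waiting patients incur per-slot costs
# and patients left unserved at the end of the horizon incur a penalty.
REV = {"EP": 8.0, "OP": 5.0}    # hypothetical revenue per served patient
WAIT = {"EP": 3.0, "OP": 1.0}   # hypothetical per-slot waiting cost
END_PENALTY = 4.0               # hypothetical penalty per unserved patient

@lru_cache(maxsize=None)
def value(t, ep, op, horizon=4):
    """Maximum profit from slot t onward with ep/op patients waiting."""
    if t == horizon:
        return -END_PENALTY * (ep + op)
    actions = [value(t + 1, ep, op)]                   # serve nobody
    if ep:
        actions.append(REV["EP"] + value(t + 1, ep - 1, op))
    if op:
        actions.append(REV["OP"] + value(t + 1, ep, op - 1))
    return max(actions) - (WAIT["EP"] * ep + WAIT["OP"] * op)
```

With these numbers the optimal policy serves the emergency patient first (its waiting cost dominates), which is the kind of priority trade-off the MDP formalises.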

16:25
Optimum Surgery Time for Rectal Cancer Patients Receiving Chemoradiotherapy
SPEAKER: Elvan Gokalp
ABSTRACT.

In the treatment of rectal cancer, neoadjuvant chemoradiotherapy (CRT) is standard practice in many centres. The therapy is generally followed by a resection surgery in which the tumour is excised from the rectum. The timing of surgery is crucial to the treatment process and to the effect of CRT (tumour regression). In this paper, we introduce a Markov decision process to determine the optimum time of surgery in the management of rectal cancer. We derive structural properties of the model, including sufficiency conditions for optimal control-limit policies.

16:50
Simulation based evaluation of interdisciplinary ward clustering in a large university medical centre
ABSTRACT.

The current “Krankenhaus Rating Report 2013” documents that 51% of all hospitals in Germany incur a loss. Hospitals must improve efficiency and lower costs because of the cost pressure induced by the introduction of diagnosis related groups (DRGs) for hospital payment in Germany in 2003. Studies on resource management in hospitals show that improving flexibility can increase efficiency. As shown in our previous study at the university medical centre Halle (Saale), flexible bed management scenarios, such as borrowing rooms from nearby wards, can improve bed utilization and considerably reduce waiting times for patient relocations. Based on these findings, the hospital defined three realisable interdisciplinary scenarios mapping the 48 wards into seven or eight subject-specific allocation clusters (e.g. head, bone, internal medicine). These clusters contain similar wards and allow a patient to be allocated within a cluster if the proper ward has no available beds. Based on an analysis of patient data from 2012, we developed a discrete-event simulation model to study how ward clustering affects bed utilisation, how to configure the clusters, and whether clustering could cope with a bed capacity reduction of about one third. The model simulates patient flows (emergency and elective) under constraints such as gender or isolation status. To evaluate the scenarios, we measured how often the available bed capacity of wards or clusters was exceeded and compared this with the current situation. The results show that the configuration of the clusters is more important than their size. The scenario with eight clusters, including one for internal medicine, reduced capacity excesses about 200 times, twice as much as the scenario with seven clusters but without internal medicine. After the bed capacity reduction, the scenario with seven clusters and internal medicine reduced the excesses by a factor of 1.5 compared to the current inflexible ward-oriented system.

16:00-17:15 Session 6D: Metaheuristics
16:00
Development of a partially synchronized multithreaded metaheuristic to solve the supply chain network planning problem
ABSTRACT.

Supply Chain Network Planning (SCNP) is a planning problem from the logistics domain used to determine an optimal allocation of demands and capacities in storage, transportation and production facilities, aiming to fulfill customer demands at minimal cost. The SCNP thus contains an integrated lot-sizing for production and distribution and therefore entails considerable complexity. It has been shown in previous work that the population-based metaheuristic Fish School Search (FSS) is applicable to this problem and is able to find (near-)optimal solutions. However, it still performs more slowly than exact optimization approaches on common-size SCNP problems, one possible reason being the lack of parallelization within the FSS. This work therefore proposes a multithreaded version of the FSS that requires only partial synchronization of the threads performing the calculations for each individual, due to certain characteristics of the algorithm: unlike the best-known population-based metaheuristic, Particle Swarm Optimization, the FSS does not use a defined neighborhood of individuals. Instead, a combined movement of individuals is performed by gathering information about the positions of all existing individuals. The FSS uses three distinct movement operators, an individual one and two collective ones. The individual movement does not require any synchronization; the collective movements, however, require information gathered from all individuals, and strong delays among the individual calculations may cause undesirable behaviors. Therefore, a synchronization point is needed before the execution of these operators, at which an extra thread performs the calculations that provide this collective information. Since the collective movements are independent of one another, they can be executed in parallel.
This work concludes with an evaluation of the performance of this newly developed and, as explained beforehand, only partially synchronized multithreaded metaheuristic when used to solve the SCNP and offers comparisons to other state-of-the-art techniques.
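
The synchronization pattern described, independent individual movements followed by a barrier before the collective step, can be sketched with Python threads. The "movements" below are placeholder arithmetic (a drift plus a pull towards the swarm barycenter), not the actual FSS operators:

```python
import threading

N, ROUNDS = 4, 3
positions = [float(i) for i in range(N)]   # one "individual" per thread
barycenter = [0.0]

def aggregate():
    """Barrier action: runs in exactly one thread once all have arrived,
    computing the collective information the next step needs."""
    barycenter[0] = sum(positions) / N

barrier = threading.Barrier(N, action=aggregate)

def worker(i):
    for _ in range(ROUNDS):
        positions[i] += 0.5                # individual movement: no sync needed
        barrier.wait()                     # sync point before collective step
        # collective movement: each thread only reads the shared barycenter
        # and writes its own slot, so no further locking is required
        positions[i] += 0.1 * (barycenter[0] - positions[i])

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier provides exactly the partial synchronization argued for above: threads run the individual movement concurrently and only rendezvous where the collective information must be consistent.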

16:25
Optimizing a linear fractional function over an integer efficient set
ABSTRACT.

In this communication we present a class of problems that deals with optimization over the efficient set of multiple objective integer linear programming problems. We propose an exact method for the case where a linear objective function (the principal objective) is to be optimized over such an integer, non-convex feasible domain. The technique is based on two main ideas: the first improves the principal objective from one iteration to the next, and the second reduces the admissible region by adding single constraints that successively eliminate all points dominated by the current efficient solution. Some edges may be explored when a non-efficient solution is encountered. Our algorithm is implemented in MATLAB and tested on randomly generated examples.

16:50
Hybrid Metaheuristic Using Reactive GRASP and Reinforcement Learning
ABSTRACT.

Metaheuristics represent an important class of approximate algorithms for solving NP-hard problems, and a trend in combinatorial optimization research has been the exploration of hybrid metaheuristics. This paper presents a hybrid version of the Reactive GRASP (RG) metaheuristic that incorporates a Reinforcement Learning (RL) agent. In traditional RG, a set of previously determined values of the parameter α is considered, and the α used in the construction phase is drawn from this set according to an associated probability distribution. In the proposed hybrid algorithm, a learning agent based on the Q-learning algorithm replaces this process by learning and providing the best α parameter. This strategy gives the method an adaptive memory, updated with the experience gained over the iterations, which allows better use of the information produced by the search in previous iterations. Preliminary tests performed on the Traveling Salesman Problem confirmed the applicability of the method. The hybrid RG-Learning was then successfully applied to the p-center location problem: it was used to determine the best location of police bases in the city of Mossoró, Brazil, aiming to reduce the response time to police calls and to curb criminality. The test instances were prepared from the location history of serious crimes in the city, and six instances were used in the computational experiments. The results obtained with the hybrid version were compared with those of traditional RG, showing better performance in both solution quality and runtime. A statistical analysis of the results was made to validate the method. This work also contributes a new metaheuristic that uses RL for combinatorial optimization problems without the need to model them as Markov decision processes, as well as a prototype that can be applied in public safety to aid decision making.
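
The α-selection idea can be sketched as stateless (single-state) Q-learning over a discrete set of α values with ε-greedy exploration. The reward function below is a synthetic stand-in for GRASP solution quality, and all numbers are illustrative, not the paper's:

```python
import random

ALPHAS = [0.1, 0.3, 0.5, 0.7, 0.9]   # candidate greediness parameters

def reward(alpha, rng):
    """Toy stand-in for the quality of a GRASP construction with this alpha;
    here the (unknown to the agent) best value is 0.3, plus small noise."""
    return 1.0 - (alpha - 0.3) ** 2 + rng.uniform(-0.01, 0.01)

def learn_alpha(episodes=2000, lr=0.1, eps=0.2, seed=0):
    """Epsilon-greedy stateless Q-learning: each episode plays one alpha and
    updates its value estimate towards the observed reward."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ALPHAS}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(ALPHAS)            # explore a random alpha
        else:
            a = max(q, key=q.get)             # exploit the best estimate
        q[a] += lr * (reward(a, rng) - q[a])  # stateless Q-update
    return max(q, key=q.get)
```

In the hybrid method described above, the reward would instead come from the quality of the solution the GRASP construction-plus-local-search produced with the chosen α, which is how the adaptive memory accumulates over the iterations.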