APMOD CONFERENCE 2014: INTERNATIONAL CONFERENCE ON APPLIED MATHEMATICAL OPTIMIZATION AND MODELLING 2014

PROGRAM FOR THURSDAY, APRIL 10TH

08:40-09:40 Session 7: Plenary talk
08:40
Clearing the Jungle of Stochastic Optimization
ABSTRACT.

Mathematical programming has given us a widely used canonical framework for modeling deterministic optimization problems. However, if we introduce a random variable, the field fragments into a number of competing communities with names such as stochastic programming, dynamic programming, stochastic search, simulation optimization, and optimal control. Differences between these fields are magnified by notation and terminology, which hide subtle but more important differences in problem characteristics. Complicating matters further is a misunderstanding of basic terms such as dynamic program, policy, and state variable. While deterministic optimization problems are defined by solving for decisions, sequential stochastic optimization problems are characterized by finding functions known as policies. I will identify four fundamental classes of policies which unify the competing communities into a common framework I call computational stochastic optimization. These ideas will be illustrated using applications drawn from energy, health, freight transportation, and vehicle navigation.

09:45-11:00 Session 8A: Applied Dynamic Stochastic Optimization
09:45
Alternative portfolio optimization approaches in the fixed income market including 0-1 variables
ABSTRACT.

The 2008 credit crisis deeply affected the price of corporate liabilities in both equity and fixed income secondary markets, leading to unprecedented portfolio losses for financial investors. A coordinated intervention by monetary institutions limited the systemic consequences of the crisis, without, however, avoiding a significant fall of corporate bond prices across international markets. In this article, we analyse alternative portfolio optimization approaches in the fixed income market over the 2008–2009 period, a time in which credit derivative markets became very illiquid. All policies are analysed relying on a single set of market and credit scenarios generated by common and idiosyncratic risk factors on an extended investment universe. The article also includes some approaches that require mixed-integer programming techniques, such as stop-loss constraints in dynamic optimization (in high demand by financial institutions) or the well-known stochastic dominance constraints (first-, second-, or interval second-order). The different approaches are compared in terms of required computational time and desirable risk treatment in a realistic market. The approaches that involve 0-1 variables become practically intractable when a large number of scenarios is considered, so we plan to solve them using the BFC approach.

10:10
Development Environment for Decision Support in Finance
ABSTRACT.

A large-scale stochastic programming model is adopted to support a worldwide insurance company in managing exogenous constraints and capital allocation. Many requirements stem from input data sources, computation and output results. A proper development environment must provide sufficient capability to adapt the software to sudden changes. This work presents a combination of well-known software tools that together form a very powerful development environment. Several projects have already benefited from this solution.

10:35
Dynamic stochastic programming applied to insurance portfolios stress testing
ABSTRACT.

The introduction of the Solvency II regulatory framework in 2011 and unprecedented property and casualty claims experienced in recent years by large insurance firms have motivated the adoption of risk-based capital allocation policies in the insurance sector. In this paper we present the key features of a dynamic stochastic program leading to an optimal asset-liability management and capital allocation strategy by a large P/C insurance company and describe how from such a formulation a specific, industry-relevant, stress-testing analysis can be derived. Throughout the article the investment manager of the insurance portfolio is regarded as the relevant decision maker: he faces exogenous constraints determined by the core insurance division and he is subject to the capital allocation policy decided by the management, consistently with the company’s risk exposure. A novel approach to stress-testing analysis by the insurance management, based on the recursive solution of a large-scale dynamic stochastic program, is presented.

09:45-11:00 Session 8B: Optimization in the Energy Sector
09:45
Transmission switching in electricity networks via nonlinear stochastic programming
SPEAKER: Francesco Piu
ABSTRACT.

Switching off selected transmission lines of an electricity network can lead to savings in the total production costs. This perhaps surprising fact has gained increasing interest in the recent past, as the overall profitability of a given network can be increased. Energy is often produced in different places than in the past, for example in off-shore wind parks as in Germany, Denmark and many other countries. This situation offers the opportunity to re-design the existing power flow network and to incorporate switching possibilities into it. The central problem consists in identifying the transmission lines that provide the highest savings potential, while the power supply has to remain secure in the whole area at the same time. This paper employs stochastic programming to address the difficulties of the full problem. In particular, possibilities for reducing the problem to a tractable size are elaborated.

10:10
Optimal operation of medium-voltage AC networks with distributed generation and storage devices
ABSTRACT.

A medium-voltage AC network with distributed generation and storage devices is considered, for which set points are assigned in each time period of a given time horizon. A set point in a time period is defined by the voltage magnitudes and phases at all nodes, the active and reactive powers, the on-load tap changer positions and the variable loads. When some parameters vary, new set points need to be determined in order to restore feasibility, so as to minimize the variations with respect to the initial ones. This can be done by minimizing the distributor’s redispatching costs, which are modeled by means of binary variables, while satisfying service security requirements and ensuring service quality, which are represented by nonlinear constraints, such as the nodal balance of active and reactive power and the current flows on lines and transformers for security. Storage devices are modeled by means of constraints that relate adjacent time periods. A two-step solution procedure is proposed, based on decoupling active and reactive variables: in the first step a MILP model determines the active power production and the use of storage devices that minimize redispatching costs over all time periods in the time horizon; in the second step, given the optimal active power production computed in the first step, the reactive variables in each time period are computed by solving a nonlinear programming model.

10:35
A Leader-Followers Model of Power Transmission Capacity Expansion in a Market Driven Environment
ABSTRACT.

We introduce a model intended for the analysis of the upgrade of the national transmission grid which explicitly accounts for the responses given by the power generating companies in terms of generating unit expansion. The problem is modeled as a bilevel program with a mixed-integer structure in both the upper and the lower level. The upper level is defined by the transmission company's problem of deciding how to upgrade the network so as to avoid congestion. The lower level models the reactions of the generating companies, which decide on new facilities and power output, and of the market operator, which strikes a new balance between demand and supply, providing new Locational Marginal Prices. We illustrate our methodology by means of an example based on Garver's 6-bus network.

09:45-11:00 Session 8C: Social Networks
09:45
Sentiment Analysis in Social Networks: a Markov-based prediction model
ABSTRACT.

According to the definition reported in [1], sentiment “suggests a settled opinion reflective of one’s feelings”. The extraction of this subjective information from texts in natural language is crucial to create structured and actionable knowledge to be used by a decision maker [2]. The analysis of sentiment is particularly challenging when dealing with online social networks, where decisions (e.g. in business and government intelligence) should take into account the sentiment behind thousands of text messages posted by interconnected users. This work addresses the problem of classifying users' sentiment in a social network environment by proposing a probabilistic model that obeys the Markov assumption: the sentiment of a user is influenced by the sentiment of his text messages and the sentiment of the directly connected neighbours. The latent variables of the model, which weight the importance of text messages and neighbours, are estimated using a gradient-based approach [3] that maximizes the likelihood of the user sentiment labels. The experiments reveal that, thanks to the Markov property, the prediction of a user's sentiment affects all the other users through a “propagation” effect. We compare the results of the proposed model with traditional approaches that only consider text messages, disregarding connections among users. This investigation shows that the inclusion of relationships in the probabilistic model leads to accurate and robust sentiment prediction when dealing with networked environments.

[1] Pang, B., Lee, L.: Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval 2 (2008) 1–135.
[2] Pozzi, F.A., Fersini, E., Messina, E.: Bayesian model averaging and model selection for polarity classification. In: Proc. of the 18th International Conference on Application of Natural Language to Information Systems (2013) 189–200.
[3] Wick, M., Rohanimanesh, K., Culotta, A., McCallum, A.: SampleRank: Learning preferences from atomic gradients. In: NIPS Workshop on Advances in Ranking (2009).
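
A minimal numerical sketch of the neighbour-weighted propagation idea described in the abstract; the convex-combination update, the weights alpha and beta and the toy data are illustrative assumptions, not the authors' likelihood-based model:

    import numpy as np

    # Toy data: 4 users, average polarity of their own text messages in [-1, 1],
    # and an undirected friendship/adjacency matrix.
    msg_sentiment = np.array([0.8, 0.4, -0.6, -0.2])
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    alpha, beta = 0.6, 0.4    # illustrative weights: own messages vs. neighbours
    s = msg_sentiment.copy()  # initial user sentiment

    # Fixed-point iteration: each user's sentiment is a convex combination of the
    # sentiment of his/her messages and the mean sentiment of the neighbours.
    for _ in range(50):
        neigh_mean = A @ s / np.maximum(A.sum(axis=1), 1.0)
        s = alpha * msg_sentiment + beta * neigh_mean

    print("predicted user sentiment:", np.round(s, 3))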

10:10
Community detection in Social Networks
ABSTRACT.

One of the most important tasks in social network analysis concerns the discovery of homogeneous groups of users, or communities, sharing some common interests. This task can be viewed as an uncapacitated location problem, where users (nodes) have to be associated to communities (clusters). For solving this problem, two alternative mathematical programming formulations are traditionally provided: the first one aims at maximizing the similarity between nodes belonging to the same community, where the similarity is computed based on the text messages posted by the users [1]. The second formulation aims at maximizing the number of connections (friendship, approval) among users belonging to the same cluster [2, 3], regardless of the user-generated content. Detecting communities according to only one of these formulations could lead to biased communities. Indeed, similarities between users’ text messages can help to group users with few connections in the network, while connections can compensate when textual information is missing. Therefore, both paradigms should be taken into account, by considering social network communities as sets of nodes that are densely connected but which also share some common message content. In this work we propose a p-median formulation of this problem able to take into account both the contents of the messages posted by the users and the underlying social network structure. The proposed approach is validated on real data and compared against traditional baseline community detection methods.

[1] J. Liu. Comparative analysis for k-means algorithms in network community detection. In: Advances in Computation and Intelligence, pp. 158-169. Springer Berlin Heidelberg, 2010.
[2] S. Fortunato. Community detection in graphs. Physics Reports, 2010.
[3] J. Xie, S. Kelley, and B. K. Szymanski. Overlapping community detection in networks: the state of the art and comparative study. ACM Computing Surveys, 2013.
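
A minimal sketch of a p-median style formulation combining content similarity and structural connectivity, along the lines described above; the trade-off weight lam, the random data and the PuLP formulation are illustrative assumptions, not the authors' exact model:

    import numpy as np
    import pulp

    n, p, lam = 5, 2, 0.5                    # users, communities, content/structure trade-off
    rng = np.random.default_rng(0)
    text_sim = rng.random((n, n))            # similarity of user-generated content
    adj = (rng.random((n, n)) > 0.5).astype(float)   # friendship/approval links
    sim = lam * text_sim + (1 - lam) * adj   # combined similarity s_ij

    prob = pulp.LpProblem("community_p_median", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (range(n), range(n)), cat="Binary")  # i assigned to median j
    y = pulp.LpVariable.dicts("y", range(n), cat="Binary")              # j opened as a median

    prob += pulp.lpSum(float(sim[i, j]) * x[i][j] for i in range(n) for j in range(n))
    for i in range(n):
        prob += pulp.lpSum(x[i][j] for j in range(n)) == 1   # every user joins one community
        for j in range(n):
            prob += x[i][j] <= y[j]                          # only open medians can be used
    prob += pulp.lpSum(y[j] for j in range(n)) == p          # exactly p communities

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for i in range(n):
        print(i, "->", [j for j in range(n) if pulp.value(x[i][j]) > 0.5][0])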

10:35
Sales force allocation in pharmaceutical industry in presence of social contagion
SPEAKER: Enza Messina
ABSTRACT.

In this paper we deal with the influence of social contagion on the sales representatives allocation problem for a pharmaceutical company. First, we present a heuristic algorithm that defines the number of calls for each drug to each physician, taking into account a call budget, a set of specific response functions, and the network of possible relationships among physicians. Because social contagion is a phenomenon inherently affected by uncertainty, we develop a methodology for the evaluation of the sales force allocation strategy that takes into account the uncertainty and partial knowledge that may characterize the structure of the physician social network regulating the information flow. We also analyze the impact of this partial knowledge on the revenues generated by a call plan, thus enabling the measurement of the value of information.

09:45-11:00 Session 8D: Heuristics
09:45
Diversity-driven Hybrid Genetic Algorithm for a Course Timetabling Problem
SPEAKER: Ayla Gulcu
ABSTRACT.

A diversity-driven Genetic Algorithm (GA) for the course timetabling problem at Marmara University Faculty of Engineering is developed. The performance of the algorithm is tested on the International Timetabling Competition 2 curriculum track dataset. The initial population is generated by a backtracking-based constraint programming (BBCP) technique which guarantees feasibility. Two diversity measures are developed in order to measure the distance between two individuals. The crossover operator is selected empirically among a number of CX operators according to both diversity and quality. Variable neighborhood search is applied to improve the solution quality. The BBCP is able to find a feasible solution very quickly. For example, it finds a solution to instance 5, which is among the hardest instances, in 3 seconds on a Linux laptop with an i7 processor and 16 GB of RAM. At each step, the BBCP extends the partial solution by selecting an unassigned event and the best time and room value from the domain of that event. With the help of node consistency and filtering techniques, it is ensured that the domain of each event includes only consistent values. During the selection of the event to be assigned, one of the low-level ordering heuristics is used. A filtering technique is applied to remove the current time from the domain of other events whose assignment would cause inconsistency. When a dead end is reached, which means that the selected event has no value consistent with the existing assignments, backtracking is applied. Starting from the most recent assignment, each assignment and the related filtering are undone until a consistent value is found. After assigning the current event, all the undone assignments that did not cause the dead end are re-done if they are still consistent. Cycling is avoided by keeping a tabu list for each undone event.
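
A compact sketch of the backtracking-with-filtering idea behind the BBCP, reduced to forward checking on toy data; the full undo/redo mechanism and the tabu list described above are omitted:

    import copy

    # Each event needs a (timeslot, room) pair; two events listed in 'conflicts'
    # (e.g. same curriculum or teacher) may not share a timeslot.
    events = ["e1", "e2", "e3"]
    conflicts = {("e1", "e2"), ("e2", "e3")}
    domains = {e: [(t, r) for t in range(3) for r in range(2)] for e in events}

    def conflict(a, b):
        return (a, b) in conflicts or (b, a) in conflicts

    def backtrack(assignment, domains):
        if len(assignment) == len(events):
            return assignment
        # ordering heuristic: the unassigned event with the smallest domain
        e = min((ev for ev in events if ev not in assignment), key=lambda ev: len(domains[ev]))
        for (t, r) in domains[e]:
            new_dom = copy.deepcopy(domains)
            new_dom[e] = [(t, r)]
            # filtering: drop values that would clash with this assignment
            for other in events:
                if other in assignment or other == e:
                    continue
                new_dom[other] = [(t2, r2) for (t2, r2) in new_dom[other]
                                  if not (t2 == t and (r2 == r or conflict(e, other)))]
                if not new_dom[other]:
                    break          # dead end: try the next value for e
            else:
                result = backtrack({**assignment, e: (t, r)}, new_dom)
                if result is not None:
                    return result
        return None                # no consistent value: backtrack

    print(backtrack({}, domains))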

10:10
The Broadening Future of Hyper-heuristic Interfaces
ABSTRACT.

Hyper-heuristics provide a general-purpose software component to help reduce the implementation time needed for effective search methods. However, hyper-heuristic studies have generally used a framework with an overly limited communication between the high-level search control and the low-level domains. In particular, the hyper-heuristic received no information other than the current value of the objective function. We discuss various ideas for enriching the interface to allow a greater information flow from the low-level domain so as to allow better search control. The key challenge is to enable increased information flow while keeping the hyper-heuristic able to work without specific domain knowledge. These ideas include approaches in which the hyper-heuristic has information about features of the instances being solved, including, as a simple example, an indicator of the problem size. We report progress on converting these ideas into a set of implemented APIs and benchmark problems. The dual aim is to support both future research in hyper-heuristics and usage in specific problem domains.

10:35
The Second Cross-domain Heuristic Search Challenge (CHeSC 2014)
SPEAKER: Ender Ozcan
ABSTRACT.

The Second Cross-domain Heuristic Search Challenge (CHeSC 2014) seeks to bring together researchers and practitioners from operational research, computer science and artificial intelligence who are interested in developing more general and reusable methods for automated search control. Designing a single high-level search strategy (e.g. a hyper-heuristic) that can operate across different domains and control a set of low-level heuristics provided for each domain is a challenging task. This competition encourages the extension of high-level search strategies so as to treat different instances from different domains collectively as a batch. We expect that a good hyper-heuristic in this competition will exhibit two new capabilities: firstly, “effort balancing”, that is, the better division of computational effort between the instances; secondly, “inter-instance learning”, that is, learning from the earlier instances in order to perform better on the later ones. These aspects were not tested in the previous competition, CHeSC 2011. We will report the organisation and outcomes of the challenge and lessons learned for the future of hyper-heuristics. We expect that the competition will form the basis of a future benchmark for performance comparison of hyper-heuristics.

11:30-12:45 Session 9A: Stochastic Programming & Energy
11:30
A Multi-Period Stochastic Equilibrium Model for Global Energy Markets
SPEAKER: Zhonghua Su
ABSTRACT.

We present a multi-period stochastic equilibrium model for global energy markets that captures everyday economic operations, infrastructure investments, fuel substitution, CO2 taxes and market power in a unified framework. It includes players along the supply chains of various fuels, covering production, trade, storage, transformation, transmission, consumption and emission permit auctions.

We take advantage of multi-horizon scenario trees, which require fewer scenarios to represent the uncertainties than traditional scenario trees. Specifically, multi-horizon scenario trees classify the uncertainties into long-term and short-term uncertainties, both of which affect strategic investment decisions (e.g., production capacity expansion) and operational decisions (e.g., natural gas exports). In this paper, we suppose that the long-term uncertainties are deterministic. Further, we suppose that all players have symmetric information about the scenarios and that upstream producers are Cournot players. By solving this one-level game model, equilibria contingent on the scenarios are reached.

We use GAMS to model the problem and PATH to solve it. Finally, a case study of Chinese CO2 cap policy is discussed.

11:55
Optimal scheduling of Demand Side Flexible Units under Uncertainty in SmartGrids
ABSTRACT.

In this paper we propose a general stochastic mixed-integer program to support decisions for the short-term scheduling problem of end-users with flexible energy resources. We define a concept of internal energy systems with converter, storage and load units, and propose a set of standard classes for load units reflecting their flexibility properties, including both technical and economic constraints. Since in real life not all information is known at the time of decision, we propose a stochastic programming approach where uncertain parameters are modeled in scenarios and clustered into a scenario tree representing the information revelation process. The model is demonstrated in a case study for a Norwegian university college building. Load forecasts are modelled based on multiple linear regressions with exogenous explanatory variables. The scheduling decision process is represented by two approaches: 1) a deterministic approach (the expected value solution), where the load is assumed to be known with certainty, and 2) a stochastic approach (the two-stage recourse problem), where the day is divided into two stages and the load is known with certainty for the first 12 hours. The load for the last 12 hours is represented by 7 scenarios generated from the load forecasts and the residuals. The results show a cost saving potential even in the existing Norwegian electricity market. There are three sources of cost savings: 1) electricity price variations during the day, 2) price differences between electricity and oil and 3) reduction in peak electricity. The case study also shows a positive value of the stochastic solution, due to a minimum running time constraint for the converter units generating hot water from electricity or oil, respectively.
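
In standard notation, the two-stage recourse problem solved here has the generic form (a sketch of the structure only, not the paper's exact model)

    \min_x \; c^\top x + \sum_{s=1}^{7} p_s\, q_s^\top y_s \quad \text{s.t.}\quad Ax = b,\quad T_s x + W y_s = h_s,\quad x,\, y_s \ge 0,

where x collects the first-stage scheduling decisions for the first 12 hours, y_s the recourse decisions under load scenario s, and p_s the scenario probabilities. The value of the stochastic solution reported in the case study is VSS = EEV - RP, i.e. the expected cost of the deterministic (expected value) solution evaluated over the scenarios minus the optimal cost of the recourse problem.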

12:20
Integrating energy system models in a complementarity setting
ABSTRACT.

Bottom up energy system models are often solved using linear programming techniques in order to minimize the costs of producing the projected energy demand. It is desirable to complement such models with top-down Computable General Equilibrium models. These are usually formulated and solved as Mixed Complementarity problems.

The complementarity format has advantages over linear programming: 1) It provides both primal and dual variables in one model and allows relationships between them to be formulated. 2) It allows the inclusion of multiple agents that individually optimize their behavior towards their own goals.

One example illustrating the first advantage is the PIES model, where the dual price variables affect the primal demand. The PIES model was previously solved using iterations, but can now be solved directly by Mixed Complementarity solvers like PATH or MILES.

The second advantage is central in CGE models, which include different agents such as households, firms and the government.

The complementarity format also has disadvantages: 1) Although an equilibrium solution is known to exist, there is not a proven solution method that always will find such a solution. 2) Uniqueness of an equilibrium solution can in general not be guaranteed.

CGE models are exposed to these disadvantages already. By reformulating a linear programming energy system model like the TIMES model as a complementarity model, it could be integrated into a CGE model and solved directly. This is done through the associated Karush-Kuhn-Tucker conditions. Developing such an integrated model would be very important, providing equilibrium solutions towards which solutions from other heuristic approaches could be measured. Empirical models may still be intractable to solve in an integrated manner, but different linked approaches could be evaluated against an integrated approach.
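
As an illustration of the KKT-based reformulation mentioned above: a linear cost-minimizing energy system model and its equivalent mixed complementarity problem (a generic sketch in LaTeX notation, not the TIMES model itself) read

    \min_x \; c^\top x \quad \text{s.t.}\quad Ax \ge d,\; x \ge 0
    \qquad\Longleftrightarrow\qquad
    0 \le x \perp c - A^\top y \ge 0, \quad 0 \le y \perp Ax - d \ge 0,

where x are the primal quantities and y the dual prices. The complementarity form on the right is the format accepted by solvers such as PATH or MILES and can be merged with the equilibrium conditions of a CGE model.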

11:30-12:45 Session 9B: Financial Optimization
11:30
No-Arbitrage ROM Simulation
ABSTRACT.

Ledermann et al. (2011) propose random orthogonal matrix (ROM) simulation for generating multivariate samples matching means and covariances exactly. Its computational efficiency compared to standard Monte Carlo methods makes it an interesting alternative. In this paper we enhance this method's attractiveness by focusing on applications in finance. Many financial applications require simulated asset returns to be free of arbitrage opportunities. We analytically derive no-arbitrage bounds for expected excess returns to be used in the context of ROM simulation, and we establish the theoretical relation between the number of states (i.e., the sample size) and the size of (no-)arbitrage regions. Based on these results, we present a No-Arbitrage ROM simulation algorithm, which generates arbitrage-free random samples by purposefully rotating a simplex. Hence, the proposed algorithm completely avoids any need for checking samples for arbitrage. Compared to the alternative of (potentially frequent) re-sampling followed by arbitrage checks, it is considerably more efficient. As a by-product, we provide interesting geometrical insights into affine transformations associated with the No-Arbitrage ROM simulation algorithm.
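
A minimal numpy sketch of the exact moment-matching idea behind ROM simulation: a zero-mean, whitened deviation matrix is rotated by a random orthogonal matrix, so every sample reproduces the target mean and covariance exactly. This is only a simplified illustration; it implements neither the L-matrix construction of Ledermann et al. nor the no-arbitrage rotation of the simplex proposed in the paper:

    import numpy as np

    def rom_style_sample(mu, sigma, n, rng):
        """One sample of n states whose sample mean and covariance equal mu and sigma exactly."""
        d = len(mu)
        z0 = rng.standard_normal((n, d))
        z0 -= z0.mean(axis=0)                        # zero column means
        lc = np.linalg.cholesky(z0.T @ z0 / (n - 1))
        z = np.linalg.solve(lc, z0.T).T              # whitened: z.T @ z / (n - 1) = I
        q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix
        return mu + z @ q @ np.linalg.cholesky(sigma).T

    rng = np.random.default_rng(0)
    mu = np.array([0.05, 0.03, 0.07])
    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])
    x = rom_style_sample(mu, sigma, n=100, rng=rng)
    print(np.allclose(x.mean(axis=0), mu), np.allclose(np.cov(x, rowvar=False), sigma))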

11:55
Financial Optimization Modeling using R
ABSTRACT.

Simplifying the task of modeling optimization problems is important. Many commercial products have been created to support the optimization modeling process, but none of them has been adopted by a significantly large number of users. As soon as real-world decision problems under uncertainty have to be modeled, flexible and quick changes to the underlying model are necessary. Simplifications are crucial to embed such optimization models successfully in business processes. Examples from portfolio optimization will be shown to substantiate the proposed modeling environment.

12:20
Recent Computational Trends and Opportunities in Equity Portfolio Optimization
ABSTRACT.

Portfolio managers have traditionally attempted to maximize alpha, the portfolio outperformance relative to a benchmark. The financial crisis of 2007-2008, however, has increased interest in smart beta, risk-based and risk parity allocation strategies. This talk discusses computational challenges and opportunities associated with the implementation of such strategies.

11:30-12:45 Session 9C: Linear Algebra for Large-Scale Optimization Problems
11:30
Preconditioning IPMs for block-angular problems with "almost linearly dependent" constraints
ABSTRACT.

One of the most efficient interior-point methods for some classes of primal block-angular problems solves the normal equations by a combination of Cholesky factorizations and the preconditioned conjugate gradient method for, respectively, the block and linking constraints. We show that the principal angles between the subspaces generated by the diagonal blocks and the linking constraints play an important role in the proper choice of the preconditioner. Based on these considerations, we propose an easily computable preconditioner which turns out to be particularly effective when the principal angles between the subspaces generated by the diagonal blocks and the linking constraints are small, that is to say, when the linking constraints are almost linearly dependent on the block constraints. This preconditioner turns out to be complementary to the currently available power series preconditioner for block-angular problems: the smaller the principal angles, the more the power series preconditioner is outperformed by the new preconditioner. Computational results will be presented.

11:55
Adaptive Discontinuous Galerkin Approximation of Optimal Control Problems Governed by Transient Convection-Diffusion Equations
ABSTRACT.

This talk will focus on the numerical solution of unsteady optimal control problems governed by convection-diffusion partial differential equations (PDEs). When convection dominates diffusion, the solutions of these PDEs can exhibit layers, i.e., small regions where the solution has large gradients. To avoid spurious oscillations emerging from the layers, we use adaptive mesh refinement. The symmetric interior penalty Galerkin (SIPG) method with upwinding for the convection term is used for space discretization, whereas backward Euler is used for time discretization. We derive a posteriori error estimates for the state, the adjoint and the control variables. The arising saddle point system is solved using a suitable preconditioner. Numerical examples are presented for convection-dominated problems to illustrate the effectiveness of the adaptivity.

12:20
Preconditioning techniques for large scale optimization problems
SPEAKER: Tyrone Rees
ABSTRACT.

The interior point method has emerged as one of the most popular methods for solving quadratic programming problems. At each iteration of this method one has to solve a large linear system, and this step is the computational bottleneck in modern interior point solvers. For large-scale optimization problems it is not feasible to factorize this matrix, and so we must turn to iterative methods.

Krylov subspace methods are, in general, the most efficient iterative methods known, but they have to be applied with a preconditioner in order to be effective. The optimization community has favoured constraint preconditioners to date for solving the augmented system, as it can be shown that the approximate solutions obtained satisfy the constraints exactly, and hence convergence of an interior point method can be proved. In this talk I will describe a method which allows us to use other preconditioners to solve this problem while maintaining convergence of the outer iterative method. I will demonstrate my findings with numerical results from the CUTEst test collection.
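
For reference, the augmented system arising at each interior point iteration and a constraint preconditioner for it have the generic form (standard notation, not tied to a particular solver)

    \begin{pmatrix} H & A^\top \\ A & 0 \end{pmatrix}
    \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} =
    \begin{pmatrix} r_1 \\ r_2 \end{pmatrix},
    \qquad
    P = \begin{pmatrix} G & A^\top \\ A & 0 \end{pmatrix},

where G is a cheap approximation of the (regularized) Hessian H. Because P reproduces the constraint blocks exactly, the preconditioned iterates satisfy A \Delta x = r_2 throughout, which is the property referred to above.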

11:30-12:45 Session 9D: Numerical Algorithms II
Chair: Jan Fiala
11:30
Fast Solvers for PDE-Constrained Optimization Problems
SPEAKER: John Pearson
ABSTRACT.

An active area of research within the fields of numerical linear algebra and continuous optimization is that of developing effective solution strategies for PDE-constrained optimization problems. In this talk we present a framework for constructing preconditioned iterative methods for the matrix systems arising from such problems. We do this by exploiting the saddle point structure of the matrices involved, and using this to derive good approximations of the (1,1)-block and Schur complement. We consider PDE-constrained optimization problems of both time-independent and time-dependent form, as well as nonlinear examples. For each problem that we examine, we motivate and derive our suggested preconditioners, and present numerical results to demonstrate the performance of our methods.
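
A generic instance of the construction described above, in LaTeX notation: for a saddle point matrix, the block preconditioners take the form

    \mathcal{A} = \begin{pmatrix} A & B^\top \\ B & 0 \end{pmatrix},
    \qquad
    \mathcal{P} = \begin{pmatrix} \widehat{A} & 0 \\ 0 & \widehat{S} \end{pmatrix}
    \;\text{or}\;
    \begin{pmatrix} \widehat{A} & 0 \\ B & -\widehat{S} \end{pmatrix},

with \widehat{A} \approx A approximating the (1,1)-block and \widehat{S} \approx B A^{-1} B^\top approximating the Schur complement; the quality of these two approximations drives the convergence of the preconditioned Krylov method.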

11:55
PSMG: A parallel problem generator for a structure conveying modelling language for mathematical programming
SPEAKER: Feng Qiang
ABSTRACT.

SML is a structure conveying modelling language that enables the modeller to express problem structure. SML allows the modeller to construct models from sub-models and also facilitates the formulation of stochastic programming problems with recourse. In this talk, we briefly introduce the syntax of SML and then present PSMG (Parallel Structured Model Generator), a parallel implementation of SML. PSMG analyses the structure of an optimization problem given as an SML model file and uses this information to parallelise the model generation process itself. As far as we are aware, PSMG is the only algebraic modelling language that can perform parallel problem generation. PSMG offers an interface that can be linked in parallel with many different categories of structure-exploiting optimization solvers, such as interior point or decomposition-based solvers. One of the features of this interface is that the decision on how to distribute problem parts to processors can be delegated to the solver, thus enabling better data locality and load balancing. We also demonstrate the use of PSMG by modelling and solving a nonlinear optimization problem, the Security Constrained Optimal Power Flow problem. We have chosen OOPS, a parallel interior point optimization solver, to link with PSMG. The results show that PSMG achieves good parallel efficiency. They also show that the exploitation of parallelism enables the generation of problems that cannot be processed on a single node due to memory restrictions.

12:20
Nonlinear semidefinite optimization at NAG
SPEAKER: Jan Fiala
ABSTRACT.

Problem formulations which lead to nonlinear semidefinite optimization are not yet very common. They were, until recently, considered numerically unsolvable, and a lack of available software meant that researchers tended to avoid them. This talk introduces a new solver provided by NAG which can handle nonlinear optimization, linear and nonlinear semidefinite optimization, and any combination of these. A variant of the solver written in Matlab is also available as an open-source software package to support further development in this field.

The first part of the talk will be devoted to the description of the algorithm which is based on a generalized augmented Lagrangian method and on a suitable choice of nonlinear rescaling functions. Some of the applications, such as a nearest correlation matrix problem, will be presented in the second part.
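
As a concrete example of the problem class mentioned above, the nearest correlation matrix problem can be written in its basic form as

    \min_{X} \; \tfrac{1}{2}\, \| X - C \|_F^2
    \quad \text{s.t.}\quad X_{ii} = 1,\; i = 1, \dots, n, \quad X \succeq 0,

where C is an empirical (possibly indefinite) correlation estimate; weighted norms or rank constraints turn this into a genuinely nonlinear semidefinite program.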

14:15-15:30 Session 10A: Robust Optimization
14:15
Robust network design with uncertain outsourcing cost
SPEAKER: Michael Poss
ABSTRACT.

The expansion of a telecommunications network faces two sources of uncertainty: the total amount of traffic that shall transit through the expanded network and the outsourcing cost that the network operator will have to pay for unmet demands. The latter is specified by the future service level agreements, whose exact terms are unknown at the time the expansion is planned. Unlike previous robust optimization works on the subject, we consider in this paper both sources of uncertainty. The resulting linear program exhibits a constraint with quadratic dependency on the uncertainties. We propose a decomposition approach that avoids considering the constraint for all scenarios. Instead, we use a cutting plane algorithm that generates the required scenarios on the fly by solving linear multiplicative programs. Computational experiments performed on networks from SNDlib show that our approach is orders of magnitude faster than the classical semi-definite programming reformulation for such problems.

14:40
Multistage Facility Location and Inventory Problem
ABSTRACT.

This paper considers the integrated facility location and inventory problem with uncertain customer demand. The total expected cost, consisting of transportation and inventory costs as well as the fixed location cost, is minimized. We study robust approximations to the problem in order to incorporate information about the random demand distribution in the best possible, computationally tractable way. Finally, we present numerical experiments that illustrate the performance of the different robust formulations.

15:05
Robustness Criteria and Robust Topology Optimization with Uncertain Loads
ABSTRACT.

We propose a new algorithm for the solution of the robust multiple-load topology optimization problem. The algorithm can be applied to any type of problem, e.g., truss topology, variable thickness sheet or free material optimization. We assume that the given loads are uncertain and can be subject to small random perturbations. Furthermore, we define a rigorous measure of robustness of the given design with respect to these perturbations. To implement the algorithm, the users only need software to solve their standard multiple-load problem. Additionally, they have to solve a few small-dimensional eigenvalue problems. Numerical examples demonstrate the efficiency of our approach.

14:15-15:30 Session 10B: Combinatorial Optimization II
14:15
A Column Generation Approach for Bus Driver Rostering Problems
SPEAKER: Leena Suhl
ABSTRACT.

The crew rostering problem arises in public transport bus companies and addresses the task of assigning a given set of anonymous duties and some other activities, such as standbys and days off, to drivers or groups of drivers, without violating any of the complex labor union rules. Additionally, the preferences of drivers are considered during the assignment. The plan generated for each driver or group of drivers is called a roster. Optimal rosters are characterized by maximum satisfaction of drivers and minimal operational costs. In order to generate a personalized roster for each driver or group of drivers, the problem is formulated in this paper as a multi-commodity network flow problem. In each network layer, a roster is generated for each driver or driver group. The network model is very flexible and can accommodate a variety of constraints. Additionally, with a minor modification, the network can formulate both the cyclic and the non-cyclic crew rostering problem. A column generation approach is applied in this work to solve both crew rostering problems for diversified real-world instances. Moreover, the obtained results are compared to those obtained with MIP models.

14:40
Solving large-sized quadratic assignment problems by neural networks
ABSTRACT.

Assignment problems are a well-studied topic in combinatorial optimization. These kinds of problems find numerous applications in production planning, telecommunication, VLSI design, economics, etc. The Quadratic Assignment Problem (QAP) was introduced by Koopmans and Beckmann in 1957 as a mathematical model for the location of a set of indivisible economical activities. The problem consists of allocating a set of facilities to a set of locations, taking into account the cost of the distance and flow between facilities and the cost of installing a facility at a certain location. Therefore, the problem is to assign all the facilities to the locations with the objective of minimizing the total cost. The QAP is NP-hard and one of the fundamental combinatorial optimization problems in the area of facility location. The problem has been solved by many different techniques, but no exact algorithm is known for solving large-sized instances of the QAP in reasonable computational time. In this work, neural networks are proposed for solving large-sized instances of the QAP. Preliminary results show that neural networks are capable of providing good solutions in low computational time.
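
In the Koopmans-Beckmann form referred to above, the QAP reads

    \min_{\pi \in S_n} \; \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij}\, d_{\pi(i)\pi(j)} \;+\; \sum_{i=1}^{n} c_{i\pi(i)},

where f_{ij} is the flow between facilities i and j, d_{kl} the distance between locations k and l, c_{ik} the cost of installing facility i at location k, and \pi the assignment of facilities to locations.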

15:05
Two Exact Algorithms for the Hamiltonian p-Median Problem
SPEAKER: Gunes Erdogan
ABSTRACT.

This study provides two new exact solution algorithms for the Hamiltonian p-Median Problem (HpMP). The HpMP is the problem of finding p cycles on a given graph that cover every vertex with minimum total cost. The first solution method is a branch-and-cut algorithm based on an enhanced p-median formulation with reduced symmetry. For the second method, a polynomial time column generation scheme based on fractional programming is presented, and the resulting branch-and-price-and-cut algorithm is described in detail. Computational results show that the branch-and-cut algorithm outperforms the existing exact solution methods. Preliminary results for the branch-and-price-and-cut algorithm are discussed.

14:15-15:30 Session 10C: Machine Learning
14:15
Spectral analysis and machine learning in water distribution networks
SPEAKER: unknown
ABSTRACT.

Leaks in urban water distribution networks (WDN) lead to large amounts of Non-Revenue Water (NRW) as well as higher energy and rehabilitation costs. To obtain a smarter network management, a strategy based on the combination of hydraulic simulation and machine learning is proposed in this paper. Hydraulic simulation is used to run different leakage scenarios by introducing a leak on each pipe, in turn, and varying its severity. For each scenario, the pressure and flow variations at the actual monitoring points in the WDN, with respect to the faultless model, are stored. These scenarios are the nodes of a graph, whose edges are weighted by a similarity measure between each pair of nodes, and which is then analysed in the eigen-subspace spanned by the most relevant eigenvectors of its normalized Laplacian matrix. This transformation from the physical space associated with the WDN to the eigenvector space allows for a computationally efficient application of the spectral clustering procedure. In this way, leaks generating similar effects, in terms of observable pressure and flow variations, are grouped together; each scenario is then labelled with the corresponding cluster to obtain a dataset on which a Support Vector Machine (SVM) is trained. In this way we encode the non-linear transformation from the input space (i.e., pressure and flow variations) to the feature space (i.e., the most relevant eigenvectors) and therefore to the corresponding scenario cluster. When an actual leak is detected, the variations in measured pressure and flow with respect to the faultless hydraulic model are given as input to the SVM, which provides the most probable cluster label. Leaky pipes belonging to that cluster are provided as the list of components to investigate for the leak. The approach has been validated on two WDNs: a Pressure Management Zone in Milan (Italy) and a District Metered Area in Timisoara (Romania).
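
A minimal sketch of the spectral-clustering-plus-SVM pipeline described above, using scikit-learn on synthetic deviation data; the Gaussian similarity, the number of clusters and the SVM settings are illustrative assumptions:

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # One row per simulated leakage scenario: pressure/flow deviations at the
    # monitoring points with respect to the faultless hydraulic model.
    X = rng.normal(size=(200, 12))

    # Graph of scenarios: Gaussian similarity between deviation signatures.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / d2.mean())

    # Spectral clustering on the precomputed affinity (normalized Laplacian inside).
    labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                                random_state=0).fit_predict(W)

    # SVM mapping observed deviations to the most probable scenario cluster.
    clf = SVC(kernel="rbf").fit(X, labels)
    observed = rng.normal(size=(1, 12))       # deviations measured for an actual leak
    print("most probable scenario cluster:", clf.predict(observed)[0])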

14:40
Enhancing teamwork by optimizing collaboration
ABSTRACT.

Assembling an effective team to perform a particular set of collaborative tasks requires an approach that goes beyond the analysis of individual skills and capabilities. Team members' ability to work together and communicate with each other is paramount in collaborative tasks. Thus, social relationships play an important role when teaming up people and are essential elements to add to decision making processes. In order to investigate this, we started off by exploring collaboration aspects that can be extracted from the network (an undirected weighted graph) built from a particularly popular but unexpected dataset: we use the social relationships and historical data of Marvel comic books to examine the problem of team formation. In this context, the aspect we would like to assess is how important the ability to work together is for superheroes, considering their power grids amended by relationships extracted from comic book records. The approaches we implemented automatically assemble a group of villains and a group of heroes, both considering the social links among team members and their personal attributes. In our experimentation we examine the performance of three heuristic strategies and report computational results on them: a genetic algorithm, a GRASP and an optimization heuristic tailored specifically to the problem. Our experimental results show that our algorithms produce meaningful and useful results, matching groups that occur in the Marvel world as well as interesting possibilities for new teams. We consider metrics related to the connectivity and collaboration of teams and discuss variations of the initial team formation problem. As a result, the experiments with this first dataset reinforce the relevance of considering social aspects when dealing with optimization decisions. Hence, our results corroborate the increasing necessity of enhancing optimization problems by introducing social network information whenever their descriptions are susceptible to people's interactions.

15:05
A heuristic for tuning the kernel in dynamic clustering
ABSTRACT.

Kernel methods are a class of methods for data analysis that generalize existing techniques by implicitly mapping the data into a high dimensional feature space. In this talk we focus on kernel clustering in a dynamic context where the groups in the cluster and the features evolve over time. As in any kernel method, in kernel clustering, the choice of the kernel is crucial for the results. We explore different definitions of the performance of a kernel in the context of dynamic clustering and develop a heuristic to tune the kernel in order to maximize such performance. Kernel models that are very flexible allow us to capture important information in the data, at the expense of a need to tune many parameters. When the number of parameters is large, it is difficult for traditional metaheuristics to find good solutions. Our algorithm takes advantage of the fact that complex kernel models can be seen as generalization of simpler ones, yielding a nested sequence of models of increasing complexity.

14:15-15:30 Session 10D: Humanitarian Logistics I
14:15
A review of management tools for humanitarian logistics
SPEAKER: Priyanka Roy
ABSTRACT.

Purpose: In recent years, a large number of humanitarian disasters have occurred, e.g., the 2010 floods in Pakistan, the 2011 earthquake in Japan and the 2013 earthquake in the Philippines. These events have captured international attention both in the media and in the disaster management research community. Aid is made available to the beneficiaries by relief organisations after a disaster strikes. This supply chain process has to be very rapid and efficient to minimize human casualties. An effective logistics system (covering demand forecasting, inventory management, material handling, warehousing, order processing, and distribution management) is very important to sustain a well-organised supply chain process. This paper surveys the rapidly expanding body of literature which contributes to understanding the design and operation of humanitarian logistics for disaster response operations.

Design/Methodology: A structured literature review is carried out: academic journal articles and conference papers which appeared in international journals from 1990 to 2012 are gathered and analysed so that the following two questions can be answered. (1) Which methods are the most popular? (2) Which problems attract the most attention? This time frame is chosen because technology, communication and GPS systems evolved rapidly during this period and the usage of these facilities could make a huge difference in relief logistics operations. The other reason for choosing this time frame is the greater availability of literature during this period.

Findings: The review finds that optimisation methods are the most popular in humanitarian logistics operations, followed by qualitative methods. Surprisingly, 35 articles deal with nonspecific disasters, while 17 articles address earthquake problems. Future work should engage clearly with current developments in disaster management to identify the relevant practical logistical problems during the design and implementation phases of decision support systems.

Originality/Value: Humanitarian logistics operations need more attention from academics in future research on logistical drivers.

Paper Type: Review paper

14:40
Mathematical methods in humanitarian logistics – the case of route planning under application of the heuristic “savings” after the typhoon Haiyan on the Philippines
ABSTRACT.

The presentation and publication deal with the opportunities of mathematical methods in humanitarian logistics. Unexpectedly hard, but not totally unforeseen, typhoon Haiyan made landfall in the Philippines in November 2013. Humanitarian aid began a short time after the landfall, and even the day before, when the weather forecast had anticipated the typhoon. First aid deliveries such as food, medicines and non-food relief items arrived a few days later at accessible seaports and airports in the Philippines, where possible near the main crisis region around Tacloban. How can mathematical heuristics and optimization methods help to distribute the relief items to the people in need? How can humanitarian logistics be organized to be efficient and effective at the same time? In the sense of humanitarian logistics, effective means acting fast, flexibly and reliably, and thereby achieving a high logistics service level. The term efficient focuses on costs; lowering logistics costs enables more money to be spent on relief items or other measures of humanitarian aid. Is it possible to integrate destroyed streets, bridges or other parts of the infrastructure into solutions and mathematical methods? This contribution to the session deals with these questions and tries to find answers. Using the example of typhoon Haiyan, the author demonstrates the application of the “savings” heuristic. Starting from a first but poor solution to the route planning problem, the solution is modified iteratively into one that can be characterized as effective and efficient. The people in need in the Philippines receive aid faster and at considerably lower cost than in the first solution. The method can be adjusted flexibly to new conditions, such as new delivery points, different amounts of relief items, available modes of transport, destroyed or re-established infrastructure, or new information about people in need. Moreover, it is comprehensible for the personnel in humanitarian aid and programmable for IT solutions.
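
A compact sketch of the savings heuristic on toy data; the coordinates, demands and vehicle capacity are illustrative and do not reproduce the Haiyan case:

    import math

    depot = (0.0, 0.0)                                      # port/airport where relief items arrive
    points = {1: (4, 1), 2: (5, 3), 3: (1, 6), 4: (6, 6)}   # locations of people in need
    demand = {1: 3, 2: 4, 3: 2, 4: 5}                       # truckloads of relief items required
    capacity = 8                                            # capacity of one vehicle

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d0 = {i: dist(depot, p) for i, p in points.items()}
    # saving of serving i and j on one tour instead of two separate out-and-back trips
    savings = sorted(((d0[i] + d0[j] - dist(points[i], points[j]), i, j)
                      for i in points for j in points if i < j), reverse=True)

    routes = {i: [i] for i in points}    # start with one out-and-back route per location
    for s, i, j in savings:
        ri, rj = routes[i], routes[j]
        if ri is rj or sum(demand[k] for k in ri + rj) > capacity:
            continue
        # merge only if i and j sit at the depot-adjacent ends of their routes
        if ri[-1] == i and rj[0] == j:
            merged = ri + rj
        elif rj[-1] == j and ri[0] == i:
            merged = rj + ri
        else:
            continue
        for k in merged:
            routes[k] = merged

    print(list({id(r): r for r in routes.values()}.values()))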

15:05
Coordinating static and dynamic flow models for a multi-criteria humanitarian aid distribution problem
ABSTRACT.

Natural and human-made disasters cause humanitarian crises all around the world. Recent disasters, such as the Haiti earthquake (2010), the Japan earthquake and subsequent tsunami (2011) or the Philippines earthquake (2013), among others, have highlighted the importance of Disaster and Emergency Management in order to alleviate the suffering of vulnerable people and save lives. This is a highly relevant area of interest nowadays, which is undergoing widespread development. In this context, Humanitarian Logistics plays a major role and a wide range of optimization models, among others, are being developed.

The problem addressed in this work concerns last-mile distribution in disaster relief operations. In particular, the problem consists of designing routes for vehicles among nodes that have an available quantity of goods or a demand for those goods, choosing the most adequate vehicle types and determining the flow of the aid. In order to do so, a lexicographical dynamic flow model that takes into account the decision maker’s preferences is presented, extending a previously introduced static flow model. The new model is validated in a realistic case study and a computational study is performed to compare both models, showing how they can be coordinated. The computational experiments suggest that the coordination of both models improves their overall performance significantly.

16:00-17:15 Session 11A: -
16:00-17:15 Session 11B: Financial Asset Allocation
16:00
Portfolio Optimization using Feature Selection
ABSTRACT.

It is well known that mean-variance portfolios perform poorly out-of-sample due to estimation problems, especially when the number of assets to be considered is very large. We propose hierarchical clustering to reduce the investment universe and apply classical asset allocation techniques to the selected assets. In addition to an in-depth analysis of the performance, we analyze the impact of the key parameters of the algorithm.
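
A minimal sketch of the universe reduction step; the correlation-distance linkage and the one-representative-per-cluster rule are illustrative assumptions, and the subsequent mean-variance optimization on the reduced universe is omitted:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=(500, 50))      # 500 days x 50 assets (synthetic)

    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(np.clip(0.5 * (1.0 - corr), 0.0, None))  # correlation distance
    np.fill_diagonal(dist, 0.0)

    Z = linkage(squareform(dist, checks=False), method="average")
    clusters = fcluster(Z, t=10, criterion="maxclust")      # reduce the universe to 10 clusters

    # one representative per cluster: here, the asset closest to its cluster's centre
    selected = []
    for c in np.unique(clusters):
        members = np.where(clusters == c)[0]
        centroid = returns[:, members].mean(axis=1)
        gaps = [np.linalg.norm(returns[:, m] - centroid) for m in members]
        selected.append(int(members[int(np.argmin(gaps))]))

    print("reduced investment universe:", sorted(selected))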

16:25
Dynamic portfolio optimization with transaction costs and state-dependent drift
SPEAKER: Rolf Poulsen
ABSTRACT.

We present an efficient numerical method to determine optimal portfolio strategies under time- and state-dependent drift and proportional transaction costs. This scenario arises when investors have behavioral biases or the actual drift is unknown and needs to be estimated. The numerical method solves dynamic optimal portfolio problems for time-horizons of up to 40 years. It is applied to measure the value of information and the loss from transaction costs using the indifference principle.

16:50
Time under the Water
SPEAKER: Omri Ross
ABSTRACT.

We explore the construction of a risk measure that focuses on the time a firm has spent below its highest water mark. We show that such a risk measure can be used for portfolio optimization and present empirical evidence of its usefulness.
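
A minimal sketch of how time under the water can be computed from a net asset value series; reporting the longest and the current spell below the running high-water mark is an illustrative choice, not necessarily the risk measure proposed in the talk:

    import numpy as np

    nav = np.array([100, 102, 101, 99, 103, 102, 100, 98, 101, 104, 103])
    high_water = np.maximum.accumulate(nav)
    under = nav < high_water              # True while below the high-water mark

    runs, current = [], 0                 # lengths of consecutive spells under water
    for u in under:
        current = current + 1 if u else 0
        if u:
            runs.append(current)

    max_tuw = max(runs) if runs else 0    # longest spell under water
    print("longest spell:", max_tuw, "periods; current spell:", current, "periods")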

16:00-17:15 Session 11C: Humanitarian Logistics II
16:00
A Wardrop equilibrium model for the bi-objective location of distribution centers in disaster relief
ABSTRACT.

In a post-disaster situation, the choice of locations for distribution centers (DCs) is an important decision faced by an NGO providing humanitarian aid. Usually, people from the affected population have to walk from their homes to a DC to obtain relief goods. The throughput of a DC is limited, not only by storage considerations, but also by staff requirements. In the case of congestion at a DC, only part of the overall demand can possibly be satisfied, which may motivate part of the people to go to more remote DCs that are not congested. This shows that the assumption of some facility location models that clients will always choose the nearest facility may not be realistic in a disaster relief situation.

We try to overcome this limitation by means of the theory of Wardrop equilibria. In our case, a Wardrop equilibrium is defined by the property that the gain a client can expect to receive at a chosen DC (which depends on how many other clients go there), reduced by the cost of traveling to this DC, is not worse than for alternative DCs. The Wardrop equilibrium can be computed as the solution of a convex mathematical program. We do this by means of the Frank-Wolfe algorithm.
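
A minimal illustration of the Frank-Wolfe pattern (linearize, solve a linear subproblem, take a convex combination) on a toy quadratic program over the simplex; the actual equilibrium model with DC choices is not reproduced here:

    import numpy as np

    # minimize f(x) = 0.5 * x' Q x - b' x over the probability simplex
    Q = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 3.0]])
    b = np.array([1.0, 0.5, 0.2])

    x = np.ones(3) / 3                    # feasible starting point
    for k in range(200):
        grad = Q @ x - b
        s = np.zeros(3)
        s[np.argmin(grad)] = 1.0          # linear subproblem: best vertex of the simplex
        gamma = 2.0 / (k + 2.0)           # standard step size
        x = (1 - gamma) * x + gamma * s   # convex combination keeps x feasible

    print(np.round(x, 4))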

On the upper decision level representing the viewpoint of the NGO, a bi-objective combinatorial optimization problem is solved, the objectives of which are cost and uncovered demand.

We illustrate the approach on real-world test instances from Senegal. The efficient solutions of our approach are compared to those obtained from a “nearest facility” model. It is shown that our more realistic prediction of client behavior allows for relevant improvements in solution quality.

16:25
Optimizing covering tours for the distribution of disaster relief items using competitive location models
SPEAKER: Pamela Nolz
ABSTRACT.

We consider the problem of designing the logistic system to assure adequate distribution of relief aid after a natural disaster. We face the situation, where the population members stay in the affected regions and need to be supplied with food and drinking water. These relief items are to be transported from a central depot to distribution centers, where they are handed out to the population in need. The problem is formulated as a multi-objective optimization problem, encompassing two objective functions of central interest in such problems. The underlying competitive Covering Tour Problem (CTP) aims at minimizing (i) total uncovered demand and (ii) total distribution costs.

We include modeling approaches for customer behavior as they have been developed in the field of competitive location models. In disaster relief, the provision of help by different organizations should not be competitive, so we face a different situation; but as the mentioned approaches possibly allow the behavior of the individuals affected by a disaster to be predicted in more detail, they may enable a more efficient organization of the relief logistics. Depending on the distribution centers made available to them, the population members will decide where to go (if anywhere), which is important information for health-providing organizations, so that supplies can be delivered to the right places in the right quantities.

We solve the multi-objective competitive CTP with the Nondominated Sorting Genetic Algorithm II (Deb et al., 2002) and compare the proposed solutions to the Pareto-optimal solutions generated with a Brute Force Complete Enumeration procedure. The suggested metaheuristic solution approach is tested on real-world data from the southern part of Mozambique near the river Limpopo, which is regularly affected by drought. The sets of solutions are evaluated using the hypervolume indicator as proposed by Zitzler and Thiele (1998).

References:

Deb K., Pratap A., Agarwal S., Meyarivan T. (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2), pp. 182-197.

Zitzler E., Thiele L. (1998) Multiobjective optimization using evolutionary algorithms: a comparative case study. In: Eiben A.E., Bäck T., Schoenauer M., Schwefel H.-P. (editors) PPSN 1998. LNCS, vol. 1498, pp. 292-301. Springer, Heidelberg.

16:50
A multiobjective integral approach for the humanitarian logistics problem. Application to a case study in Mexico.
SPEAKER: unknown
ABSTRACT.

Disasters are phenomena which strike countries around the world. This work introduces an integral proposal that considers distribution, evacuation, location of facilities and a prepositioned stock policy during floods under multiple criteria (equity, i.e., minimizing the maximum evacuation and distribution flow times, as well as total cost). The efficient frontier for the preparedness phase is built through the weighting and epsilon-constraint methods, and for the response phase through a metaheuristic based on tabu and scatter search. The usefulness of the model is validated through a Mexican case study and the analysis of different scenarios created from three key factors in humanitarian logistics.
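
For reference, the epsilon-constraint scheme used for the preparedness phase solves, for a grid of values \varepsilon,

    \min_{x \in X} \; f_1(x) \quad \text{s.t.}\quad f_2(x) \le \varepsilon,

and collects the resulting solutions to trace the efficient frontier; which objective plays the role of f_1 (e.g. total cost) and which enters the constraint (e.g. the maximum evacuation or distribution flow time) is a modelling choice.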

16:00-17:15 Session 11D: -